Alexander Chen transforms the steady pulse of the (actual) New York City subway system into gentle, generative string plucks in his new interactive piece “Conductor.” The visual effect is as mesmerizing as the musical one, as the subway is viewed in the abstract, sparse geometries of designer Massimo Vignelli’s 1972 diagram.

New York subway nerds and long-time residents will note that the schedule itself is from 1972 – hence the appearance of the K train and the elevated line along Third Avenue (the 8), one I imagine many of us wish we still had.

The work is also a glimpse of the Web as a canvas (figurative and literal) for this kind of work – your browser as your very own virtual chamber music setting. And it’s a window into some of the challenges (cough, buggy audio implementations!) to making that happen.

Built in HTML5’s Canvas element with SVG vector data and JavaScript, the application must rely on Flash as a back end for audio delivery, though via a very cool JavaScript tool, SoundManager (which can also use HTML5 audio as browser implementations improve). There’s also some use of open-source string-pluck samples, via the freesound project.

Important as the technical details are, though, I find what Alexander says about the inspiration for music made from subways to be the most compelling.

He shares with CDM some insight into the process, technical and artistic.

How did this project come about? What made you decide to translate subway schedules into music?

I’ve been kind of interested in turning everyday things into music. I did a project in 2003 called Sonata for the Unaware, where I used security-cam style footage of commuters and generated music from that.

This project sort of started last September when my friend David Lu and I were having a conversation about an idea he had for an illustrated string instrument, where drawn lines turn into plucked strings. This turned into a project (which is still in progress) called Crayong. So I had already written code for that. As a violist, I really wanted to duplicate the feel of grabbing and pulling a string – how there’s more tension near the pinned points.
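That feel – motion that falls off toward the pinned endpoints – can be approximated with the fundamental standing-wave mode of a string fixed at both ends. A minimal sketch of the idea (my own illustration, with invented names and constants, not Chen’s actual code):

```javascript
// Displacement of a plucked string fixed at x = 0 and x = length,
// modeled as the fundamental mode with exponential damping.
// Function name, parameters, and constants are illustrative assumptions.
function stringDisplacement(x, t, length, amplitude, frequency, damping) {
  // sin(pi * x / length) pins both endpoints at zero displacement,
  // so the string gives the least near its anchor points.
  const shape = Math.sin(Math.PI * x / length);
  // Oscillation over time, decaying as the pluck rings out.
  const motion = Math.cos(2 * Math.PI * frequency * t);
  const decay = Math.exp(-damping * t);
  return amplitude * shape * motion * decay;
}
```

Sampling this per animation frame along x and drawing the result as a Canvas path gives a simple vibrating-string line; the sin(πx/L) envelope is what keeps the drawn line anchored at the pinned points.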

Once I had that string code, I started brainstorming other things I could do with it. My wife and I started talking about a subway map that you could strum. My friend owns a print of the 1972 Vignelli map, which is really beautiful.

I liked the idea of the trains being the performers. And with all of the realtime location-sensitive information we can get now, I thought about a website that starts off feeling realtime, but then time starts unraveling.

A design artifact from another time, Massimo Vignelli’s landmark subway map design from 1972 remains in poor repair in a modern subway station here in New York. It almost looks like a graphical score – and now, with some creative code, it is. Photo (CC-BY) Michael Cory.

How was it put together? There are good notes on your site, but do you want to share any tips you learned in the process? You had to give up on HTML5 audio, it seems; was that in all browsers or just some of them? With Flash for sound and Canvas for visuals, it seems the results are at least largely compatible, yes?

I’m excited about HTML5. The graphics went pretty flawlessly, but unfortunately there definitely were limitations in the audio layering. There’s an in-depth post at my site:

Limitations of layering HTML5 Audio

I ran into problems layering multi-shot triggers of the same sample. It could layer a handful of sounds (it seemed to cap off around 8), but doing so increased load time unnecessarily. This was happening at least in Safari, where I could see the HTTP requests. I tried some workarounds, but every approach had its trade-offs.
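One common workaround for multi-shot triggering of a single sample – a sketch of the general technique, not Chen’s code – is to pre-create a small pool of players and cycle through them round-robin, so a rapid re-trigger layers a fresh element instead of restarting one that is still sounding:

```javascript
// Round-robin pool of pre-created players for one sample, so rapid
// re-triggers overlap instead of cutting each other off.
// createPlayer is injected to keep the pool logic self-contained;
// in a browser you would pass () => new Audio(url). Names are assumptions.
function makeTriggerPool(createPlayer, size) {
  const players = [];
  for (let i = 0; i < size; i++) players.push(createPlayer());
  let next = 0;
  return function trigger() {
    const player = players[next];
    next = (next + 1) % size;   // advance round-robin index
    player.currentTime = 0;     // rewind in case this element played before
    player.play();
    return player;
  };
}

// Browser usage sketch (hypothetical sample URL):
// const pluck = makeTriggerPool(() => new Audio('pluck.mp3'), 8);
// pluck(); pluck(); // two overlapping plucks
```

A pool of 8 lines up with the layering cap Chen observed; the trade-off is that the pool still costs one decoded element per slot up front.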

So all in all, I think Flash still performs better for the audio portion of these types of experiments. But I’m hoping that will change, as it would be nice to not rely on any plugins.

For projects where I am triggering, say, 30+ samples, I often compile them into one audio file and manually store the start times of each sample in the code. It seems to load faster overall, because each HTTP request has some overhead. (But I didn’t have to do that here, because I only had 20 notes.)
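That one-file-plus-stored-start-times approach is what’s often called an audio sprite. A minimal sketch of the idea (the note names and offsets below are invented placeholders, not the project’s actual data):

```javascript
// Audio sprite: one combined file, with hand-stored start times
// and durations per sample. These offsets are made-up examples.
const SPRITE = {
  c3: { start: 0.0, duration: 1.5 },
  e3: { start: 1.5, duration: 1.5 },
  g3: { start: 3.0, duration: 1.5 },
};

function spriteOffset(name) {
  const entry = SPRITE[name];
  if (!entry) throw new Error('unknown sample: ' + name);
  return entry;
}

// Playback sketch: seek into the combined file, play, then pause
// after the sample's duration so the next note doesn't bleed in.
// setTimer is injected (in a browser, pass setTimeout).
function playSprite(audio, name, setTimer) {
  const { start, duration } = spriteOffset(name);
  audio.currentTime = start;
  audio.play();
  setTimer(() => audio.pause(), duration * 1000);
}
```

In a browser you’d call something like `playSprite(new Audio('plucks.mp3'), 'e3', setTimeout)` – one HTTP request covers every note, trading a slightly larger initial download for far less per-request overhead.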

I also think it’s nice to work with technical limitations. For example, Flash has a limit on how many sounds can be layered simultaneously. Instead of attempting massive code fixes, I decided to simply use samples with shorter sustain. That’s why I ended up going with cello pizzicato instead of, say, a sustained harp. The samples are from the freesound project, recorded by user corsica_s.

Can you tell us a little bit about yourself?

About me – besides doing interactive work, I’ve released three albums as Boy in Static and one as The Consulate General. I’ve toured on-and-off for the past few years, usually performing on viola and vocals. I’m currently working at Google Creative Lab in New York.

Besides the various new art and technology projects I see every day, my wife and I recently found a DVD of Al Jarnow’s stop-motion animation from the ’80s. Incredible mathematical, grid-based animation experiments done by hand, frame by frame.
