Video: “Bicycle Built for Two Thousand,” by Aaron Koblin, on Vimeo.
The song “Daisy Bell” has a special place in computer history. Max Mathews, who by the late 50s had pioneered digital synthesis on an IBM 704 mainframe, arranged the tune in 1961 for vocoder-derived vocal synthesis technology developed by John Larry Kelly, Jr. Kelly himself is better known for applying information theory to investing in the markets – an unfortunate achievement in the wake of a financial collapse brought on by the misuse of mathematical models.
In 1962, Arthur C. Clarke happened to hear the 704 singing the Mathews/Kelly “Daisy Bell,” and the rest is (fictional) history – the HAL 9000 computer in the book and film sings the song as he is being disconnected, as though he had learned it as a “child.”
Here’s Max himself (namesake for Max, the patching language), overseeing a rendition of his arrangement:
Today, basic vocal synthesis has become part of the fabric of taken-for-granted tech, and the legendary rendition by a singing robotic voice part of our culture. These things are no longer futuristic or strange. This week Apple even launched a music player, the new iPod shuffle, that announces its own tracks.
But what happens when those same human beings imitate the computer? That’s the question asked by artists Aaron Koblin and Daniel Massey, who crowdsourced human input by inviting thousands of participants to contribute their voices using custom recording software built in Processing. The basic technique is one Koblin has used before: his Sheep Market tapped an Internet labor market, workers paid two cents apiece on Amazon’s Mechanical Turk, to draw walls full of thousands of sheep. Those sheep proved at once massive in quantity and unique in individual character, and, if you squinted at them, presented a critique of global labor practice.
Koblin has also created several seminal pieces in the Processing programming language that change our perception of data and technology, like his now oft-cited “Flight Patterns,” which traces the paths of planes overhead.
This time, the computer/human relationship is truly inverted. Each participant imitates one sonic component of the robot’s singing; the human voices are then combined to synthesize the robot sound, rather than the other way around. The result: organic voices combined into a cyborg, online chorus. No single singer knows the whole of what they’re singing. It’s perhaps the first mass-human synthesis of sound, and the results are truly unusual.
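Out of curiosity about how that reassembly might work in practice, here’s a minimal Python sketch of the combining step, assuming the robot’s rendition has already been sliced into short, timestamped segments and each contributed recording has been matched to one of them. The file names, timings, sample rate, and simple averaging mix are illustrative assumptions, not the artists’ actual pipeline (their recording tool was built in Processing).

```python
# Hypothetical sketch of the "reverse synthesis" step: many short human
# recordings, each imitating one slice of the robot's rendition, are summed
# back into a single waveform. File names, segment timings, and the averaging
# mix are assumptions for illustration only.

import numpy as np
import soundfile as sf

# (start time in seconds, path to the contributed take) for each segment;
# in the real piece there are thousands of these.
segments = [
    (0.0, "take_0001.wav"),
    (0.5, "take_0002.wav"),
    (1.0, "take_0003.wav"),
]

SAMPLE_RATE = 44100   # assumes every take was recorded at this rate
TOTAL_SECONDS = 120   # assumed length of the full arrangement

mix = np.zeros(int(SAMPLE_RATE * TOTAL_SECONDS), dtype=np.float64)
counts = np.zeros_like(mix)  # how many voices overlap each sample

for start, path in segments:
    take, sr = sf.read(path)      # resampling omitted for brevity
    if take.ndim > 1:             # fold stereo takes down to mono
        take = take.mean(axis=1)
    begin = int(start * SAMPLE_RATE)
    end = min(begin + len(take), len(mix))
    mix[begin:end] += take[: end - begin]
    counts[begin:end] += 1

# Average overlapping voices so the chorus doesn't clip,
# then normalize the peak and write the combined result.
mix = np.where(counts > 0, mix / np.maximum(counts, 1), 0.0)
peak = np.max(np.abs(mix)) or 1.0
sf.write("combined_chorus.wav", mix / peak, SAMPLE_RATE)
```

Averaging overlapping takes rather than simply summing them is one way to keep thousands of voices from clipping while still preserving the sense of a massed chorus.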
And strange synthesis seems to be what Koblin’s work is fundamentally about. Perhaps it’s not Mathews’ sound experiments, but Kelly’s ideas about quantifying global markets that are most relevant. (For an extra dose of irony, Google HAL – you’ll get the stock ticker HAL, for Halliburton, one of the few stocks that has grown in this economy.) In our reality, the University of Illinois didn’t create a super-smart, spaceship-controlling robotic brain – but they did create the Web browser.
And, after all, we are all now living in the aftermath of crowds of people behaving collectively without genuine knowledge of the larger consequences of what they were doing. Robots were envisioned at the beginning of the 20th Century as out-of-control automatons crushing civilization, and were often appropriated as metaphors for fascist government. Now the vision can be equally apocalyptic, but the meaning is inverted. It’s human beings acting as automatons – without contact with human scale – that threaten to crush the Earth. And this time, they’re capitalists.
On the other hand, the beauty of art is its ability to mean many things at once. Koblin’s sheep and now his singers never cease to be whimsical. And in their beauty, they suggest that perhaps even massed crowds of Internet-connected people can sing in harmony.
For the future of humanity, I hope so. But then, if we fail, we’ll always have the robots.
“Just what do you think you’re doing, Dave?”