Just days into Radiohead’s experiment with providing data and code for a visualized music video, fan responses are already starting to appear. I’m not sure just how much of Thom Yorke’s face people will want, but the first results do look impressive – and point to the talent and skill around the world, waiting to be discovered. If there’s any question about the merit of releasing the code and data as open source, this should answer it; it seems the video may well be more than just a gimmick.
Here’s a nice deconstruction below, found in a post at GreatDance’s “The Kinetic Interface” blog. (Could be a good blog to watch if, like me, you’re interested in the meeting place of dance and technology.) It’s the work of “j4mie” (Jamie Matthews), who has a couple of experiments going, with more at his personal site. I enjoy seeing these things come together.
I’m a huge fan of Processing, but there’s no reason you have to use that tool exclusively – data is data. Peter Eschler writes via CDM comments that he and Michael Zoellner have ported the data to the real-time X3D / instantreality platform, as a system of particles. That means, in short, you can put Thom’s face up on interactive walls and poke him in the cheek and make his face disintegrate. (And to think, some people doubted this would revolutionize the fan/artist relationship.)
They call the results, shooting and melting his face, “Atomizing Thom.” To translate the data, they had to write a quick Python script that reformats the CSV data into something X3D can work with. Full documentation is on Peter’s and Michael’s sites:
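Their script isn’t published in this post, but a conversion along those lines might look like the sketch below. Note the assumptions: I’m guessing at the CSV field layout (x, y, z, intensity per row) and at an X3D PointSet/Coordinate node as the target – this is an illustration, not Peter and Michael’s actual code.

```python
# Hedged sketch: turn House of Cards-style CSV rows into an X3D
# Coordinate node. Field layout and output node are assumptions.

def csv_to_x3d_points(csv_lines):
    """Collect x, y, z from each CSV row into an X3D PointSet fragment."""
    coords = []
    for line in csv_lines:
        line = line.strip()
        if not line:
            continue
        x, y, z = line.split(",")[:3]  # drop the intensity column
        coords.append("%s %s %s" % (x, y, z))
    return ('<PointSet>\n'
            '  <Coordinate point="%s"/>\n'
            '</PointSet>') % ", ".join(coords)

if __name__ == "__main__":
    sample = ["1.0,2.0,3.0,40", "4.5,5.5,6.5,12"]
    print(csv_to_x3d_points(sample))
```

From there, the fragment can be dropped into an X3D scene file for instantreality to render.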
I’ve been meaning to familiarize myself more with this platform, so perhaps this will provide an excuse. Here’s one sketch below:
Back to Processing, though: none other than original Processing co-creator Ben Fry weighs in with some thoughts on the project and the ins and outs of the code written by the music video’s Director of Technology, Aaron Koblin.
Parsing Numbers by the Bushel [writing | Ben Fry]
In that post, you’ll find Ben delving deep into the particulars of how the data gets parsed in Processing – very useful if you’re working on your own data visualization code. Here’s my short translation: you can cast an entire String array, not just an individual String. That comes up quite often, so if that doesn’t make sense, ye Processing coders, I may have some additional examples soon.
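The Processing idiom in question is passing a whole String[] to a conversion function – e.g. float(split(line, ",")) returns a float[] in one call, no loop needed. For readers outside Processing, here’s the same one-line pattern in plain Python (the sample values are made up):

```python
# One CSV row of point data (made-up sample values)
line = "0.5,1.25,-3.0,42"

# Convert every field in one pass -- the Python analogue of
# Processing's float(split(line, ",")), which casts a whole String[]
values = [float(s) for s in line.split(",")]

print(values)  # [0.5, 1.25, -3.0, 42.0]
```

The win in both languages is the same: one expression per row of data, instead of an inner loop casting field by field.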
I’ll be talking to Aaron later this week; stay tuned.
And if you work up a sketch with the House of Cards data — rough or polished — we’d love to have the scoop here on CDM, so let us know.
exiledsurfer points to Processing and OpenFrameworks templates for interpreting the Radiohead video code. The OFW code is only partially finished; to me, Processing should be easier to work with, but of course, if you’re already working in OFW, you may want to stick with that environment.