Slowly but surely, the Web Audio API creeps toward being something that’s usable in more than one browser at a time. In the meantime, we get a glimpse of how generative music could be a part of what’s to come. It’s a long way from those horrid, looping audio files that plagued the Web in its heady 1990s adolescence.
Today on Create Digital Motion, I look at the aesthetics of crowd-sourcing in work by Aaron Koblin and Chris Milk – and how the view of the significance of the crowd has changed over time. Substitute “music” for “motion,” and you’ll get a similar argument about what crowds might do with sound.
But it’s worth noting the musical elements that form part of that experience. The tools are high-level, but thanks to the Web Audio API and browser interactivity, it’s possible for users to shape the musical landscape that accompanies some of the animations. (You’ll only see the interface at the top if you click an animation that has music; the others lack the tool.) In the behind-the-scenes videos, some of Google’s (and digital media’s) smartest discuss how the plumbing fit in with the art.
Also this week, our friend TheAlphaNerd has been building tools for generating your own keyboards in browser windows. Here, the applications are broad – you could build interactive learning tools for music theory and tuning, for instance, or a means for forum participants to communicate ideas through musical sketches and not just text. All the code is open source, so it’s a great place to start learning about how this stuff is done, trying some handy libraries that make your life easier, and perhaps experimenting with what online interfaces could be.
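To give a flavor of what a browser keyboard involves under the hood, here’s a minimal sketch (my own illustration, not TheAlphaNerd’s code): the equal-temperament math that turns a MIDI note number into a frequency, a hypothetical mapping from a row of computer keys to notes, and a guarded snippet showing how a key press could drive a Web Audio oscillator in browsers that support it.

```javascript
// Convert a MIDI note number to a frequency in Hz, using equal temperament
// with A4 (MIDI note 69) tuned to 440 Hz.
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// A hypothetical mapping from one row of computer keys to a C-major octave
// starting at middle C (MIDI note 60).
const keyToNote = { a: 60, s: 62, d: 64, f: 65, g: 67, h: 69, j: 71, k: 72 };

// In a browser with Web Audio support, each key press could start a short
// oscillator blip. Guarded so the pitch math above also runs outside a browser.
if (typeof AudioContext !== "undefined" && typeof document !== "undefined") {
  const ctx = new AudioContext();
  document.addEventListener("keydown", (e) => {
    const note = keyToNote[e.key];
    if (note === undefined) return;
    const osc = ctx.createOscillator();
    osc.frequency.value = midiToFreq(note);
    osc.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 0.3); // release after 300 ms
  });
}
```

A real instrument would add envelopes and polyphony handling, but the core idea — key event in, frequency out, oscillator scheduled on the audio graph — is this small.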
And good things are coming. (So, if you can, dig in and help make this happen…)
It’s getting to be about time to do a full review of how HTML5 and the Web are getting on with sound, but that will have to wait for another day. In the meantime, if you’ve seen compelling examples – or have questions from a development or user perspective – let us know.