Bloop HTML5 Instrument inspired by Brian Eno’s Bloom from Bocoup on Vimeo.
HTML5 and Javascript Synthesizer from Corban Brook on Vimeo.
Pioneers like Max Mathews and his Bell Labs team taught the computer to hum, sing, and speak before even primitive graphical user interfaces existed. So it’s fitting that the standards charting the Web’s future would again turn to the basics of electronic sound synthesis.
A group of intrepid hackers, Mozilla developers, and community leaders is working to make an audio API a standard part of this generation of Web browsers. (Note: not some unspecified future browser – they’re making it work right now.)
We’ve already seen some pretty amazing experiments with Flash and Java. This would go further, opening up buffer-level audio access to new, faster, just-in-time-compiled JavaScript engines. The upshot: you get to code your own synthesizers and real-time audio processing in a way that works right in any browser, on any platform. Standardize the API by which this works, and adding an FM synth to a page could be as easy as assembling a table or inserting a picture.
There’s no plug-in, and thanks to faster JavaScript engines, JavaScript can be the language. To the end user, you just get a Web page that automatically loads the audio goodness.
I’m in touch with the developers, and hope to have a full-blown Q&A session with them. On the agenda: what this is, what it means, how it works, how people can get involved, and how to get started with these early builds. I’m going to start out with some of my own thoughts, though, because I’ve found myself thinking about this a lot. I’ve been a slow convert to the gospel of the browser and JavaScript, but I’m beginning to “get” it, I think. (If I’m off-base or missing something, we’ll get to cover that, too.)
HTML5 3D FFT Visualization with CubicVR from Bocoup on Vimeo.
To understand why this is incredibly cool, though, I think it’s first necessary to understand how incredibly stupid, primitive, and backwards a Web browser is. (I just lost a bunch of Web developers. No offense – there’s a reason it’s that way – but follow with me.)
I’m serious. The Web concept was rooted in an age in which bandwidth and computing restrictions constrained online communication to text. But even as the Web was first catching on, computers themselves had rich multimedia capabilities far exceeding what the browser could do. Today, a lot of Web nuts talk about how the browser could replace desktop applications, or become an “operating system.” But the browser is just another application running on your hardware, on your operating system. The question you might well ask is, why is the browser so limited? Why can’t it do the things the rest of your computer can? The idea that it took until now to get a tag that specifies playing audio or video is kind of silly if you think of it that way, right? (You might ask the inverse question of the “desktop” apps: you do know you’re connected to the Internet, right?)
The idea of the audio API would be to change that, and not only play back sound files, but open up real-time synthesis and processing in standard, accessible-everywhere ways. You can, as you see in the (working, real, not-mock-up) examples, do all kinds of powerful magic. You can visualize music as you play sound files, or perform on instruments right from the browser window.
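The read side of this is what powers the visualizations in the videos: in the early builds described on the Mozilla wiki, an <audio> element fires events as it plays, handing you the raw sample frames. A minimal sketch, assuming the event name (MozAudioAvailable) and frameBuffer property documented for those nightly builds – both could change as the API evolves:

```javascript
// Raw samples arrive as a Float32Array per frame; computing an RMS level is
// one simple analysis you can run on each frame (an FFT or a Canvas drawing
// loop would slot in the same way).
function rmsLevel(samples) {
  var sum = 0;
  for (var i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];  // accumulate squared sample values
  }
  return Math.sqrt(sum / samples.length);
}

// Browser glue (nightly Firefox builds only; API names per the Mozilla wiki):
// var audio = document.getElementById('player');  // an <audio> tag on the page
// audio.addEventListener('MozAudioAvailable', function (event) {
//   drawMeter(rmsLevel(event.frameBuffer));  // drawMeter: your own Canvas routine
// }, false);
```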
It’s one thing to talk about some distant future. Fortunately, you don’t have to wait. The code is working right now. You can finish reading this post and then grab a nightly build of Firefox, write a few lines of JavaScript code, and build a synth in the browser.
“Because it’s there” is usually a good enough reason to start hacking. But to musicians, I think there are actual creative benefits, too.
Endless compatibility. The work the Mozilla crowd is doing is already free to download on Mac, Windows, and Linux, stripping platform barriers across desktops, laptops, and netbooks. We’ve heard a lot from certain Mac advocates in particular about how you can only have “first-class” applications if they’re built for a specific OS. That’s fine – depending on the application. But as an artist, at some point I also want some shared tools. If I want to collaborate with someone, those tools are what’s first-class to me. There’s nothing worse than saying “oh, uh, I guess you have a Mac and I have a PC, so we have to…” It’s creativity-killing. Having browser-based tools on par with the tools outside the browser means we can keep our idiosyncratic tools of choice, but also have a shared set of tools we can access without so much as running an installer, let alone worrying about an OS, processor, or version.
Connectivity and sharing. Being in the browser means instant access to a musical application from anywhere, and instant data for that application. Right now, part of the reason computer musicians have a stigma of staring at computer screens is because the user interfaces we design live on individual machines and are designed to be used only by one person at a time. The connectivity in the browser means it’s easier to build sharing and collaboration directly into a software idea.
Browsers could make your “desktop” apps cooler. One of the myths of browser-based applications I think is the idea that they’ll somehow replace other applications. On the contrary: they could make your existing applications smarter. Unrelated to this particular effort, our friend Andrew Turley built a proof-of-concept application that connects a Web browser as a controller to other apps over OSC. With a little refinement, a free local Web server combined with a browser-based controller app could connect all your traditional music apps to computers in the same room or across the world.
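Since a browser page can’t speak OSC directly (OSC typically rides on UDP), the pattern in that proof-of-concept is a small local relay: the page sends control changes over HTTP, and the relay re-emits them as real OSC messages to your music software. A sketch of the browser half – the endpoint URL and message format here are hypothetical placeholders, not Andrew’s actual implementation:

```javascript
// Build the plain-text control message the relay will parse into OSC,
// e.g. "/filter/cutoff 0.75" (hypothetical format).
function formatControl(address, value) {
  return address + ' ' + value;
}

// POST a control change to a local relay server (hypothetical endpoint),
// which would forward it as an OSC message over UDP.
function sendControl(address, value) {
  var request = new XMLHttpRequest();
  request.open('POST', 'http://localhost:8000/osc', true);
  request.send(formatControl(address, value));
}

// Wiring a slider on the page to a synth parameter:
// document.getElementById('cutoff').addEventListener('change', function () {
//   sendControl('/filter/cutoff', this.value / 127);
// }, false);
```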
In-browser Synthesizer and Sequencer with Envelope and Filter control from Corban Brook on Vimeo.
The power to make noise – any noise – and a tinkerer’s sunrise. Noise often appeals to hackers (even non-technologist hackers) more than anything else, and that should give you hope. One interpretation of current technology trends runs with the idea that tinkering is in danger, or even on the decline. I think we should be wary of some of those trends; some are simply anti-intellectualism in disguise. I also think tinkering with sound has a bright future. So long as there is raw buffer access somewhere, it’s possible to build something that makes sounds at a level as high as “give me a middle C” or as low as “I want to invent a new form of synthesis.”
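Those two levels sit one small formula apart. “Give me a middle C” is just arithmetic over raw buffers: MIDI note 60 is middle C, and equal temperament maps note numbers to frequencies (A4 = MIDI 69 = 440 Hz). A sketch showing both layers – the function names are my own, not part of any API:

```javascript
// High level: note number in, frequency out, via the standard
// equal-temperament formula f = 440 * 2^((n - 69) / 12).
function noteToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Low level: render the note into a raw sample buffer. The waveform is just
// a function of phase -- pass Math.sin for a sine, or any function you
// invent for a brand-new timbre.
function renderNote(note, length, sampleRate, waveform) {
  var freq = noteToFrequency(note);
  var out = new Float32Array(length);
  for (var i = 0; i < length; i++) {
    out[i] = waveform(2 * Math.PI * freq * i / sampleRate);
  }
  return out;
}
```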
This isn’t just for propellerhead types. With readable code, even those new to programming and sound have an opportunity to start toying with their own experiments. And unlike almost any other medium, sound is both immediate and always satisfying. That is, even if you make some sort of ugly splat, you may still have a good time. That quality makes it perfect for learning and experimentation, whether you’re young or old.
From Babel to common code languages. I’ll also go out on a limb and say there’s potential to get more tools speaking the same language. On the visual side, right now, you can directly copy code from Processing.js (where anyone can easily see it) to a Java-based desktop Processing (where you get higher performance, full-screen and multi-monitor display, hardware access, and the like), often without changing a line of code. The same could happen here. People are already porting Csound examples to this freshly-minted audio API.
Nihilogic’s HTML5 Audio-Data Visualizations from Bocoup on Vimeo.
Open standards, open 3D. By making a standard, too, we have a lingua franca both technologically and in how tools can run. If it were only audio, that’d already be useful. But this extends to other efforts, like the work on WebGL. And WebGL is a good indicator, too: by supporting OpenGL ES 2.0 in the browser, both the “native” or “desktop” app and the “browser” app can share code and capabilities. The same could begin to be true for audio.
Anyway, enough of my third-party sense of what this could mean. Here’s where to go learn more:
David Humphrey is a man you can thank for making this happen. Check out his blog, and read in particular:
Experiments with audio, part IX
May 12 in Boston, there’s a “future of Web audio” event introducing these ideas, if you’re in the area. I’ll see if we can’t get events elsewhere. (This would be ideal for another CDM online global hackday – more so than our previous topic.)
The big post to read:
Alistair MacDonald covers the thinking, the potential applications, the history, and what’s happening now:
Web Audio – All Aboard!
And see:
http://wiki.mozilla.org/Audio_Data_API
Alistair sums up why this is important:
A web browser that allows for such fine granular control over video graphics using tools like Canvas and WebGL, yet provides no equivalent control over audio data, is a web browser that is very lopsided. In human terms, web browsers have always been very lopsided. They reflect a specialized facet of ‘the human requirement’. This is unfortunate, as the web can potentially encompass a far more balanced and expressive set of features, encapsulating our humanity. Fortunately, the modern movement towards a more human browser appears to have gained significant velocity… in the right direction.
Or, if the Muppet Animal were writing this, I think that would go more like:
NOISE…. MAKE NOISE. LOUD NOISE. MAKE LOUD NOISE.
More HTML5 Goodness
On CDMotion, spectacular 3D graphics, even for the lazy, plus Processing.js resources.
And perhaps more generally useful – especially for working with the 1,000,000 iPads Apple has just sold – Chris Randall has a brilliant and detailed post on hacking the SoundCloud player so it works even when Flash isn’t installed.
Something Wicked This Way Comes…
Or, I should say, by “brilliant,” it points out just how screwed up that particular situation is. So, SoundCloud developers, go read that and report back, okay? (I’ll be in Berlin in three weeks. We can all get some coffees and put together a generic solution that works everywhere. How about that?)