Bloop HTML5 Instrument inspired by Brian Eno’s Bloom from Bocoup on Vimeo.

HTML5 and Javascript Synthesizer from Corban Brook on Vimeo.

Pioneers like Max Mathews’ Bell Labs team taught the computer to hum, sing, and speak, before even the development of primitive graphical user interfaces. So it’s fitting that the standards that chart the Web’s future would again turn to the basics of electronic sound synthesis.

A group of intrepid hackers, Mozilla developers, and community leaders is working to make an audio API a standard part of this generation of Web browsers. (Note: not some unspecified future browsers – they’re making it work right now.)

We’ve already seen some pretty amazing experiments with Flash and Java. This would go further, opening buffer-level access to new, faster, just-in-time compiled JavaScript engines. The upshot: you get to code your own synthesizers and real-time audio processing in a way that works right in any browser, on any platform. Standardize the API by which this works, and adding an FM synth to a page could be as easy as assembling a table or inserting a picture.

There’s no plug-in, and thanks to faster JavaScript engines, JavaScript can be the language. To the end user, you just get a Web page that automatically loads the audio goodness.

I’m in touch with the developers, and hope to have a full-blown Q&A session with them. On the agenda: what this is, what it means, how it works, how people can get involved, and how to get started with these early builds. I’m going to start out with some of my own thoughts, though, because I’ve found myself thinking about this a lot. I’ve been a slow convert to the gospel of the browser and JavaScript, but I’m beginning to “get” it, I think. (If I’m off-base or missing something, we’ll get to cover that, too.)

HTML5 3D FFT Visualization with CubicVR from Bocoup on Vimeo.

To understand why this is incredibly cool, though, I think it’s first necessary to understand how incredibly stupid, primitive, and backwards a Web browser is. (I just lost a bunch of Web developers. No offense – there’s a reason it’s that way – but follow with me.)

I’m serious. The Web concept was rooted in an age in which bandwidth and computing restrictions constrained online communication to text. But even as the Web was first catching on, computers themselves had rich multimedia capabilities far exceeding what the browser could do. Today, a lot of Web nuts talk about how the browser could replace desktop applications, or become an “operating system.” But the browser is another application running on your hardware, running on your operating system. The question you might well ask is, why is the browser so limited? Why can’t it do the things the rest of your computer can? That it took until now to get a tag for playing audio or video is kind of silly if you think of it that way, right? (You might ask the inverse question of the “desktop” apps: you do know you’re connected to the Internet, right?)

The idea of the audio API would be to change that, and not only play back sound files, but open up real-time synthesis and processing in standard, accessible-everywhere ways. You can, as you see in the (working, real, not-mock-up) examples, do all kinds of powerful magic. You can visualize music as you play sound files, or perform on instruments right from the browser window.

It’s one thing to talk about some distant future. Fortunately, you don’t have to wait. The code is working right now. You can finish reading this post and then grab a nightly build of Firefox, write a few lines of JavaScript code, and build a synth in the browser.
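And "a few lines" is not much of an exaggeration: the core of such a synth is just a loop that fills a buffer with samples. Here's a minimal sketch. The buffer math is plain JavaScript that runs anywhere; the output call uses the mozSetup/mozWriteAudio names from the early Mozilla experiments, which may well change as the API evolves.

```javascript
// Fill a buffer with a stretch of sine wave. Pure JavaScript, nothing
// browser-specific, so the same function could feed analysis or
// visualization just as easily as playback.
function sineBuffer(freq, sampleRate, length) {
  var samples = new Float32Array(length);
  for (var i = 0; i < length; i++) {
    samples[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return samples;
}

// In a nightly Firefox build, hand the samples to the experimental
// audio API (mozSetup/mozWriteAudio, per the early Mozilla drafts):
if (typeof Audio !== "undefined" && Audio.prototype.mozSetup) {
  var out = new Audio();
  out.mozSetup(1, 44100);                           // mono, 44.1 kHz
  out.mozWriteAudio(sineBuffer(440, 44100, 44100)); // one second of A440
}
```

Swap the body of that loop for any other waveform math and you have a different synth; that is the whole point of buffer-level access.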

“Because it’s there” is usually a good enough reason to start hacking. But to musicians, I think there are actual creative benefits, too.

Endless compatibility. The work the Mozilla crowd are doing is already free to download on Mac, Windows, and Linux, stripping platform barriers across desktops, laptops, and netbooks. We’ve heard a lot from certain Mac advocates in particular about how you can only have “first-class” applications if they’re built for a specific OS. That’s fine – depending on the application. But as an artist, at some point I also want some shared tools. If I want to collaborate with someone, they’re what’s first class to me. There’s nothing worse than saying “oh, uh, I guess you have a Mac and I have a PC, so we have to…” It’s creativity-killing. Having browser-based tools on par with the tools outside the browser means we can keep our idiosyncratic tools of choice, but also have a shared set of tools we can access without so much as running an installer, let alone worrying about an OS, processor, or version.

Connectivity and sharing. Being in the browser means instant access to a musical application from anywhere, and instant data for that application. Right now, part of the reason computer musicians have a stigma of staring at computer screens is because the user interfaces we design live on individual machines and are designed to be used only by one person at a time. The connectivity in the browser means it’s easier to build sharing and collaboration directly into a software idea.

Browsers could make your “desktop” apps cooler. One of the myths of browser-based applications I think is the idea that they’ll somehow replace other applications. On the contrary: they could make your existing applications smarter. Unrelated to this particular effort, our friend Andrew Turley built a proof-of-concept application that connects a Web browser as a controller to other apps over OSC. With a little refinement, a free local Web server combined with a browser-based controller app could connect all your traditional music apps to computers in the same room or across the world.

In-browser Synthesizer and Sequencer with Envelope and Filter control from Corban Brook on Vimeo.

The power to make noise – any noise – and a tinkerer’s sunrise. Noise often appeals to hackers (even non-technologist hackers) more than anything else, and that should give you hope. One interpretation of current technology trends runs with the idea that tinkering is in danger, or even on the decline. I think we should be wary of some of those trends; some are simply anti-intellectualism in disguise. I also think tinkering with sound has a bright future. So long as there is raw buffer access somewhere, it’s possible to build something that makes sounds at a level as high as “give me a middle C” or as low as “I want to invent a new form of synthesis.”
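Both ends of that spectrum fit in a few lines of JavaScript. Below, a "give me a middle C" helper (the standard MIDI-note-to-frequency formula) sits next to a crude two-operator FM voice computed sample by sample. None of these names come from any proposed API; they're purely illustrative.

```javascript
// High level: "give me a middle C". Standard equal-temperament math,
// with A440 at MIDI note 69; middle C is note 60.
function noteToFreq(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// Low level: roll your own synthesis. A minimal two-operator FM voice:
// a modulator sine wiggles the phase of a carrier sine.
function fmVoice(carrierFreq, modFreq, modIndex, sampleRate, length) {
  var out = new Float32Array(length);
  for (var i = 0; i < length; i++) {
    var t = i / sampleRate;
    out[i] = Math.sin(2 * Math.PI * carrierFreq * t +
                      modIndex * Math.sin(2 * Math.PI * modFreq * t));
  }
  return out;
}
```

Set modIndex to zero and fmVoice collapses to a plain sine; crank it up and you get the classic bright, bell-like FM spectra.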

This isn’t just for propellerhead types. With readable code, even those new to programming and sound have an opportunity to start toying with their own experiments. And unlike almost any other medium, sound is both immediate and always satisfying. That is, even if you make some sort of ugly splat, you may still have a good time. That quality makes it perfect for learning and experimentation, whether you’re young or old.

From Babel to common code languages. I’ll also go out on a limb and say there’s potential to get more tools speaking the same language. On the visual side, right now, you can directly copy code from Processing.js (where anyone can easily see it) to a Java-based desktop Processing (where you get higher performance, full-screen and multi-monitor display, hardware access, and the like), often without changing a line of code. The same could happen here. People are already porting Csound examples to this freshly-minted audio API.

Nihilogic’s HTML5 Audio-Data Visualizations from Bocoup on Vimeo.

Open standards, open 3D. By making a standard, too, we have a lingua franca both technologically and in how tools can run. If it were only audio, that’d already be useful. But this extends to other efforts, like the work on WebGL. And WebGL is a good indicator, too: by supporting OpenGL ES 2.0 in the browser, both the “native” or “desktop” app and the “browser” app can share code and capabilities. The same could begin to be true for audio.

Anyway, enough of my third-party sense of what this could mean. Here’s where to go learn more:

David Humphrey is a man you can thank for making this happen. Check out his blog, and read in particular:
Experiments with audio, part IX

May 12 in Boston, there’s a “future of Web audio” event introducing these ideas, if you’re in the area. I’ll see if we can’t get events elsewhere. (This would be ideal for another CDM online global hackday – more so than our previous topic.)

The big post to read:

Alistair MacDonald covers the thinking, the potential applications, the history, and what’s happening now:
Web Audio – All Aboard!

And see:

Alistair sums up why this is important:

A web browser that allows for such fine granular control over video graphics using tools like Canvas and WebGL, yet provides no equivalent control over audio data, is a web browser that is very lopsided. In human terms, web browsers have always been very lopsided. They reflect a specialized facet of ‘the human requirement’. This is unfortunate as the web can potentially encompass a far more balanced and expressive set of features, encapsulating our humanity. Fortunately the modern movement towards a more human browser appears to have gained significant velocity… in the right direction.

Or, if the Muppet Animal were writing this, I think that would go more like:


More HTML5 Goodness

On CDMotion, spectacular 3D graphics, even for the lazy, plus Processing.js resources.

And perhaps more generally useful – especially for working with the 1,000,000 iPads Apple has just sold – Chris Randall has a brilliant and detailed post on hacking the SoundCloud player so it works even when Flash isn’t installed.
Something Wicked This Way Comes…
Or, I should say, by “brilliant,” it points out just how screwed up that particular situation is. So, SoundCloud developers, go read that and report back, okay? (I’ll be in Berlin in three weeks. We can all get some coffees and put together a generic solution that works everywhere. How about that?)

  • Brian Mitchell

    While I applaud Mozilla's efforts and have discussed furthering this work with a few of them, it would be quite misleading to call this a standard. No other vendors have picked this up (a shame). It is still early in the game to call out verdicts, but I do predict that this sort of stuff will become more and more commonplace in our modernized hypermedia engines we like to call browsers.

    If anything will allow the development of a standard, it would be the work of a community on getting vendors aware of the demand for these capabilities. Thanks to CDM and others we will hopefully achieve this goal, but it goes a long way to spread the word around. We really do need to modernize hypermedia in the browser to match the capabilities of the new mediums that today's powerful computers allow.

  • Hi Brian – sorry, this is what happens when I post at 1:30 AM and someone's reading. 😉 I had just adjusted that headline.

    You're right: there's no standard yet. But this is also, some of it, mere weeks (or days!) old.

    There's so much to encompass, that here as in OSC I think job one is to get implementations done, and keep iterating those implementations until they start to make sense, and let other people pick up on them… it's a more organic way to make a standard evolve, but some of that hands-on time may be necessary.

    I think there is a chance, though, to share some common ways of doing this, even to have common APIs for accessing other sound environments (Csound? Pd? etc.?)

    On the browser side, this is entirely dependent on other browser projects picking it up. On the other hand, we have the ears of some of those people on this site, so let's get the conversation going, and be really frank about what this is, how it could work, what's good, what's not, what needs to happen – the lot. We need both active hacking and active conversation/thinking, I think.

  • Looking forward to reading more about this!

  • Pingback: Create Digital Motion » 3D + Sound, Now in the Browser, and Processing.js

  • miSmiS

    Nice… can we have JACK (& possibly JACKMIDI and OSC) support at least on *nix platforms for that please? 😉

  • great post Peter. Many thanks.

  • dioxide

    This is clever from a programming point of view. But personally I don't care whether it has been done as Canvas/HTML5, Flash or as a DAW/VST. It just doesn't matter to me. People did some very clever stuff with Flash up to 10 years ago, and although it was groundbreaking and some of it was good, it wasn't better than the other solutions that were available at the time. In my opinion the same applies here. As a user I think the technology behind the device should be invisible, and it already is. So again, it doesn't matter how it is achieved if the outcome is the same; the only thing that matters is how the interface aids controlling or shaping a sound.

    It is clever from a programming perspective. But I'm not a programmer.

  • xkian

    isn't this kind of going backwards? reminds me of the amiga days soundwise.. very clever blah blah blah but the sound/shaping/sequence capabilities are just so obsolete

  • @dioxide: Right, but that's my whole point. You shouldn't have to care. It's not that the browser does something revolutionary, it's that we have two app paradigms – things in the browser, software outside the browser – and what the browser is finally doing is catching up. The step forward from Flash would be if these things were implemented as a standard across browsers; not because there's something wrong with Flash, but because then we would have established that it's as important for a browser to allow a synth as it is for it to display a table or an image.

    Many of us are programmers and users, but even programmers think some of the time as users. From the user perspective, that's the question – not doing something because it's there, but allowing functionality because it makes sense, because you could be creative with that functionality.

    As for the synths sounding the same, well, that's unrelated to programming or anything else; the simple fact of the matter is that additive, subtractive, and FM synthesis cover 90% of what's useful for synthesized sound. 🙂

  • chaircrusher

    You know, yesterday I was browsing through the music magazines at a Borders, and today I woke up to read this. It should be said that the quality of writing and the thinking behind it on CDM is head and shoulders above e.g. Future Music. People sometimes give you static about being sort of "Gee Whiz" (including err, me sometimes) but they miss the point that you're writing magazine-article-length posts with actual reporting behind them, and that they're damn good. Carry on, sir!

  • I used to use OSC, PD etc quite a lot. I do disagree with the comment that this stuff is going backwards, it most definitely is a move forwards. (I thought it was cool to go backwards anyway ;)) Bringing tools to the browser to make sound lowers the barrier for learning and exploration. It's simple, open and free. I WISH I would have had this stuff as a kid. I think if you download it and see how easy it is to do something relatively low-level, you'll have a lot of fun!

    @dioxide, I agree interfaces do make all the difference, that's one of the cool things about this: having simple 3D APIs and the audio data API in the same space with a language as easy as JavaScript is a really powerful environment for creativity. I can invent all kinds of interfaces to use this technology that would be very difficult to make as say.. a VST… and each interface element can share interface-data live from the server, so your music group can work on the same track from different cities… again using the same language, JavaScript.

    Ultimately this is about getting the right people involved, if you have experience working with audio, OSC, PD, Max, synth programming etc, you should get over to the IRC channel (irc:// and help make audio on the web something that is powerful to the digital music community for generations to come.

    Some guy can hack in the morning before his gig, make a new instrument that is an HTML document. He can right click and save it, drag it to his pen drive… turn up to a gig where he uses a totally different computer, with a different web-browser, he can put the pen-drive in, click the HTML file and BOOM.. MUSIC!

    I understand why some people deep in the music tech scene won't "get" the web-win for a while; it took me long enough: I worked in music and TV studios mastering audio and writing music for documentaries etc, and it took me years to understand the web and see where it is headed and how it applies to audio technology.

    This is about molding the future. It's about bringing an unprecedented level of connectivity and accessibility to audio in a way that will add new ways to collaborate and blow creativity open with simple tools that access low-level sound functions in the OS.

    If the audio community does not get involved in this, we risk making a standard that is not up-to-par. That would be a terrible shame. It is important to transfer our knowledge of audio to the next generation, and there is no better way of doing that than using something as commonly available as the web.

    If the next generation's tools are better and freer, young musicians and audio technicians will have a head start: we have an opportunity to progress the state of audio technology as a whole by lowering the entry barrier for everyone.

    It's the future, baby!

  • I should hasten to add, there's no reason you can't connect these ideas to something like Pd, or even a hardware synth.

    Here's Gijs controlling YouTube video clips with MIDI, using hardware.

  • I am a web designer, and I agree that web browsers are primitive, but steps taken to standardize things make a developer's work easier and, thus, make things better for the user.

    Being able to work with a high level of sound design that rivals professional programs would be pretty powerful. Thanks for the great post.


  • PS. That last comment was written on my cell phone in a hurry, hence it reads like a 4 year old wrote it. 😛

  • @Ryan: Ha! That's the first time a commenter has had an excuse. 😉 No, I really appreciate the comments. We have tons of readers on CDM who do web design as their day job. Not that you necessarily want to do any MORE of your day job when you get home, but there might be an opportunity to take those skills and have fun with them.

  • Pingback: Sintetizador no Browser? | BIT PRODUCTION!

  • daniel

    seizure anyone?

  • ash


    i would rather see Ableton move from application to platform provider.

    ableton should package up a plugin with a minimal runtime (no gui, limited feature set) and slap Max for Live on top for scripting capabilities.

    ableton already has the ability for artists to create little audio tools, now give us the ability to monetize them!

  • Finally!

    I've been trying to figure out how to do this ever since I heard about the audio tag.

  • Pingback: Real Sound Synthesis, Now in the Browser; Possible New Standard? | VJ Heaven

  • Axel

    I quit my web developing job about 7 years ago. Since then I haven't done much in that area, but I'd love to build interfaces for my music applications using html, css and javascript. So, the OSC javascript bridge really excites me.

    On the other hand, working primarily with live audio input, I don't see much use in the sound making aspect without javascript getting access to the file system and audio hardware which it currently hasn't for the simple reason that it would make it easy for developers of malicious code to do serious harm.

    Also, from knowing how these things have worked in the past, I wonder when Apple and Microsoft put out their own respective "standards". Let's see, maybe they have learned from the past.

  • Pingback: View Source as Musical Innovation

  • I think it might be appropriate to mention SAOL (aka MPEG-4 Structured Audio aka CSound Tweaked) at this juncture. Yeah, that one went down like hotcakes.

  • Cory Flanigan

    At the very least, this is an interesting way to potentially wrap a consistent UI around a set of audio functionality. At most, it's something WAY more promising.

    I have used Ableton, as well as several combinations of open source audio software, and one of the biggest gaps I find with the open source offerings is lack of any kind of consistent UI (as well as some usability concerns too numerous to mention.)

    Imagine a scenario where your "DAW" is a set of tabbed browser windows, and you have a custom skin around some of your favorite audio tools, using jQuery or some similar thing to tie together the disparate components in a way that is most suitable to you. Does that not sound appealing?

    Moreover, with replicable cloud/ground computing (a-la CouchDB/CouchApp), the possibilities for scenarios such as the portable HTML synth that F1LT3R suggested become all the more likely. So, now we can share these configurations and skins, and you can send me your studio setup, and I can give you my live performance setup… To me, that is an exciting proposition!

    I am loath to dignify the argument that people will use this ability maliciously to get at system level hardware. For me, the millions of open source projects used daily with great success in production environments partly invalidate this concern. The millions of malicious programs that consume countless cycles and bandwidth already, using system level resources, are the coup de grâce to that particular objection.

    I am fully in awe and fully in support of what these brilliant innovators are doing, and I am intensely excited to see where it leads!

  • There's a risk of some serious red-herring-ness in this thread. You are NOT going to get a realtime performance synth out of anything embedded in an off-the-shelf web browser, so as neat as sharing HTML5 code and skins might be, its usefulness as a replacement for your favorite existing software instruments is going to be limited.

    This has nothing to do with processor speed, and not much to do with the number of cores either, so don't imagine that either Moore's Law or glomming together more cores is going to solve that.

    Furthermore, any quick look at the history of all existing audio "languages", going all the way back to Music III and all the way forward to Processing (bumping into CSound and SuperCollider on the journey) shows one inexorable common trend: continual expansion of the set of primitives that are considered desirable.

    Y'all might be super excited at getting access to raw audio data and gosh, gee, wow an FFT! right there in yonder browser, but by next week you'll want PVOC, then granular synthesis, then parallel deployment to cores, then object oriented synth design, and so on and so forth.

    So why not step back for a moment and rather than hash up another proto-language that turns out to take 3-8 years catching up to what those "knowledgeable in the field" have been doing for the last decade, consider how to take the lessons learned from (e.g. SuperCollider and Csound) before going any further?

  • Well, Paul, I did say, in so many words, at the outset "One of the myths of browser-based applications I think is the idea that they’ll somehow replace other applications." So I think you should know where I stand personally.

    I agree with absolutely everything you're saying – to a point. But there is a certain advantage to having people not from the field examine the problem. I've been a skeptic of the browser thing, and I have to say, I think my experience in a different mode of thinking caused me to sell the idea short before I really understood what the potential was.

    Processing is a perfect example. I didn't understand Processing.js when it came out. Sure, it runs in the browser – much slower than Java (even now, with the most bleeding-edge JS engine), with no support for Java libraries, with even some "core" Processing functionalities missing. As someone interested in live visuals and installations, the idea of Processing without the ability to do proper full-screen or multiple monitor support just seemed like a deal-killer. And not only that, but because of its dependency on new JS engines and browsers, it was actually *less* compatible than the Java applet support already in Processing.

    But there were two things I missed.

    One, the ability to have a common syntax meant that this would never become an either/or choice. You're absolutely right about the next step here, I think, which would be to leverage existing tools and languages rather than invent another wheel. To take Processing.js as an example, you can copy and paste code between the Processing IDE (in Java) and a browser-based IDE and have them both work.

    Second, I missed how the browser, for all of the things it took out, could add other functionality that the standalone app lacked. It becomes a different application. Again, with Processing as the example, the Java IDE and Eclipse and so on are really useful for individual development, but they're weaker for collaboration. So the ability to do some of the same basic things in prototyping in both contexts matters. The applet, meanwhile, was compatible, but it couldn't integrate with a Web layout. It's a kludge, in other words. It's an app pretending to be part of a webpage when it isn't. A webpage should be a webpage, and that means if it does implement richer features, it desperately needs to be part of the Web framework, even if that requires sacrifices (or, perhaps, *because* that requires sacrifices).

    I think this second point has numerous uses. It's not going to make a good pitch to the browser developers, but imagine the browser as a tool for interactive synthesis textbooks, learning, or collaborative sound projects. You could hash out the basics in the browser and do the heavy lifting in a different tool.

    I take the "feature creep" idea a little differently. I think we've failed in the audio world to make some tough choices and prioritize what is more important from what is less. In Processing, that meant reducing the core language to a small number of commands. You can still extend it with libraries, but it forces the developer of the tool and the user to focus on essential elements and building blocks. Or to take another example, the LEGO building system is at its best when it uses modular, interchangeable blocks that are application-agnostic, and at its worst when it starts adding in lots of specific, bizarre widgets for jobs that didn't fit. It's at its best when treated as a prototyping tool that makes compromises, and at its worst when it tries to be all things to all people.

    Languages like Csound and SuperCollider (and Max/Pd in the patching language) are useful, mature tools, but I think they have too many capabilities and too many different conceptual levels contained in them to function as building blocks in that way, or to indicate to a user what is most important, what the "core" functionality is.

    You'll see the Mozilla team introduce themselves as incorporating some people new to these areas (though I think some of them are "knowledgeable in the field," too). Whatever their background, there's a different perspective emerging here, and I think it's one that could be valuable. Obviously, the future hype at the moment is a bit extreme, amidst the "end of print," "end of keyboards," "end of computers," "end of file systems" and everything moving to the "cloud." There is a sense that people are in some hurry to just declare everything obsolete.

    But, at the same time, that's exactly the perspective I want from the people building the browser. I want them to live, eat, and breathe the browser, and make the best argument for a browser possible, and build the best possible browser. Then I, as a user (or developer) can sort out what I want to use the tool for. Ditto the JavaScript crowd – the fact that they're so dedicated has been part of what's helped them make JavaScript better. I'm not sure if I'm ever going to run Chrome OS, but it might mean my browser is more capable.

    We don't need another proto-language, no. But I think we could use some new ideas.

  • Cory Flanigan

    The limitations of the browser versus a native application are not unknown to me.

    Have you considered the implications of frameworks like node.js that provide concurrency and asynchronous evented server side functionality against the browser limitations, e.g. in the context of distributing workload across multiple cores?

    Could it be possible for such frameworks to wrap existing libraries that have 'been there' and learned the hard lessons over all of these years?

    Is this perhaps not some 'brave new world', rather an expansion upon the amazing work that has already been done by so many talented people?

    Paul, I have a lot of respect for you, and I fully agree that anyone who hails something like this as a 'new messiah of digital audio' is guilty of the most loathsome form of sensationalism. Especially because in all likelihood, they will be on to the next thing that the cool kids are into. As well, your suggestion to learn from the lessons of those who have gone before resonates soundly with me.

    My personal hope for such things as this, is to gain enough understanding to incorporate the best of what each of these approaches offers with regard to the things I am trying to accomplish, and do something unique with it. After all, isn't that why we all got started making music in the first place?

  • Peter, I think you're missing what people see as exciting about this kind of in-browser implementation, even though you've cited the magic phrase yourself before: "write once, run anywhere". This isn't about what is a webpage and what isn't – it's about considering the browser as a development platform that is entirely self-contained and (theoretically) completely portable (i.e. if your design runs on an XXX-conformant browser, it runs anywhere that there is an XXX-conformant browser).

    That's all well and good, but let's face up to what the browser actually is when used like this: it's a virtual machine. And just as almost nobody does "real audio" on virtual machines at the moment, they're not going to find that the VM offered by an arbitrarily extended browser is a suitable platform.

    Does this mean that you can't create some cool algorithmic processing engines (a la Bloom, which is wonderful, by the way) and get them running on an appropriately conformant browser? Of course not – it's going to be cool.
    And this of course is the promise of stuff like Processing.

    But this is not the same as an environment like Max or Pd or SuperCollider or CSound (or even Reaktor), and I firmly believe that people who are imagining that you're going to get awesome new filters running in your browser that will substitute for your Waves plugins are in for a bit of a disappointment. The same is true for those imagining that you'll be able to plug a MIDI keyboard or a Monome into your browser and rock out with your friend's awesome JavaScript polyphonic synth. Generate sound? Absolutely. Process audio? A little. Do what most people are using Live/Reason/Max/Sonar/Buzz/FruityLoops etc. etc. for right now? Call me a pessimist, but I don't think so.

    Do we need more algorithmic composition environments? Well, sure, I guess basing it around technologies (HTML, JS) that a newer generation of hackers are passingly familiar with could be interesting, and perhaps a productive break from CommonMusic, CSound and the usual academic standards. But I'm not sure that the enthusiasm for this idea comes from that direction.

  • Right, but Paul, this is coming from basically a handful of hackers. They've said *everywhere* in *everything* they've written, they're looking for musicians to set the direction this should go. And the people running with this right now are coming from the Processing.js community, so that's their perspective. No one is talking about replacing Ableton. If this is just a waste of time, then it should probably be safely ignored. But if there is value in it, then maybe it's worth making exactly the points you're making – and figuring out how to make the universe of Csound and the universe of browser synthesis play nice together. They each do some things the other can't. They're each different beasts. But they could learn something from each other. (And, as an aside, I have no doubt that someone will use those native tools from Google to get Csound working in Chrome – all of Csound.)

  • Peter is right. The browser will catch up. Google understands this, and we will see a lot of unexpected applications pop up inside of browsers in the next year or two for sure. The browser is quite a restrictive frame for software, but so is an iPhone. What is interesting is not what it can't do, but what it can do — be ubiquitous, free, interoperate readily with social media, etc. It is a boon for net artists/composers.

  • bb

    After reading the article and comments I still don't get it. But I'm old.
    I'm seeing solutions for problems that don't exist, and shiny distractions from actually making music.

  • @bb: if you feel that way about these, I don't think you'd like the similar experiments in computer music that have been happening, literally, since the 1970s, when the first networked musical ensembles and applications began appearing. And that's fine.

    But it's just a new set of tools for doing that. I think for the first time, the main networking application and platform – the browser – is up to the task. That's not earth-shaking news, but it means for people interested in those applications, they have a better tool. That's all. But that's good.

  • Also, come on. Even though I (cough) have been known to joke about this myself, I don't think you can just dismiss the browser because the "kids" are into it. You can meet the developers. It's not an age thing. The browser is now 18 years old, and as I said, these sorts of applications for music date back at least to the 1970s. Ideas around networked music applications aren't as old, at least in implemented form, as computer music, but they predate even MIDI by over half a decade.

  • If you want a cluttered, confused argument divorced from the realities of how the technology works, try this on for size:

    I really like Tim O'Reilly a lot, so I don't mean to rip on the guy, but yes, that argument seems made up. It's a conflation of … well, almost *every* conceivable issue in technology into one mess that becomes meaningless, and it's filled with made-up facts like "developers don't write device drivers any more" (which isn't true – I WISH class-compliant devices were the norm, but even that still requires writing class support in the firmware).

    My name is not, in fact, Tim O'Reilly, so I take no credit or ownership over that article. I think Tim, with only good intentions, has created the essay version of this:

    So, that's not what I'm saying. But break off the browser as one delivery platform and set of development tools among others, and I think you'll find something that's useful. I'm effectively arguing for the advantages of using that for certain kinds of applications. And as I say above, part of the point would be to use the browser as a layer that talks to all these useful desktop applications, which are useful for music for all the reasons Paul and others (and I) are citing.

  • Pingback: WebGL around the net, 6 May 2010 | Learning WebGL

  • Pingback: More online music applications coming | Opensource Geek

  • Pingback: Create Digital Music » Music Notation with HTML5 Canvas in the Browser; Standard Formats for Scores

  • Pingback: Create Digital Music » More Browser Notation: Type Notes Quickly, Store Scores Online

  • Pingback: Create Digital Music » Browser Madness: 3D Music Mountainscapes, Web-Based Pd Patching

  • Pingback: Browser Madness: 3D Music Mountainscapes, Web-Based Pd Patching | VJ Heaven