3D, spatialized sound is some part of the future of listening – both privately and in public performances. But the question is, how?

Right now, there are various competing formats, most of them proprietary in some way. There are cinema formats (hello, Dolby), meant mainly for theaters. There are research installations, such as those in Germany (TU Berlin, Fraunhofer) and Switzerland (ZHdK), to name a few. And then there are specific environments like the 4DSOUND installation I performed on, and on which CDM hosted an intensive weekend hacklab – beautiful, but only in one place in the world, and served up with a proprietary secret sauce. (4DSOUND has, to my knowledge, two systems, but one is privately owned and not presenting work, and spatial positioning information is stored in a format that for now is read only by 4DSOUND’s system of Max patches.)

Now, we see a different approach: crowd funding to create a space, and opening up tools in Ableton Live, Max for Live, and Lemur. The result looks quite similar to 4DSOUND’s approach in the speaker configuration and tooling, but with a different approach to how people access those tools and how the project is funded.

Artist Christopher Willits has teamed up with two sound engineers / DSP scientists and someone researching the impact on the body to produce ENVELOP – basically, a venue/club for performances and research. It, too, will be in just one place, but they’re promising to open the tools used to make it, as well as use a standard format for positioning data (Ambisonics). We’ll see whether that’s sufficient to make this delivery more widely used.

The speaker diffusion system is relatively straightforward for this kind of advanced spatial sound. You get a sphere of speakers to produce the immersive effect – 28 in total, plus 4 positioned subwoofers. (A common misconception is that bass sound isn’t spatialized; in fact, I’ve heard researchers demonstrate that listeners can localize low frequencies as well as high ones.) Like the 4DSOUND project (and, incidentally, unlike some competing systems), the speaker install is built into a set of columns.

And while the crowd-funding project is largely about finishing the physical venue, the goal is wider: they want not only to build the system, but also, they say, to host workshops, hackathons, and courses in immersive audio.

You can watch the intro video:

The key technical difference between ENVELOP and the 4DSOUND system is that ENVELOP is built around Ambisonics. The idea with this approach, in theory, at least, is that sound designers and composers choose coordinates once and then can adapt a work to different speaker installations. An article on Ambisonics is probably a worthy topic for CDM (some time after I’ve recovered from Musikmesse, please), but here’s what the ENVELOP folks have to say:

With Ambisonics, artists determine a virtual location in space where they want to place a sound source, and the source is then rendered within a spherical array of speakers. Ambisonics is a coordinate based mapping system; rather than positioning sounds to different locations around the room based on speaker locations (as with conventional surround sound techniques), sounds are digitally mapped to different locations using x,y,z coordinates. All the speakers then work simultaneously to position and move sound around the listener from any direction – above, below, and even between speakers.
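
The math behind that description is well documented in the Ambisonics literature. As a rough illustration – not ENVELOP’s actual code – here’s how a mono source at Cartesian coordinates (x, y, z) maps onto the four channels of first-order B-format using the classic FuMa gains; higher orders simply add more spherical-harmonic channels:

```python
import math

def encode_first_order(sample, x, y, z):
    """Encode a mono sample at position (x, y, z) into first-order
    B-format (W, X, Y, Z) channel gains, FuMa convention."""
    azimuth = math.atan2(y, x)                   # angle in the horizontal plane
    elevation = math.atan2(z, math.hypot(x, y))  # angle above the horizon
    w = sample / math.sqrt(2)                    # omnidirectional component
    xg = sample * math.cos(azimuth) * math.cos(elevation)
    yg = sample * math.sin(azimuth) * math.cos(elevation)
    zg = sample * math.sin(elevation)
    return w, xg, yg, zg

# A source straight ahead on the horizon puts all directional energy into X:
print(encode_first_order(1.0, 1.0, 0.0, 0.0))  # → (0.707..., 1.0, 0.0, 0.0)
```

A separate decoder then turns those channels into feeds for whatever speaker layout is present – which is exactly why the same mix can, in principle, travel between installations.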

One of our hackers at the 4DSOUND day did try “porting” a multichannel ambisonic recording to 4DSOUND with some success, I might add. But 4DSOUND’s own spatialization system uses its own coordinate system, which can be expressed in Open Sound Control messages.
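
OSC makes that kind of bridge comparatively easy, because a position update is just an address pattern plus some numbers. Here’s a minimal stdlib-only sketch of building such a packet by hand – the “/source/1/xyz” address is purely illustrative, since 4DSOUND’s actual namespace isn’t published:

```python
import struct

def osc_message(address, *floats):
    """Build a raw OSC packet carrying float32 arguments.
    OSC strings are null-terminated and padded to 4-byte boundaries."""
    def pad(raw):
        return raw + b"\x00" * (4 - len(raw) % 4)
    packet = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for value in floats:
        packet += struct.pack(">f", value)  # arguments are big-endian float32
    return packet

# A position update for "source 1" at x=1.5, y=2.0, z=1.2, in whatever units
# and coordinate frame the receiving system expects; send it over UDP:
msg = osc_message("/source/1/xyz", 1.5, 2.0, 1.2)
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))
```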

The ENVELOP project is “open source” – but it’s based on proprietary tools. That includes some powerful-looking panners built in Max for Live which I would have loved to have whilst working on 4DSOUND. But it also means that the system isn’t really “open source” – I’d be interested to know how you’d interact, say, with genuinely open tools like Pure Data and SuperCollider. (For instance, presumably you might be able to just plug in a machine running Pd and HOALibrary, a free and excellent tool for ambisonics?) That’s not just a philosophical question; the workflow is different if you build tools that interface directly with a spatial system.

It seems open to other possibilities, at least – with CCRMA and Stanford nearby, as well as the headquarters of Cycling ’74 (no word from Dolby, who are also in the area), the brainpower is certainly in the neighborhood.

Of course, the scene around spatial audio is hardly centered exclusively on the Bay Area. So I’d be really interested to put together a virtual panel discussion with some competing players here – 4DSOUND being one obvious choice, alongside Fraunhofer Institute and some of the German research institutions, and… well, the list goes on. I imagine some of those folks are raising their hands and shouting objections, as there are strong opinions here about what works and what doesn’t.

As noted in comments, there are other open source projects – the ZHdK tools for ambisonics are completely open, and don’t require any proprietary tools to run. You will need a speaker system, which remains the main stumbling block.

If you’re interested in a discussion of this scene, let us know. Believe me, I’m not a partisan of any one system – I’m keen to see different ideas play out.

ENVELOP – 3D Sound [Kickstarter]

For background, here’s a look at some of the “hacking” we did of spatial audio in Amsterdam at ADE in the fall. Part of our idea was really that hands-on experimentation with artists could lead to new ideas – and I was overwhelmed with the results.

4DSOUND Spatial Sound Hack Lab at ADE 2014 from FIBER on Vimeo.

And for more on spatial audio research, this site is invaluable – including the various conferences now held round the world on the topic:

http://spatialaudio.net/
http://spatialaudio.net/conferences/

  • Samuel

    The Institute for Computer Music and Sound Technology (https://www.zhdk.ch/?id=65399) in Zürich is also worth mentioning. They have been providing open source tools and Max externals for ambisonics for years. They also have a lab room equipped with more than 20 speakers. It’s not for public events, but if you’d like to use it for your project, get in contact with them. They are very friendly and curious about new ideas.

    One tool is a basic audio sequencer for ambisonics (https://www.klangfreund.com/choreographer/ ). I have created its engine.

    • Ah, nice use of a timeline! :) Looks great; I’ll have a look.

      And yes, I’ve actually fiddled with those ZHDK tools.

      I suppose what I should make clear is that the open source side of this project is not new.

      At the same time, there’s clearly some threshold that has yet to be crossed as far as people widely creating and dispersing this kind of material. Now, I’m not certain yet what argument this project makes for changing that over other projects; this is something to follow up on…

      Of course, just talking about some of these other tools might also have an impact; it’s a topic I’ve been meaning to cover.

      • Thanks for your interest!

        The Choreographer is in beta state. OS X only. And not actively developed anymore. Please keep that in mind if you give it a try.

        Some hints to get started:
        – A movement (trajectory) is its own kind of item, which can be applied to one or more audio regions.
        – Only mono files are supported. Drag and drop them from the Finder to the timeline.
        – Can’t draw your trajectory? Make sure the trajectory (and not the audio region) is selected. Command + click creates new points in the radar editor (if it is a breakpoint trajectory).
        – You can group audio regions to move them together on the timeline.
        – You can export your creation as a multichannel wave file.
        – Take a look at the ‘Engine -> Speaker Setup’ dialog for routing. You can send pink noise to figure out which speaker is connected to which port of your audio interface… (easy to mix up with a lot of speakers)

  • Daniel Courville

    Peter, I’m raising my hand.

    • Sold — to the man in the headphones!

  • Marco Donnarumma

    Peter, I agree that there’s a need for an open standard for ambisonics (and for the rest of the configurations available), but in that sense this seems the least useful project we have been presented with in decades.

    There have been, and there presently are, several free and open source applications for ambisonics. Free and open source, FOR REAL. The most recent being http://www.mshparisnord.fr/hoalibrary/en/downloads/puredata/

    Now, these people filling their mouths with “open source” values while working with closed proprietary software completely miss the point, at best. At worst, they’re exploiting a whole culture for, ehm… making an Ableton library and building their own sound system.

    mmm, awesome. I just don’t see where the benefit for people is. Maybe in buying Ableton or Max for Live to use their plugins? Or perhaps in paying the ticket to go to their events?

    • Well, look – making a system work with tools people already use is a good thing. So if you’re an Ableton Live user, having tools that integrate with the environment is great. Ditto, if you’re a Max for Live developer, then nothing against using that. And you can still put a Max patch under something like a permissive Creative Commons license – I would hope you would, if you want to share it.

      As I said, though, and here I’m not sure who you’re arguing with (as it’s not me) — I’d like to see open source tooling for things labeled “open source,” too.

      For instance, this should presumably work with HOALibrary, I’d hope? I’ll look into that.

      And yes, I don’t think this should be viewed as the only solution to this problem. But I’m not sure I can repeat that any more than I already did. ;)

    • RTL

      Hi Marco, I’m one of the creators of ENVELOP. Thanks for your post, I appreciate the feedback.

      I would say there are two main problems we are trying to solve.

      1. There are few (if any) permanent public venues for the creation and performance of multichannel sound. There are a handful of universities that have dedicated spaces, but for the most part use of these is limited to students and faculty. The physical infrastructure at these centers is usually not there to support outside artists having public events, nor are the economic drivers (i.e. running a bar) required to sustainably support these events.

      2. The vast majority of electronic artists are not programmers and lack the tools to create multichannel works. PD, Supercollider, etc. are great and users of these tools can of course plug in and perform on the system. But the reality is the majority of artists use tools like Live, and enabling them to compose without having to learn new software will enable a lot more of, and more variety of, multichannel works.

      I take issue with the notion that we are “exploiting” a culture. We very much see ourselves as part of the community of developers and scientists that have spent decades working on Ambisonics algorithms and tools. We are working closely with two of the foremost Ambisonics researchers, Eric Benjamin and Aaron Heller. As we leverage the awesome tools (many written in Max!) that others have created, so we too will contribute open-source tools back into the ecosystem. And hopefully, give some of these artists and creators a platform to share their work with the public (and make a few bucks doing it).

      Roddy

      • Hi everyone, I’m also one of the creators of ENVELOP. I thought I’d chime in here with some thoughts / perspective / clarification.

        The ENVELOP system is designed to be flexible in how it interfaces with various production environments and workflows. Although we are focusing on Ambisonics, in particular standardizing around 3rd-order HOA (16 channels), it is by no means architecturally limited to Ambisonics. If a composer wanted to use VBAP, DBAP, or even discrete speaker feed-based panning, the system supports these various methods. We’ve spent quite a lot of time looking at performer use cases, so we could anticipate the ways composers might want to take advantage of the spatialization that the system provides. A primary use case is simply a DJ or producer with stereo content. For this case, we plan to provide easy-to-use touch screen-based effects palettes, so a performer with little knowledge of spatial audio can do interesting things with the system. Another use case is a performer with a bunch of outboard gear that would feed one of our audio interfaces. Via OSC or MIDI and a control surface, this performer could also set up a “3D” mix. For these use cases, we’ll be able to record the performer’s “set” in full 3rd-order HOA and binaural.

        Similarly, as far as the production environment a performer chooses, it could be anything – PD, Supercollider, Max/MSP, Ardour, Reaper, Bidule, SoundScape Renderer. The composer can choose whatever spatialization tools / plugins he or she desires (ICST Max externals, HOALibrary, Spat, Blue Ripple, Wiggins, Courville, Wakefield, Digenis, etc…). The only requirement is that he/she follows the standard 3rd-order Furse-Malham Ambisonics channel ordering and normalization (at least initially – our renderer may evolve to support other formats), and has a computer with USB 2.0, FireWire, or Thunderbolt. For us, since we are heavily focused on EDM, we saw a need to enable composers to do 3D spatialization using Ableton Live, which is ubiquitous among the EDM / electronica community. What we plan to open source for Live users are all of the MaxForLive and Max/MSP tools we’ve been developing, along with Live templates to get producers started. (As you can see from the screenshots, we’re partial to Max/MSP and ICST Ambisonics Max externals :)

        What we hope is that as the project evolves, we’ll have an international community of collaborators (artists, developers, researchers) working with us to build new and interesting tools and effects. We fully realize it’s a grand vision that will require a lot of effort to realize. Our greatest hope is that this system will eventually generate a significant body of content (portable across other systems) that can make a dent in the longstanding “chicken-and-egg” content problem that has plagued Ambisonics for decades. Binaural transcode will be a key element here.

        To us this project is not about making money and selling tickets to our venue – it’s a labor of love, driven by a team of individuals who are extremely passionate about spatial sound and want to see more of it out there in the world.

        SO – if you’re interested in getting involved, we’d love to hear from you!

  • Hector

    SuperCollider also has some very powerful tools for Ambisonics already built in, including the AmbisonicsToolkit Quark: http://www.ambisonictoolkit.net/…/Intro-to-the-ATK.html. I definitely agree that there is a need for performance venues for 3D music. However, from experience I can say that a space of the size they show in the picture probably wouldn’t work very convincingly for a crowd of more than about 10 people. The sweet spot issue can be a big problem with ambisonics (at least with first-order ambisonics). Another interesting ambisonics option is the plugin suite made by Blue Ripple Sound. It’s not open source, but the core suite is free as in beer, and it does third-order ambisonics, with a pretty nice graphic interface: http://www.blueripplesound.com/products/toa-core-vst. Also, one of the big advantages of ambisonics is precisely that you DON’T necessarily have to have access to a huge array of speakers while composing, as you can use a binaural decoder for monitoring when away from the performance space, which will work accurately enough for the initial stage.

    • roddy

      ENVELOP supports up to 3rd order, and since most sources will be virtually panned to 3rd order (not recorded and played back), the sweet spot issue should be mitigated. And Aaron Heller’s ambisonics to binaural transcoder lives right in the ENVELOP Max4Live toolkit by default, making home composition a breeze (as you correctly point out).

    • vanceg

      And the problem of the ‘sweet spot’ exists just as much with vector-, distance-, or angle-based panning. If one really wants to put an entire audience in the sweet spot of a surround rig, they are going to end up “wasting” a lot of floor space no matter what… Gotta keep people away from the speakers.

  • poopoo

    Why is the built-in multichannel and surround support in Ableton Live so very, very bad? Robert Henke does surround sound gigs. Doesn’t it annoy him too?

    Anyway, aside from Max4Live, Live seems an odd choice for surround development. The stuff in Reaper is much, much better.

    • vanceg

      Granted, Live doesn’t support surround sound natively. But, it is a good platform for realtime manipulation of control signals. And, Christopher, Roddy and that team seem to be making it work quite well.

  • Ambi

    Ambisonics is not the best for multichannel audio because even higher orders find it hard to get a sound to come close to the listener, i.e. proximity (people always think ambisonics does that – but it doesn’t). DBAP is a much better option. The following sound installation uses DBAP:

    https://vimeo.com/88369367

  • wndfrm

    we need more projects like these, all quibbling aside. (and please put it aside!) .. here in portland oregon we have an event series called ‘SIX’, it’s a much simpler approach, but regardless in a similar spirit, to provide a space for artists to experiment with multi-channel audio, in a very casual, DIY aesthetic. even this very basic form proves to be quite inspiring!

    i find ENVELOP exciting, as someone who works with multi-channel audio at times, but is NOT a programmer. providing a higher-order system, with very hi-fidelity enclosures, integration with visual components, and an open approach will really light a few fires IMHO.

    i hope the funding goes through, and i hope ENVELOP inspires a lot of artists, and frankly, audiences, to examine the way they listen and perceive sound. of course these pathways are available through other, established avenues, and through self-motivated pursuits, but sometimes it’s the pathway from ‘check out this dope lineup’ to ‘hey, how does this work anyways?’ that has the most profound and instigative effect.

    • Travis Basso

      I’m Inspired already!

  • foljs

    “””3D, spatialized sound is some part of the future of listening”””

    Or, as is more probably the case, it’s not.

    Except for high end blockbuster movies.

    Some reasons for my opinion:

    First, it’s not really something new. There have been “more than stereo” sound offerings for decades, including the 5.1 setups in homes. Heck, we can also count 4-track tapes in this too, and those were available in the seventies. Plus those binaural recordings.

    Second, beyond stereo it’s mostly diminishing returns. Heck, most people don’t even care about stereo; they prefer to use headphones (which do not provide true stereo – speakers must be used for that).

    Third, this is too elaborate in the mixing and equipment used (starting from the monitor setup for the studio), and the music industry is mostly in a downward spiral in average consumer spending – not the ideal position to invest in more evolved schemes. For amateur artists who are not 100% into sonics? Forget it.

    And the whole thing falls apart as soon as you move to one side of the elaborate installation, making it bad for places where the listener is not entirely confined (unlike movie houses): bars, clubs, etc.

  • Overloop

    Chris Willits on more or less drugs this year?

  • PaulDavisTheFirst

    It’s an oldie but a goodie:

    http://www.soundonsound.com/sos/Oct01/articles/surroundsound3.asp

    The rest of that series (this was just part 3 of 9 parts) on various “surround” sound technologies is equally awesome.

  • Dub Gabriel

    I’m not sure what this is doing that Recombinant Media Labs wasn’t doing 10 years ago and Audium has been doing for the last 50. It isn’t the world’s first surround sound venue (also, just go to most modern movie theaters). What it sounds like to me is the first venue built for Max for Live. I’ll save money on the Kickstarter and dust off my quad system.
