As an addendum to the last story, Ivica Ico Bukvic sends along an example of the [myu] Max/MSP + Unity game engine combination in action. Here’s the surprise: Unity isn’t generating visuals. Instead, Unity simulates ripples created by movement in the space, and builds physical models that are sonified and spatialized by Max/MSP.
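For the curious, the round trip works roughly like this: tracked visitor positions feed the physics simulation in Unity, and the simulated ripple state comes back out as control data for synthesis in Max. Below is a minimal sketch of that exchange in Python using plain UDP text messages; the actual [myu] toolkit supplies its own Unity- and Max-side objects and wire format, so the ports, message names, and fields here are purely illustrative stand-ins.

```python
import socket

# Hypothetical ports; [myu]'s real transport and message format differ.
UNITY_ADDR = ("127.0.0.1", 32000)   # simulation process
MAX_ADDR   = ("127.0.0.1", 32001)   # sonification process

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_visitor_position(visitor_id, x, y):
    """Forward one tracked visitor position to the simulation,
    as a camera-tracking patch might on every analysis frame."""
    msg = f"visitor {visitor_id} {x:.3f} {y:.3f}"
    sock.sendto(msg.encode("ascii"), UNITY_ADDR)

def send_ripple_state(ripple_id, x, y, amplitude):
    """Return one simulated ripple's state to the synthesis engine,
    which could map it to timbre, pitch, and panning."""
    msg = f"ripple {ripple_id} {x:.3f} {y:.3f} {amplitude:.4f}"
    sock.sendto(msg.encode("ascii"), MAX_ADDR)

# Example: one tracked visitor, and one ripple they spawned.
send_visitor_position(0, 12.5, 18.0)
send_ripple_state(0, 12.5, 18.0, 0.85)
```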

Speaking of work involving art museums and the combination of Max and Unity, VJ Anomolee notes in the comments his own work with the pairing. Lightbent Synth is an in-progress piece using alternative controllers and sensors that produces sound with a novel visual representation (the sound is very quiet in this preview; hopefully there will be more once the piece progresses):


Lightbent Synth from VJ Anomolee on Vimeo.

Ivica explains the work at the top of the post:

This past fall, [myu] saw its first real-world implementation in an exhibit that was part of the grand opening of the Taubman Museum of Art in Roanoke, VA (http://www.taubmanmuseum.org/). The exhibit used [myu] as part of an interactive aural installation titled "elemental." An online tech demo video of the installation, including a written synopsis, is also available on YouTube at http://www.youtube.com/watch?v=PA-9BOgc1gk. Below is a brief synopsis of the installation:

"elemental" interactive communal soundscape premiered in November 2008 as part of the Revo:oveR collection commissioned for the grand opening of the Taubman Museum of Art in Roanoke, VA. The Youtube video focuses primarily on the technical aspects of the installation. Using Max/MSP/Jitter, a homebrew IR webcam with fish eye lens and a LED-based IR spotlights, entire 24×36-foot exhibit space is converted into an aural sandbox giving visitors an opportunity to generate and shape the
ensuing soundscape. Positional data of up to 20 visitors is forwarded to Unity3d using [myu] Max-Unity interoperability toolkit developed at DISIS (http://disis.music.vt.edu). Unity is used for physical simulation of ensuing ripples and the resulting data is sent back to Max for spatialization across a 12-channel (4×3) ceiling-mounted speaker array. Driven by communal interaction, virtual ripples refract from each other spawning an algorithmically generated aural fireworks. The exhibit ran non-stop for approximately 5 months until March 2009.
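That last hop, sending ripple data back to Max for spatialization, is the interesting bit: each virtual ripple has a position on the floor plan, and its energy has to be distributed across the 4×3 overhead speaker grid. Here's a rough sketch of one common approach, distance-based amplitude panning over assumed speaker coordinates; the installation's actual Max mapping isn't documented in the video, so the grid layout and rolloff curve below are illustrative guesses.

```python
import math

# 4 x 3 ceiling grid over a 24 x 36 ft space (coordinates assumed:
# speakers centered on an even grid; the real layout may differ).
ROOM_W, ROOM_H = 24.0, 36.0
COLS, ROWS = 4, 3
SPEAKERS = [((c + 0.5) * ROOM_W / COLS, (r + 0.5) * ROOM_H / ROWS)
            for r in range(ROWS) for c in range(COLS)]

def speaker_gains(x, y, amplitude, rolloff=6.0):
    """Distance-based amplitude panning: each speaker's gain falls off
    with its distance from the ripple, then gains are normalized so
    total power stays constant as the ripple moves."""
    raw = [amplitude / (1.0 + (math.dist((x, y), s) / rolloff) ** 2)
           for s in SPEAKERS]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm * amplitude for g in raw] if norm else raw

# A ripple near one corner mostly excites the nearest speakers.
gains = speaker_gains(3.0, 4.0, amplitude=0.8)
for (sx, sy), g in zip(SPEAKERS, gains):
    print(f"speaker at ({sx:4.1f}, {sy:4.1f}) ft -> gain {g:.3f}")
```

The constant-power normalization keeps overall loudness steady as a ripple travels across the room, so movement reads as spatial motion rather than a volume swell.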

Bonus video below: an early prototype that did include visuals. After days of looking at emulated knobs and faders, I'd say it speaks to some real possibilities for musical interface and expression.