Turning music and sound into three-dimensional worlds often yields something that feels like a trip through space. But this feels like a real trip. Through pulsing, glowing starfields, “Versum”’s audiovisual movements are brain-bendingly transformative. Artist Tarik Barri has created an integrated world of sound and image that makes the interface and the compositional realms seamless. It seems as though this really is a musical universe, through whose harmonies of the spheres you can fly. Boldly going, indeed.
Ingredients: Max/MSP/Jitter, Processing, Java, SuperCollider, GLSL [the 3D shading language], and … some serious skill and time, I imagine.
The work has been in development for some years (not surprisingly, given the results). But it surfaced again as we brought up the 3Dconnexion SpaceNavigator hardware as a practical controller for 3D. See Create Digital Motion:
Look at Me, I’m Flying: SpaceNavigator Hardware + Blender
Tarik’s work resurfaced after a presentation in the UK. Reader janklug writes:
I’m just back from the M4_u Max/MSP/Jitter conference in Leicester (was great, btw), where Tarik Barri presented his project ‘Versum’, both as an installation and as a performance.
The user (and in the case of the performance, Tarik) navigates through this incredible 3D-space-sequencer-universum with the help of a SpaceNavigator; glowing objects floating in this space produce sound, and as you approach them, they even give this nice Doppler effect…
It was totally amazing to be able to float between pulsing rhythm-planet-objects and shiny drone-beams; navigation was easy and natural. Tarik uses a combination of Processing and Max/MSP; don’t know which one the SpaceNavigator is connected to.
Having tried this, I immediately ordered one; I think it also could be a great interface for M4L…
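That Doppler effect janklug describes falls out of the physics of a moving listener. Versum itself is built in Max/MSP, Processing, and friends, but purely as an illustrative sketch (the function name and velocity figure here are my own, not Tarik's), the pitch shift heard while closing on a sound source looks like this:

```python
SPEED_OF_SOUND = 343.0  # meters per second, in air at roughly 20°C

def doppler_ratio(rel_velocity):
    """Pitch ratio heard by a listener moving toward a stationary source.

    rel_velocity is the closing speed in m/s (positive = approaching).
    Classic moving-observer Doppler: f' = f * (c + v) / c.
    """
    return (SPEED_OF_SOUND + rel_velocity) / SPEED_OF_SOUND

# Flying toward a sounding object at 20 m/s raises its pitch slightly;
# flying away lowers it — exactly the swoosh you hear passing an object.
approaching = doppler_ratio(20.0)   # > 1.0, pitch bends up
receding = doppler_ratio(-20.0)     # < 1.0, pitch bends down
```

In a real engine you would recompute the closing speed every audio block from the camera and object positions, and resample or pitch-shift each voice by that ratio.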
Significantly, it’s really the act of flying that controls the music. That remains interactive, but it’s the movement through the three-dimensional space that determines what you hear. As the artist explains:
This virtual world is seen and heard from the viewpoint of a moving virtual camera with virtual microphones attached. This camera, controlled in realtime by means of a joystick (or any other kind of controller) moves through space, similar to how first person shooter games work. Within this space, I place objects that can be both seen and heard, and like in reality, the closer the camera is to them, the louder you hear them. So when the camera moves past several visual objects, you simultaneously hear several sounds fading in and out. Consequently, the way the camera travels past them actually causes melodies and compositional structures to be seen and heard.
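The distance-to-loudness mapping Tarik describes is, at its simplest, an attenuation curve applied per object as the camera moves. His implementation lives in Max/MSP and Java; this is only a minimal Python sketch of the idea, with a made-up `rolloff` parameter and an inverse-distance curve chosen for illustration:

```python
import math

def gain(camera, obj, rolloff=1.0):
    """Loudness of one object as heard from the camera position.

    Inverse-distance attenuation: full volume at the object itself,
    fading smoothly as the camera flies away. Positions are (x, y, z).
    """
    d = math.dist(camera, obj)
    return 1.0 / (1.0 + rolloff * d)

# Fly the camera past an object on a straight line: its sound fades in,
# peaks at the closest approach, then fades out again. Several objects,
# each with its own curve, overlap into a melody as the camera travels.
flight_path = [(-4.0, 0.0, 0.0), (-2.0, 0.0, 0.0), (0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
obj = (0.0, 1.0, 0.0)
gains = [gain(p, obj) for p in flight_path]
```

Summing such per-object gains over many objects is what turns a camera trajectory into a compositional structure: the flight path literally is the score.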
The visual position of each object coincides with the panning of its sound: objects to the right of the camera will also be heard on the right, and those behind the camera will be heard from behind in case a surround speaker setup is used. This principle also applies to the Z-axis, meaning that sounds can be heard coming from above and below if the speaker setup supports it.
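The panning principle is equally direct: transform each object into camera space and read the pan straight off its lateral position. Again, a hypothetical Python sketch rather than Versum's actual code, assuming the object position has already been expressed in camera coordinates (x to the right, y up, z forward):

```python
import math

def pan_and_elevation(rel):
    """Map a camera-relative position to stereo/surround placement.

    rel: (x, y, z) with x right, y up, z forward of the camera.
    Returns (pan, elevation), each normalized to -1.0 .. +1.0:
    pan < 0 is left, pan > 0 is right; elevation < 0 is below.
    A negative z means the object sits behind the listener, which a
    surround setup would route to the rear channels.
    """
    x, y, z = rel
    r = math.sqrt(x * x + y * y + z * z) or 1.0  # avoid div-by-zero at origin
    return x / r, y / r

# An object dead right of the camera pans hard right; one straight
# ahead sits in the center; one below the camera gets a negative elevation.
hard_right = pan_and_elevation((1.0, 0.0, 0.0))
dead_ahead = pan_and_elevation((0.0, 0.0, 5.0))
below = pan_and_elevation((0.0, -2.0, 0.0))
```

The same normalized components extend naturally to the Z-axis case Tarik mentions: with height speakers available, the elevation term places sounds above and below the listener.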
That’s the essential question, to me, when looking at 3D environments for music. How will the dimensionality interact with the music? Is it something spatial, or will there be other sorts of interactions? (New Zealander-turned-Berliner Julian Oliver worked extensively with game engines, for instance. One solution for him was modifying the “gun” in those games to be an implement for doing things in the space, turning swords into plowshares after the fact by making the gun produce music rather than kill virtual entities.)
So, now you’ve seen some of the technical demonstration. But Tarik uses his work as an environment in which to make audiovisual performances. Here’s what some actual live playing looks like, in a beautiful, meditative piece called “Eleven”:
In fact, the biggest challenge posed by a piece this awesome is that it demands an immersive environment, not just the small, rectangular screens that are often all festivals and venues can afford.