It’s all real – in a manner of speaking. And it’s all real-time. But just what is a live performance made with cameras, gestures, and projection? It’s worth watching The V Motion Project and pondering those possibilities amidst the flashy eye candy.
It’s certainly optically impressive. It’s music made to be watched (and, in the video, filmed with iPhones and whatnot). Watch a second time, and you wonder: as we reach a new peak of maturity, decades into alternative interface design, what will come next?
To say that this is a kind of special effect is not a criticism. Spectacle is part of the message here. Instead of tweaking knobs and controls, or, indeed, fingering frets that people can’t see on a guitar, full-body Kinect performance could accurately be described as a kind of futuristic circus act. You might wonder which came first – big-gesture computer vision tracking, or “dubstep” music that distorts sounds by pushing live effect parameters to their extremes.
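The basic move behind that kind of gesture-driven “wobble” is simple: take a tracked joint position and map it onto an effect parameter. As a minimal sketch – assuming the skeleton tracker already gives you joint coordinates normalized to 0.0–1.0, with all names and ranges here illustrative rather than taken from the project – it might look like this:

```python
# Hypothetical sketch: mapping a tracked hand position to an effect
# parameter, the basic move behind gesture-controlled "wobble" effects.
# Assumes joint coordinates already normalized to 0.0-1.0 by the tracker.

def hand_to_param(hand_y, lo=200.0, hi=8000.0):
    """Map normalized hand height to a filter cutoff in Hz.

    An exponential curve feels more natural than a linear one,
    because cutoff frequency is perceived roughly logarithmically.
    """
    hand_y = max(0.0, min(1.0, hand_y))  # clamp noisy tracking data
    return lo * (hi / lo) ** hand_y

# Hand at the bottom -> low cutoff; hand raised -> cutoff sweeps up.
print(round(hand_to_param(0.0)))  # 200
print(round(hand_to_param(1.0)))  # 8000
```

Push that mapping to its extremes at performance speed and you get exactly the big, theatrical parameter sweeps the music calls for.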
What it isn’t is “fake” – that is, this isn’t just someone waving arms to a pre-produced track. Ableton Live provides the soundtrack. Countless hours of work went into the project, across many moving pieces, but the essential tool for working with Kinect comes from developer Ben Kuper.
That tool is itself an embodiment of the maturity of Kinect development, which began as hacks and proofs of concept. Not only is a big ad firm (BBDO, anyone?) hopping on board here, but the sophistication of the mapping between gesture, sound, and visuals has improved. Kinect creation is still very much a matter of getting intimate with development if you want to master the relationship of gesture and sound. But that development doesn’t have to reinvent the wheel every single time.
And this collaboration is all about the artists. From the description, some of the team involved:
This project combines the collective talents of musicians, dancers, programmers, designers and animators to create an amazing visual instrument. Creating music through motion is at the heart of this creation and uses the power of the Kinect to capture movement and translate it into music which is performed live and projected on a huge wall.
We created and designed the live visual spectacle with a music video being produced from the results. We wanted it to be clear that the technology was real and actually being played live. The interface plays a key role in illustrating the idea of the instrument, and we designed it to highlight the audio being controlled by the dancer. Design elements like real-time tracking and samples being drawn on as they are played all add to the authenticity of the performance. The visuals are all created live, and the music video is essentially a real document of the night.
Check out the tech behind the project here:
Paul Sanderson – fugitive.co.nz
Agency: Colenso BBDO
That blog post goes into excruciatingly fine detail (in a good way). But just a few bits will make sense to Live users and Kinect hackers alike in pretty short order. Here’s the Live set itself – the song is broken into a fair number of stems, clips, and effects:
And here’s the video I actually like better than the flashy promo at top. It shows not just one neat trick, but a suite of controller tools working in harmony, each with a novel mode of interaction and graphical representation.
The visuals are compelling, but I’m intrigued by the musical element precisely because it gets at the heart of the interaction.
As amazing as this looks, it also presents some challenges – or, if you like, some opportunities for future work. In order to pull off this big ensemble of controllers, the scheme for setting up the track becomes more rigid. For the brief here, that’s perfect: you want a human bobbing around amid sci-fi visuals, acting out this particular song. But the price of that wonderful spectacle is flexibility. That is, while we’ve traded in our boring knobs and faders, we wind up with something that’s potentially less interesting to actually play … even if it looks the business while we’re doing it. I put that out there as a challenge more than a criticism, and certainly many Kinect experimenters are playing with this technology to see if they can make something more playable. (As to the odds of success, the jury is still out.)
Regardless, perhaps beyond the specific Kinect technology or even the style of music or interaction, what we’re seeing is a convergence of media. It’s performance that involves choreography, visuals, sound, and sensed gestural input.
It’s an impressive work. (Go, New Zealand!) It’s all-stops-pulled audiovisual immersion.
And pulling all those stops may be the surest way to really test what this stuff is about.
For somewhat gentler work, see The Human Equalizer, below. (And thanks to everyone who sent this in, but particularly Dave Astles and Fiber Festival’s Jarl Schulp.)
The Human Equalizer [ T.H.E. 1.0 ] is an interactive audio-visual installation.
T.H.E.1.0 is about the ability to generate your own audio experience. A virtual point-cloud of musical data is constructed out of digital matter, combined with physical exercise.
You can visualise the installation, the field of interaction, as a multitude of buttons.
You are the musician: your x, y, z coordinates become T.H.E. buttons, potentiometers, and the strings of your “air guitar”.
The essence of the installation is a virtual point-cloud of information: integers, floats, music notes, tones, waves, vibrations and light. Your body is a set of values which you can connect to the ‘T.H.E.’ system.
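That “multitude of buttons” idea can be sketched in a few lines: carve the interaction space into a 3-D grid of cells, and let whichever cell a tracked joint occupies trigger a note. This is a hypothetical illustration of the concept, assuming normalized coordinates; the grid size, note numbers, and function names are my assumptions, not details from T.H.E. 1.0 itself.

```python
# Hypothetical sketch of a "point-cloud of buttons": a 3-D grid of
# virtual cells, each mapped to a note. A body joint at normalized
# (x, y, z) coordinates "presses" the cell it currently occupies.
# Grid size and note numbers are illustrative assumptions.

GRID = 4            # 4 x 4 x 4 virtual buttons
BASE_NOTE = 36      # MIDI-style note number for cell (0, 0, 0)

def cell_for(x, y, z):
    """Return the grid cell a joint at normalized (x, y, z) occupies."""
    clamp = lambda v: max(0, min(GRID - 1, int(v * GRID)))
    return clamp(x), clamp(y), clamp(z)

def note_for(cell):
    """Map a grid cell to a MIDI-style note number."""
    cx, cy, cz = cell
    return BASE_NOTE + cx + cy * GRID + cz * GRID * GRID

# A hand near the centre of the space lands in cell (2, 2, 2):
cell = cell_for(0.6, 0.55, 0.7)
print(cell, note_for(cell))  # (2, 2, 2) 78
```

Treat the x axis as potentiometer-style continuous input instead of a cell index, and you have the “buttons, potentiometers, and strings” the description refers to.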
[ T.H.E.1.0 was nominated for the YOUNG-BLOOD-AWARD-2011 and was exhibited at the GOGBOT festival 2011 ]
An installation by Bram Snijders
Audio infrastructure by Tijs Ham