You might imagine sound in space, or dream up gestures that traverse unexplored sonic territory. But actually building it is another matter. Kinect – following a long line of computer vision applications and spatial sensors – lets movement and gestures produce sound. The challenge of such instruments has long been that learning to play them is tough without tactile feedback. Thereminists learn to play through their instrument's extremely precise sensing and its sonic feedback.
In AHNE (Audio-Haptic Navigation Environment), sonic feedback is essential, but so, too, is feel. Haptic vibration lets you know when you’re approaching sounds — essential, as they’re invisible. The work of Finland-based DJ/VJ Matti Niinimäki, aka MÅNSTERI (“Mons-te-ri”), the project is part of research undertaken by the SOPI Research Group at Media Lab Helsinki. Like some sort of sound sorcerer, the user is entirely dependent on movement, feel, and sound as they move unseen sound sources through space. (More technical details below.)
It’s labeled, as always, “proof of concept.” The creator promises more videos to come; we’ll be watching as this evolves, as it looks terribly promising.
Below, “Tension” is a fair bit simpler: users walk through a space and control synth parameters. (“You are the knob,” one might say, though I don’t suggest shouting that at someone you don’t know. They could take it the wrong way.)
This is a demonstration video of AHNE – Audio-Haptic Navigation Environment.
It is an audio-haptic user interface that allows the user to locate and manipulate sound objects in 3d space with the help of audio-haptic feedback.
The user is tracked with a Kinect sensor using the OpenNI framework and OSCeleton (github.com/Sensebloom/OSCeleton).
The user wears a glove that is embedded with sensors and a small vibration motor for the haptic feedback.
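The core interaction loop – tracked hand position in, vibration intensity out – can be sketched in a few lines. This is a hypothetical reconstruction, not the project’s actual code: the `(x, y, z)` coordinates stand in for the joint positions OSCeleton streams over OSC, and the sound-object position, trigger radius, and linear ramp are all assumptions.

```python
import math

def haptic_intensity(hand, sound_obj, radius=0.5):
    """Map the hand's distance to an invisible sound object
    onto a vibration-motor intensity in 0.0-1.0.

    hand, sound_obj: (x, y, z) positions in metres (hypothetical
    coordinates, as a skeleton tracker might deliver them).
    radius: assumed distance at which vibration begins to ramp up.
    """
    d = math.dist(hand, sound_obj)
    if d >= radius:
        return 0.0  # Too far away: motor stays off.
    # Linear ramp: full vibration on contact, silent at the edge.
    return 1.0 - d / radius
```

In a real setup, the returned value would be scaled to a PWM duty cycle for the glove’s vibration motor; the closer the hand, the stronger the buzz.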
This is just the first proof-of-concept demo. More videos coming soon.
HEI Project 2011
SOPI Research Group
Aalto University School of Art and Design
A brief video showing Tension, an interactive spatial sound installation for multiple users.
A person enters the space and a generative sound is assigned to that person. The sound pans around in the 6-channel speaker system following the user in the space.
Up to 5 users can use the installation at the same time. Each person modifies the other users’ sounds based on distance: the closer you are to other people, the more the tension in the sound increases.
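The distance-to-tension mapping described above might look something like this. A minimal sketch, not the installation’s code: the floor coordinates, the 4-metre falloff distance, and the choice to take each user’s tension from their nearest neighbour are all assumptions for illustration.

```python
import math
from itertools import combinations

def tension_levels(positions, max_dist=4.0):
    """Per-user tension in 0.0-1.0, rising as other users approach.

    positions: dict of user_id -> (x, z) floor position in metres
    (hypothetical coordinates from a tracking system).
    max_dist: assumed distance beyond which others add no tension.
    """
    tension = {uid: 0.0 for uid in positions}
    for a, b in combinations(positions, 2):
        d = math.dist(positions[a], positions[b])
        t = max(0.0, 1.0 - d / max_dist)
        # Each user's tension is driven by their closest neighbour.
        tension[a] = max(tension[a], t)
        tension[b] = max(tension[b], t)
    return tension
```

Two users standing 2 m apart would each get a tension of 0.5, which could then drive a synth parameter per user.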
Side note: watching these two videos makes me want to consult with someone on non-verbal expression, posture, and stage presence. That criticism is aimed at myself – I could use it. Perhaps we need an all-physical, unplugged music event for laptopists, controllerists, and electronic musicians. And I can at least say I’ve had some experience in this, working in the dance program at my undergraduate alma mater, Sarah Lawrence. Anyone game? (Sounds like something we could do while CDM is in Berlin in the fall.)
For their part, the Finnish researchers are working with dancers, along with Nokia Research Center. (Sadly, I can’t find documentation.) But I think interesting things happen when we non-dancers learn movement technique, too.