We’ve talked in the past about the idea of user interfaces and visual output merging. Instead of a UI on one screen and visuals on another, the idea is that the interface itself melds into the output. I can think of few better examples of where this is headed than a video recently posted on Vimeo by user nucode. Working with a projected, camera-tracked multi-touch interface and audiovisual loops in custom Flash-based software, nucode manipulates samples as though on an alien, futuristic instrument.
The result: a sequencer that has no timeline and seamlessly pulls content from online sources:
- Audiovisual loops, set as rotating circles/bubbles, palettes of sounds and visuals
- Sequence events together by attaching bubbles to one another – no timeline needed
- Gesture triggering of YouTube video search (make a gesture, get a video from YouTube)
- Simple real-time audio effects (low-pass filter, echo, and so on – it sounds like there’s either some live synthesis or more sophisticated scrubbing going on, too)
- Runs in the browser on any OS with Flash Player, built with Flash and ActionScript 3
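The bubble-chaining idea is essentially a linked list of loops: playback order comes from which bubble is attached to which, not from positions on a timeline. Here's a rough sketch of that structure – all names are hypothetical, and plain JavaScript stands in for the ActionScript 3 original:

```javascript
// Sketch of timeline-free sequencing: each bubble holds an audiovisual
// loop, and attaching bubbles to one another defines playback order.
// These names are illustrative, not taken from nucode's software.
class Bubble {
  constructor(loopName) {
    this.loopName = loopName; // the loop this bubble represents
    this.next = null;         // bubble attached after this one, if any
  }
  attach(bubble) {
    this.next = bubble; // the chain itself replaces a timeline
    return bubble;      // allow fluent chaining: a.attach(b).attach(c)
  }
  // Walk the chain and return loop names in play order.
  playOrder() {
    const order = [];
    for (let b = this; b !== null; b = b.next) order.push(b.loopName);
    return order;
  }
}

// Chain three loops; the sequence is just the chain.
const kick = new Bubble("kick");
kick.attach(new Bubble("bass")).attach(new Bubble("vocal"));
kick.playOrder(); // → ["kick", "bass", "vocal"]
```

Detaching or re-attaching a bubble reorders the sequence on the fly, which is exactly why no timeline is needed.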
The camera/projection rig:
- IR emitter: IR laser and battery pack
- IR camera: a webcam with its IR cut filter removed (the filter is normally there for color accuracy under visible light; take it out and the sensor sees infrared)
- Back projection: I think all of this is backprojected onto translucent film. What’s nice about that is you get the image behind him as well as in front, which looks very cool; see the second video below.
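For the curious, camera tracking in rigs like this typically works the way NUI Group tutorials describe it: fingers lit by the IR source show up as bright blobs in the IR camera’s frame, so the software thresholds each frame and finds blob centroids to get touch points. A minimal sketch of that step – the flat grayscale-array frame format and threshold value are assumptions for illustration, not nucode’s actual code:

```javascript
// Sketch of IR blob tracking: threshold a grayscale frame and find the
// centroid of each bright connected region (each one a fingertip touch).
// Frame format and threshold are illustrative assumptions.
function findBlobs(frame, width, height, threshold = 200) {
  const seen = new Array(width * height).fill(false);
  const blobs = [];
  for (let i = 0; i < frame.length; i++) {
    if (frame[i] < threshold || seen[i]) continue;
    // Flood-fill this bright region, accumulating pixel coordinates.
    let sumX = 0, sumY = 0, count = 0;
    const stack = [i];
    seen[i] = true;
    while (stack.length) {
      const p = stack.pop();
      const x = p % width, y = Math.floor(p / width);
      sumX += x; sumY += y; count++;
      // Visit 4-connected neighbours that are also above threshold.
      for (const [nx, ny] of [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]]) {
        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
        const q = ny * width + nx;
        if (!seen[q] && frame[q] >= threshold) { seen[q] = true; stack.push(q); }
      }
    }
    blobs.push({ x: sumX / count, y: sumY / count, size: count });
  }
  return blobs;
}
```

Each centroid becomes a touch point; tracking centroids across frames gives you the drags and gestures that drive the interface.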
It’s good stuff. If you’re interested in this sort of multi-touch interface, NUI Group is an indescribably awesome resource, both for seeing the kinds of work people are doing and for learning about some of the underlying technology.
But beyond just the question of cool multi-touch interfaces, I think this issue of the “invisible interface” (or exposed interface, depending on how you look at it) is one ripe for exploration. Previously on that topic:
I’m sure we can think of more examples. On my recent Processing-based rig, for instance, I ultimately decided to do away with any sort of heads-up interface, choosing instead to map parameters to a few controls and focus on the output. But there’s a wide range of possibilities – and for visualists and audiovisualists, the appeal of blending control and output is obvious, since what you see is such a big part of what it is we do.
Thanks to vade for the tip!