We live in three-dimensional worlds, and physically we can move easily through 3D space. But mapping our intuitive sense of space, movement, and position onto the computer is a massive challenge. Our interfaces are primarily two-dimensional, and even three-dimensional interfaces can lack the sense of space that comes naturally to a child.
Sebastian Pirch writes in with a new project combining an array of open source tools to enable gestural modeling in 3D space. The heady brew of tools:
A simple mesh modeller that uses the Kinect's depth perception and homemade data gloves for more real-world-oriented user interaction in virtual 3D space.
Realized only with open source software:
Pure Data, GEM, OpenNI, Ubuntu.
First try, but already pretty usable and fun to play with.
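To give a rough sense of how that pipeline can hang together – and this is a generic sketch of the idea, not Sebastian's actual patch – a small script can forward tracked hand positions to Pd over OSC, where a patch (using mrpeach's [udpreceive]/[unpackOSC] or the newer [oscparse] objects) drives a GEM mesh. The get_hand_position() stub, port number, and OSC address here are my own placeholders for whatever OpenNI-based tracker is in use.

```python
# Hypothetical bridge: forward depth-camera hand positions to Pure Data as OSC.
# Requires python-osc (pip install python-osc). get_hand_position() is a stub
# standing in for a real OpenNI/NiTE hand tracker.
import time
import random
from pythonosc.udp_client import SimpleUDPClient

PD_HOST, PD_PORT = "127.0.0.1", 9000  # match the [udpreceive 9000] in the Pd patch

def get_hand_position():
    """Placeholder: replace with real hand-tracking coordinates (metres)."""
    return random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.5, 3.0)

client = SimpleUDPClient(PD_HOST, PD_PORT)

while True:
    x, y, z = get_hand_position()
    client.send_message("/hand/position", [x, y, z])  # Pd unpacks this and maps it into GEM space
    time.sleep(1 / 30)  # roughly the Kinect's 30 Hz depth frame rate
```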
We’ve seen Sebastian’s work before, bending the free software visual programming environment Pd/GEM to new limits; see:
Powerful 3D Meets Visual Patching: Inside the Free GEM Engine for Pd
But this is a fascinating new take. What I find interesting, and perhaps most promising, is the combination of the Kinect’s camera-based location tracking with physical input. Physical input provides everything the camera system lacks: tactile feedback, precision, and the ability to separate an action (adding a node, for instance) from every other gesture you might make. Imagine, by contrast, if your car had no steering wheel, gear shift, brake pedal, or accelerator pedal. When you take an action in an interface, you want some sort of object to engage with, not only for its tactile feedback but because it adds clarity to your intention.
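To make that separation concrete, here is a small sketch of my own (again, not the project's code): continuous hand tracking supplies position, but a new mesh node is committed only on the rising edge of a glove button, read here over a serial link from a hypothetical Arduino-style glove. The "0"/"1"-per-line protocol, port name, and tracker stub are all assumptions for illustration.

```python
# Sketch: gate a discrete modelling action on a physical control.
# A vertex is added only when the glove button goes from released to pressed,
# so ordinary hand movement never triggers it by accident. Requires pyserial.
import serial  # pip install pyserial

glove = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # assumed port and baud rate
mesh_vertices = []
button_was_pressed = False

def read_button():
    """True while the glove button is held (assumed '0'/'1' lines from the glove)."""
    return glove.readline().strip() == b"1"

def current_hand_position():
    """Placeholder for the depth-camera hand tracker (x, y, z in metres)."""
    return (0.0, 0.0, 1.0)

while True:
    pressed = read_button()
    if pressed and not button_was_pressed:
        # Rising edge: the user meant *this* gesture, so commit a node here.
        mesh_vertices.append(current_hand_position())
        print(f"added node #{len(mesh_vertices)} at {mesh_vertices[-1]}")
    button_was_pressed = pressed
```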
This is clearly worth doing as a proof of concept; I do still wonder what the most practical means of 3D modeling and design may be, as we keep struggling with those inputs. I’d love to hear feedback, especially from people who do 3D modeling as a day job.
In the meantime, keep the great ideas coming.