Elliot Woods writes in with an extraordinary proof of concept: it couples the depth-sensing capabilities of Microsoft’s Kinect with projection mapping to effectively “scan” a 3D scene. From the looks of the potential here, it’s almost Holodeck good.

Kinect hack + projection mapping = augmented reality (+hadoukens): Using the Kinect camera, we scan a 3D scene in realtime. Using a video projector, we project onto a 3D scene in realtime. By combining these, we can reproject onto geometry to directly overlay image data onto our surroundings, contextual to their shape and position.
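To unpack what that reprojection means in practice: the Kinect gives you a depth value per pixel, which can be back-projected into a 3D point, and a calibrated projector is essentially a camera in reverse, so pushing that point through the projector’s model tells you which projector pixel lands on it. Here’s a minimal sketch of that math, and only a sketch: this is not Kimchi and Chips’ code, the intrinsic values and projector offset are placeholder assumptions, the function names (depthPixelToWorld, worldToProjectorPixel) are hypothetical, and a real calibration would use a full extrinsic rotation + translation rather than a bare offset.

```cpp
// Sketch of depth-camera-to-projector reprojection with placeholder values.
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Hypothetical Kinect depth intrinsics (focal lengths, principal point).
const float kFx = 594.0f, kFy = 591.0f, kCx = 320.0f, kCy = 240.0f;

// Back-project depth pixel (u, v) with depth z (metres) into a 3D point
// in the depth camera's coordinate frame.
Vec3 depthPixelToWorld(float u, float v, float z) {
    return { (u - kCx) * z / kFx, (v - kCy) * z / kFy, z };
}

// Hypothetical projector intrinsics plus a bare translation relative to
// the Kinect (a real setup would use a calibrated extrinsic matrix).
const float pFx = 1500.0f, pFy = 1500.0f, pCx = 512.0f, pCy = 384.0f;
const Vec3 projOffset = { 0.2f, 0.0f, 0.0f }; // projector 20 cm to the side

// Project a 3D point into projector pixel coordinates: this is the
// projector pixel that illuminates that point on the real surface.
Vec2 worldToProjectorPixel(Vec3 p) {
    Vec3 q = { p.x - projOffset.x, p.y - projOffset.y, p.z - projOffset.z };
    return { pFx * q.x / q.z + pCx, pFy * q.y / q.z + pCy };
}

int main() {
    // Example depth sample: pixel (400, 300) reads 1.5 m from the sensor.
    Vec3 world = depthPixelToWorld(400, 300, 1.5f);
    Vec2 proj  = worldToProjectorPixel(world);
    std::printf("world (%.3f, %.3f, %.3f) -> projector pixel (%.1f, %.1f)\n",
                world.x, world.y, world.z, proj.x, proj.y);
}
```

Run that mapping for every depth pixel and you have a lookup from scene geometry to projector pixels, which is what lets you paint imagery onto surroundings matched to their shape and position.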

As seen in the video, we can create a virtual light source which casts light onto the surrounding surfaces as a real light source would.
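How might that virtual light work? One plausible reading, my assumption rather than a description of their implementation: for each scanned surface point, estimate a normal from neighboring depth samples, compute simple Lambertian shading from the virtual light’s position, and project that brightness back onto the point. A toy version of the shading step, with the same caveats about placeholder names and values:

```cpp
// Toy Lambertian shading of a scanned surface point from a virtual light.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Brightness falls off with the cosine of the angle between the surface
// normal and the direction to the light, and with squared distance.
// The result would be drawn into the projector pixel covering the point.
float shade(Vec3 point, Vec3 normal, Vec3 lightPos, float intensity) {
    Vec3 toLight = sub(lightPos, point);
    float dist2 = dot(toLight, toLight);
    float lambert = dot(normalize(toLight), normalize(normal));
    if (lambert < 0.0f) lambert = 0.0f;  // surface faces away: no light
    return intensity * lambert / dist2;  // inverse-square falloff
}

int main() {
    Vec3 surface = { 0.0f, 0.0f, 1.5f };  // a point from the depth scan
    Vec3 normal  = { 0.0f, 0.0f, -1.0f }; // facing back toward the sensor
    Vec3 light   = { 0.5f, 0.5f, 0.5f };  // the virtual light's position
    std::printf("brightness: %.3f\n", shade(surface, normal, light, 1.0f));
}
```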

At Kimchi and Chips we are developing new tools so we can create new experiences today. We share these new techniques and tools through open source code, installations and workshops.

More on the Kimchi and Chips blog

You keep sharing, guys. Looks utterly brilliant. I hope to keep tabs on this project in particular. Also, great name.

Updated: Kimchi and Chips is a girl + guy, Seoul + Manchester duo, Mimi and Elliot. This makes me no less interested in trying the combination as actual food. I’m on it.

This video nicely shows some of the process of making this work: