This video, featuring research that uses Kinect RGB-D “scans” of three-dimensional space, starts out innocently enough. Microsoft’s depth-sensing camera gets pointed at the world, resulting in some very impressive 3D scans.

Then you get about halfway through. For my part, at least, I nearly fell out of my chair.

By constructing a 3D model of the world it sees, even with a wildly shaking camera, the project can produce virtual 3D versions of real objects. Add in textures, just past the three-minute mark, and things get really wild. With lighting, even more so. Then, with physics, throwing thousands of particles against the model can make virtual snow explode onto real-world scanned objects, or let you pick up a virtual teacup. It’s… you know what, just watch it.
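For the curious: the core trick behind this kind of live reconstruction is fusing each noisy depth frame into a voxel grid holding a truncated signed distance function (TSDF), updated as a weighted running average so noise cancels out over time. Here is a minimal sketch of that fusion step in Python with NumPy; the grid resolution, truncation distance, and the simplified orthographic “camera” are my own assumptions for illustration, not the paper’s actual pipeline (which also tracks the moving camera with ICP).

```python
import numpy as np

TRUNC = 0.1  # truncation distance in metres (assumed value)

def integrate(tsdf, weight, depth, z_coords):
    """Fuse one depth frame into the TSDF volume.

    tsdf, weight: (X, Y, Z) voxel arrays
    depth:        (X, Y) depth map in metres (orthographic, for simplicity)
    z_coords:     (Z,) metric z position of each voxel slice
    """
    # Signed distance from each voxel to the observed surface:
    # positive in front of the surface, negative behind it.
    sdf = depth[:, :, None] - z_coords[None, None, :]
    d = np.clip(sdf / TRUNC, -1.0, 1.0)   # truncate to [-1, 1]
    valid = sdf > -TRUNC                  # skip voxels far behind the surface
    w_new = valid.astype(float)
    # Weighted running average: each new frame refines the model,
    # and per-frame sensor noise averages out.
    tsdf[:] = np.where(
        valid,
        (tsdf * weight + d * w_new) / np.maximum(weight + w_new, 1e-9),
        tsdf,
    )
    weight[:] = weight + w_new
    return tsdf, weight

# Toy scene: a flat wall at z = 0.5 m, seen through 20 noisy depth frames.
X = Y = 4
Z = 32
z_coords = np.linspace(0.0, 1.0, Z)
tsdf = np.ones((X, Y, Z))
weight = np.zeros((X, Y, Z))

rng = np.random.default_rng(0)
for _ in range(20):
    noisy = np.full((X, Y), 0.5) + rng.normal(0.0, 0.01, (X, Y))
    integrate(tsdf, weight, noisy, z_coords)

# The zero crossing of the fused TSDF recovers the wall position.
surface_idx = np.argmin(np.abs(tsdf[0, 0]))
print(z_coords[surface_idx])  # close to 0.5
```

The surface itself is then extracted as the TSDF’s zero crossing (via raycasting or marching cubes), which is what gives those clean meshes in the video despite the jittery handheld camera.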

If the team is out there, we’d love to talk to you. The researchers are identified below. And yes, it’s about time Microsoft did a better job of taking its pure research and bringing it to market; the potential is stunning. (Microsoft Research Cambridge has a fantastic track record, though so, too, do the other institutions involved here. Brilliant work by the individuals; great stuff.)

It’s beginning to look a lot like SIGGRAPH. (Anyone out there who wants to report back, get in touch!)

“KinectFusion: Real-Time Dynamic 3D Surface Reconstruction and Interaction”

Institutions:
Microsoft Research Cambridge
Imperial College London
Newcastle University
Lancaster University
University of Toronto

Researchers (some dual-affiliated):
Shahram Izadi
Richard Newcombe
David Kim
Otmar Hilliges
David Molyneaux
Pushmeet Kohli
Jamie Shotton
Steve Hodges
Dustin Freeman
Andrew Davison
Andrew Fitzgibbon

See also:
Engadget coverage
developerfusion.com coverage
PDF from Microsoft Research

Thanks to Andrew Lovett-Barron and Jeremy Bailey on Facebook, through whom I found this one!