It’s a new world for media artists, one in which we look to the latest game console news because it impacts our art-making tools.
And so it is that, along with a new Xbox, Microsoft has a new Kinect.
The new Kinect uses standard infrared tracking (ideal for in-the-dark footage and accurate tracking), but also returns RGB imagery. It’s 1080p, at 30-60 fps (it seems tracking runs at 30 fps and video at 60, but I’m reading conflicting reports). Hands-on reports say latency is reduced. If the finished product is consistent with rumors, that could be owing to more tracking analysis done in hardware; once you try to do the analysis on the computer (or console), you hit additional bottlenecks. Musical readers have much greater expectations of low latency than gamers do, though, so it’ll be interesting to see this in practice.
The big news is tracking that gets closer to your body, breaking analysis into finer detail. Wired, granted exclusive early access, goes into some detail about how the tracking tech has changed. Instead of deriving a depth map from a 3D picture built out of two separate infrared-based camera images, the new tech uses “modulated” IR light. Given that this is new technology, I’m not yet clear on the specifics, and would love some reader feedback. (Ahem.)
The original sensor mapped people in a room using “structured light”: It would send out infrared light, then measure deformities in the room’s surfaces to generate a 3-D depth map. However, that depth map was lo-res to the degree that clothing and couch cushions were often indistinguishable. The new model sends out a modulated beam of infrared light, then measures the time it takes for each photon to return. It’s called time-of-flight technology, and it’s essentially like turning each pixel of the custom-designed CMOS sensor into a radar gun, which allows for unprecedented responsiveness—even in a completely dark room.
Xbox One Revealed [Wired.com]
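The time-of-flight idea in Wired’s description boils down to one piece of arithmetic: light travels to the subject and back, so depth is half the round trip. A minimal sketch (the 20-nanosecond figure is just an illustrative number, not a Kinect spec):

```python
# Time-of-flight depth: the emitted light travels to the object and back,
# so depth = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Depth in meters from a measured round-trip travel time."""
    return C * round_trip_seconds / 2.0

# A subject roughly 3 m away returns photons in about 20 nanoseconds:
print(tof_depth(20e-9))  # ≈ 2.998 m
```

The tiny numbers involved are exactly why the custom CMOS sensor matters: resolving centimeter-scale depth per pixel means resolving sub-nanosecond timing differences, which is what the “radar gun” analogy is getting at.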
Say what? Well, the basic idea is that, by using a modulated beam of light, you can determine the depth of an object by measuring the phase shift between the emitted and received light. In fact, this is very similar to the way a single IR or (with sound) ultrasonic rangefinder works, only with a full pixel array instead of a single emitter and receiver. You can read a paper on the subject, or follow a forum discussion on the B3D board. (Thanks to Sam Tuke for posting this. Specifics could still be interesting.)
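To make the phase-shift version concrete, here’s a rough numerical sketch. This is the textbook continuous-wave time-of-flight relation, not anything confirmed about Microsoft’s actual implementation; the 30 MHz modulation frequency is a made-up example:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift between emitted and received modulated light.

    The round trip delays the returning wave by phase_shift_rad radians;
    one full cycle (2*pi) corresponds to one modulation period, so:
        depth = c * phase_shift / (4 * pi * f_mod)
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def max_unambiguous_range(mod_freq_hz: float) -> float:
    """Beyond this depth the phase wraps past 2*pi and ranges alias."""
    return C / (2.0 * mod_freq_hz)

# With a hypothetical 30 MHz modulation frequency:
f = 30e6
print(max_unambiguous_range(f))          # ~5 m of unambiguous range
print(depth_from_phase(math.pi / 2, f))  # a quarter-cycle shift -> ~1.25 m
```

Note the trade-off this exposes: a higher modulation frequency gives finer depth resolution per radian of phase, but shrinks the unambiguous range, which is presumably one of the tuning decisions behind a living-room sensor.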
The upshot to all of this is better tracking:
- More people can be tracked independently – up to six, says Microsoft – without having to add more Kinects (as some hackers did). And that includes tracking people even if they cross one another – a major breakthrough.
- It’s easier to distinguish between people and objects (like your couch).
- Individual gestures can be tracked – facial gestures, or finger-by-finger tracking (as touted by other systems like Leap).
What’s missing so far: any word on how hackable the new system will be. Last time around, it took hackers to get access to camera images and tracking data, even as Microsoft itself lagged in providing an SDK for Windows. I’d like to see more openness this time, especially given how much of the hype around Kinect has been generated by hackers – and knowing that Microsoft would like more inventive independent game design (or even art) with the sensor on the Xbox platform.
MIT Technology Review is evidently waiting, too:
That article is largely speculative, as is mine. But I can tell you, even if you aren’t planning to use individual finger gestures and the like, anything that provides more precise tracking or reduces latency will help applications and art, generally.
And that makes this look very good indeed. Here’s a hands-on video from The Verge, for a quickie:
WIRED goes into more detail:
We’ll be watching – particularly on the hacker info. If you know anything about the development picture or can make sense of this modulated IR business, please do get in touch!