After years of frustration with computer vision on general-purpose computers, the upcoming second-generation Kinect sensor really does begin to look like a breakthrough. And that breakthrough happens inside the hardware design: a System on a Chip that yields high-performance data transfers simply not possible on the laptop in front of you.
The site SemiAccurate has taken it upon itself to examine those particulars. If you're fascinated by such things, it's worth going back and reading their whole series on the hardware, even before they get into how vision works on the platform.
But their latest article gets into what should interest anyone working in video or interaction. And this is where the new Kinect is a massive leap forward from the last one. It's all about data: faster data transfer yields lower latency, and more precise, accurate sensing yields more reliable performance.
The Kinect simply sees better, and faster – like Mr. Magoo putting on glasses and then turning into the Bionic Man, all at once. Author Charlie Demerjian covers, in depth, why the sensor is quite literally deeper.
Sentences like this should get your pulse racing, vision fans:
Because of the high modulation rate you can get multiple shots in different lighting per pixel sensor per frame, 30FPS is painfully slow compared to tens of MHz modulation rates.
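What the quote is describing is continuous-wave time-of-flight sensing: the sensor emits light modulated at tens of MHz and recovers depth from the phase shift of the reflection, so each pixel can be sampled many times within a single 30FPS frame. As a rough illustration of the underlying math (a minimal sketch – the 80 MHz modulation frequency here is illustrative, not a confirmed Kinect spec, and this is not Microsoft's actual pipeline):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Estimate distance from the phase shift of modulated light.

    A continuous-wave time-of-flight sensor measures the phase delay
    between emitted and reflected light; since the light makes a round
    trip, distance = c * phase / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# At an (assumed) 80 MHz modulation rate, a quarter-cycle phase
# shift corresponds to a bit under half a meter:
d = tof_depth(math.pi / 2, 80e6)  # ≈ 0.47 m
```

Note the trade-off this implies: a higher modulation frequency gives finer depth resolution but a shorter unambiguous range (the phase wraps every half wavelength), which is one reason such sensors take multiple measurements per pixel per frame.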
More (with links to the hardware series, and an upcoming story concluding how they’ve solved vision problems):
A long look at Microsoft’s XBox One Kinect sensor [SemiAccurate]
Those Kinect developer pre-release units are coming, too. Agreements just went out to developers this week, with hardware still promised to ship in November. Happy Thanksgiving slash early Christmas.
What’s most exciting about this is what we’re likely to see. After all, even with the relatively primitive first Kinect (at least in comparison to the low-latency performance and accuracy of the new one), we saw a steady stream of projects like – well, like touchscreen bathtubs.