A research presentation from SIGGRAPH has been spreading quickly online, demonstrating just how seamlessly we might soon transform two-dimensional photography into three-dimensional objects.
Perhaps what makes the “3-Sweep” technique so impressive is that it doesn’t rest on some computational breakthrough or algorithmic trickery. It’s simply good, inventive design, and it lets human and machine work hand in hand – rather than making the algorithm do everything, humans assist by indicating where the objects are. Making the user interface tools intuitive for those humans also makes the results more accurate. (As researcher Jeff Kramer noted on a mailing list discussing this piece, “humans do what they’re good at (judging perspective) and the computer does what it is good at (generating 3D models).”)
That kind of smart, sensible design – combining human and machine strengths – is, I think, the best of the age in which we currently live.
Oh, the worst is probably YouTube comments, in which punters thought the whole thing was fake. (Hint: it’s not.)
But major kudos to this research team:
Tao Chen · Zhe Zhu · Ariel Shamir · Shi-Min Hu · Daniel Cohen-Or
And impressive as the beginning of the video is, keep watching to the end, where the technique is used to assemble an entire scene.