2.5D – Polar Panoramic Video from Valentijn Kint on Vimeo.

Speaking of Adobe’s new Pixel Bender, VKNT writes about work he’s doing with polar views, as modified in Pano2VR, an application that processes panoramic images. It’s not a real-time process in this case (though the finished result is built in Processing), but I don’t see why you couldn’t do the same filter math live in something like Pixel Bender or a conventional OpenGL filter; a minimal shader sketch follows the quote below. Could be fun stuff. VKNT writes:

This is part of my research on movement in space and new ways to represent it. It is essentially a series of 360° panoramas following a path through a space. These spherical images are converted to an angular projection, which introduces a characteristic distortion: the further you get from the center, the stronger it becomes. The center of the image is determined by three parameters: pan, tilt, and roll. I animated these parameters, resulting in a sense of movement and deformation of the space.
More info and downloadable .mov file at:
vknt.be/2008/09/25/25d-happy-new-ears-rez-08/
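
To make the “live” idea concrete, here is a minimal GLSL fragment shader sketch of the same remap: each output pixel is treated as a point on an angular projection, the corresponding view ray is rotated by pan/tilt/roll, and an equirectangular panorama is sampled. This is a sketch under assumptions, not VKNT’s or Pano2VR’s actual code; all names (pano, pan, tilt, roll, uv) are illustrative.

```glsl
// Sketch: angular (polar) projection of an equirectangular panorama with
// animated pan/tilt/roll, in the old GLSL 1.10 style. Names are assumptions.
uniform sampler2D pano;   // 360x180-degree equirectangular source, 2:1 aspect
uniform float pan;        // rotation around the vertical axis, radians
uniform float tilt;       // rotation around the horizontal axis, radians
uniform float roll;       // rotation around the view axis, radians
varying vec2 uv;          // output coordinates, normalized to [0,1]

const float PI = 3.14159265358979;

// Column-major rotation matrices around the three axes.
mat3 rotY(float a) { float c = cos(a), s = sin(a);
                     return mat3(c, 0.0, -s,  0.0, 1.0, 0.0,  s, 0.0, c); }
mat3 rotX(float a) { float c = cos(a), s = sin(a);
                     return mat3(1.0, 0.0, 0.0,  0.0, c, s,  0.0, -s, c); }
mat3 rotZ(float a) { float c = cos(a), s = sin(a);
                     return mat3(c, s, 0.0,  -s, c, 0.0,  0.0, 0.0, 1.0); }

void main() {
    // Each output pixel is a point on a disc; distance from the center maps
    // linearly to the angle off the view axis -- hence "the further from the
    // center, the stronger the distortion."
    vec2 p = uv * 2.0 - 1.0;
    float r = length(p);
    if (r > 1.0) { gl_FragColor = vec4(0.0); return; }

    float theta = r * PI;          // 0 at the center, 180 degrees at the rim
    float phi   = atan(p.y, p.x);

    // View ray for this pixel, rotated by roll, then tilt, then pan.
    vec3 dir = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
    dir = rotY(pan) * rotX(tilt) * rotZ(roll) * dir;

    // Back to latitude/longitude for the equirectangular lookup.
    float lon = atan(dir.x, dir.z);              // -PI .. PI
    float lat = asin(clamp(dir.y, -1.0, 1.0));   // -PI/2 .. PI/2
    gl_FragColor = texture2D(pano, vec2(lon / (2.0 * PI) + 0.5,
                                        lat / PI + 0.5));
}
```

Animate the pan, tilt, and roll uniforms per frame and you get the moving, deforming space VKNT describes; the same math should translate almost line for line to a Pixel Bender kernel.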

Ideas, dear readers?

  • I don't understand how you expect a GL filter (pixel shader or not) to produce data that isn't there. This is a question of field of view. You would need to shoot video with an incredibly wide angle to get those shots (as he has documented). Once you do that, sure, you can run polar-coordinate GLSL shaders (or whatever) on it just fine, but if you don't have the full field of view to begin with, it won't really have the same effect (not to say it won't be interesting…)

    I've included some basic polar-to-cartesian and cartesian-to-polar GLSL shaders in my v001 shaders, so I don't see why this can't be done in real time. (A generic sketch of that remap appears at the end of the thread.)

    You could of course do something with render-to-texture, though: in a purely OpenGL realm, render with a huge field of view, apply a shader, and get a similar effect to the above with a real-time 3D scene. Could be fun.

  • Vade, that was his (and my) point … the ability to capture a wide field of view and do the processing in real time, so that it could be an input, rather than separating photography and processing.

    But yes, now that you mention it, an all-3D-realm render-to-texture could be tasty and delicious.

  • Vade, the perfect way is indeed a 3D model, but live video is also possible with the right equipment: http://www.ptgrey.com/products/ladybug2/index.asp
    The Ladybug is very expensive, but a lot of people have built their own.
    Sample panorama video: http://vimeo.com/1478602

    Peter, thanks for the link! A small note: in this case I only used Processing to generate an .xml Pano2VR project file. The actual image conversion was done in Pano2VR. But real-time conversion is on the agenda 🙂

    I'm a bit late with my follow-up on possibilities, but I promise I'll write one during the weekend…

  • I've added another test to my post: same technique, but using an equirectangular projection, as in the Cut Chemist music video. (A sketch of that variant also appears at the end of the thread.)

  • Quite off-topic, but I thought you guys might appreciate these amazing photos if you hadn't seen them… http://2photo.ru/2007/10/18/print:page,1,samye_si
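
A follow-up sketch on vade's polar/cartesian point above: a generic cartesian-to-polar remap as a GLSL fragment shader. This shows the general idea only, not his actual v001 code; tex and uv are assumed names. In the render-to-texture approach he describes, a pass like this would run over the wide-field-of-view render.

```glsl
// Sketch: generic cartesian -> polar remap of a flat texture, in the spirit
// of a "polar coordinates" image filter. Names are assumptions.
uniform sampler2D tex;
varying vec2 uv;

const float PI = 3.14159265358979;

void main() {
    // Treat the output as a disc centered on the image.
    vec2 p = uv * 2.0 - 1.0;
    float r = length(p);        // 0 at the center, 1 at the edge
    float a = atan(p.y, p.x);   // -PI .. PI around the center

    // Angle becomes the source x and radius the source y, so the bottom row
    // of the input wraps around the center of the disc.
    gl_FragColor = texture2D(tex, vec2(a / (2.0 * PI) + 0.5, r));
}
```

The polar-to-cartesian inverse just runs the mapping the other way: reconstruct a = (uv.x - 0.5) * 2.0 * PI and r = uv.y, then sample the source at vec2(cos(a), sin(a)) * r * 0.5 + 0.5.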
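
And the equirectangular variant from VKNT's second test: if the output stays equirectangular, the whole effect reduces to a rotation of the sphere. A minimal sketch under the same assumptions as the angular example earlier in the post:

```glsl
// Sketch: equirectangular in, equirectangular out; only the viewing rotation
// changes. Names are assumptions.
uniform sampler2D pano;   // 2:1 equirectangular source
uniform float pan, tilt, roll;
varying vec2 uv;

const float PI = 3.14159265358979;

// Same column-major rotation helpers as the angular sketch.
mat3 rotY(float a) { float c = cos(a), s = sin(a);
                     return mat3(c, 0.0, -s,  0.0, 1.0, 0.0,  s, 0.0, c); }
mat3 rotX(float a) { float c = cos(a), s = sin(a);
                     return mat3(1.0, 0.0, 0.0,  0.0, c, s,  0.0, -s, c); }
mat3 rotZ(float a) { float c = cos(a), s = sin(a);
                     return mat3(c, s, 0.0,  -s, c, 0.0,  0.0, 0.0, 1.0); }

void main() {
    // Output pixel -> latitude/longitude -> direction on the unit sphere.
    float lon = (uv.x - 0.5) * 2.0 * PI;
    float lat = (uv.y - 0.5) * PI;
    vec3 dir = vec3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon));

    // Same pan/tilt/roll rotation chain as the angular version.
    dir = rotY(pan) * rotX(tilt) * rotZ(roll) * dir;

    // Direction -> latitude/longitude -> source texel.
    float lon2 = atan(dir.x, dir.z);
    float lat2 = asin(clamp(dir.y, -1.0, 1.0));
    gl_FragColor = texture2D(pano, vec2(lon2 / (2.0 * PI) + 0.5,
                                        lat2 / PI + 0.5));
}
```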