Technology still has the power to appear like magic. And one place we may desperately need magic: straightening out our horribly shaky handheld video shots. Software makers like Apple have already offered up some techniques for doing this – in the case of Apple’s Final Cut Studio, optical flow analysis attempts to track the image as it shakes around the screen and compensates by adjusting the position and orientation of the frame. But a research team at the University of Wisconsin, partnering with Adobe, will present a new approach at the legendary graphics-geeky SIGGRAPH conference in August. They go a step further, applying a mesh to each frame and warping it, guided by recovered 3D scene structure, to make the stabilization even more seamless.
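For a sense of what that frame-tracking compensation involves, here's a toy sketch: estimate each frame's global shift relative to the previous one, then shift it back. It uses phase correlation on synthetic frames and is purely illustrative – a stand-in for the general 2D idea, not Final Cut's actual algorithm.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the global (dy, dx) translation from prev to curr
    via phase correlation -- a simple global-motion estimator."""
    f = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    # Shifts past the halfway point wrap around to negative values.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

# Synthetic frames: a bright square that "shakes" 2 px down, 3 px right.
frame0 = np.zeros((64, 64))
frame0[20:30, 20:30] = 1.0
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))

dy, dx = estimate_shift(frame0, frame1)
stabilized = np.roll(frame1, shift=(-dy, -dx), axis=(0, 1))
```

Real stabilizers track many local motion vectors and handle rotation and scale too, but the estimate-then-compensate core is the same idea.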

Me writing about it is basically useless. Check out the mind-blowing results in the video. From the description:

In this paper, we describe a technique that transforms a video from a hand-held video camera so that it appears as if it were taken with a directed camera motion. Our method can adjust the video to appear as if it were taken from nearby viewpoints, allowing for 3D camera movements to be simulated. By aiming only for perceptual plausibility, rather than accurate reconstruction, we are able to develop algorithms that can effectively recreate dynamic scenes from a single source video. Our technique first recovers the original 3D camera motion and a sparse set of 3D, static scene points using an off-the-shelf structure-from-motion system. Then, a desired camera path is computed either automatically (e.g., by fitting a linear or quadratic path) or interactively. Finally, our technique performs a least-squares optimization that computes a spatially-varying warp from each input video frame into an output frame. The warp is computed to both follow the sparse displacements suggested by the recovered 3D structure, and avoid deforming the content in the video frame. Our experiments on stabilizing challenging videos of dynamic scenes demonstrate the effectiveness of our technique.
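To unpack one step from that description: "fitting a linear or quadratic path" is, at heart, ordinary least squares on the recovered camera trajectory. Here's a toy 1D version with synthetic jitter – the real system fits the full 3D camera motion recovered by structure-from-motion, so treat this as a sketch of the concept only.

```python
import numpy as np

# Synthetic stand-in for a recovered camera trajectory: a smooth
# quadratic motion plus hand-shake jitter, one sample per frame.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)            # normalized frame times
true_path = 2.0 * t**2 + 0.5 * t          # the motion you intended
shaky_path = true_path + 0.05 * rng.standard_normal(t.size)

# "Fit a quadratic path": a least-squares polynomial fit.
coeffs = np.polyfit(t, shaky_path, deg=2)
desired_path = np.polyval(coeffs, t)

# The per-frame correction the warp would then have to realize on screen.
correction = desired_path - shaky_path
```

In the paper, that desired path then drives a spatially-varying, content-preserving warp of each frame rather than a simple shift – which is what keeps the result looking natural.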

The research, at the University of Wisconsin-Madison:
Content-Preserving Warps for 3D Video Stabilization

You can view all the techie details there, as well as many more demo videos. This is promising stuff, and in recent years the gap between academic research and shipping commercial products has shrunk dramatically – especially with cheap computational power on home computers to play around with, and increasing pressure on software vendors to differentiate what they’re doing in a mature application space.

Side note: boy, do I want to go to SIGGRAPH this year.

Also along these lines: Spacetime Fusion, tests of Final Cut’s SmoothCam feature, more SmoothCam tests

For the purists among you, yes, it’s still worth considering the art of Steadicam shots – at least before technology obliterates it for us clueless masses. Previously: B&H Interviews Steadicam Inventor: Shooting is Like Dancing

  • Michael Una

    Wow, it's like Autotune for motion.

  • Ha ha ha ha, Autotune for motion.

    Yes, it's great, I can't wait to buy myself a camcorder.

    Is this project open source, or will it be sold to Adobe so they can make a fortune out of it?

  • massta

    Yes sell out to some company please!
    We want it now and we want it cheap!

  • Bill

    Very nice effect, and it seems relatively simple as well. I noticed that all their outputs are cropped – I suppose this is because the edges of the frame warp with the inner contents. It's interesting that this comes about at the same time consumers are gaining the resolution needed to crop convincingly.

  • This is absolutely remarkable.

    Being a very, very amateur, no-budget filmmaker, I've spent much time building or ordering lower-cost camera stabilizers and sending them back, as they are nearly useless.

    Current stabilization software has too many bizarre artifacts for me to want to use it on anything I shoot, and this seems largely free of them.

    The only frustrating thing is that the crop factor it introduces is quite significant, which renders it close to useless for footage shot with the framing and field of view in mind beforehand.

    If they develop some sort of natural looking edge reconstruction, something a little bit more than edge mirroring, I would love to use this on something I shot.

    Sadly, though, I am worried they will take this technology and sell it at a price point totally out of the reach of us teeny tiny little guys, despite showing its application on home video.

    Make this open source and free / cheap for itty bitty filmmakin' youngin's.

  • That is the best thing I've seen all week.

    @humblesound and @bill: I would suggest that the crop factor will depend on how shaky the footage is to begin with. Some of those examples are pretty radically cropped, but they're radically shaky too.

    If your source material was shot as smoothly as possible, with the expectation that there will also be some final cleanup with this tool, I think the results would be truly remarkable. Their examples seem to be geared towards real "amateur" footage. Someone with a modicum of camera control skill can already get much better results without correction, so the correction would make it even better.

  • Ooh, and upon a second watch there is some pretty major warping on the Parabolic path clip that starts around 3 minutes in.

    Keeping an eye on the left of the frame, the horizon and railing morph like something out of Fear and Loathing in Las Vegas.

  • Yeah, I caught that too, but it did take a second watching to see it properly.

    Like any post-production technique, if the source material isn't well shot to begin with, it can't be as effective. With good source though, this is exciting my brain in many ways.

  • Pieter

    The "autotune for motion" comment seems particularly apt; the output is better than the source, but starts to feel "off" just as soon as you put it to scrutiny. Also like autotune, there will undoubtedly be a reasonably-sized group of purists who refuse to have anything to do with it.

    For the home enthusiast, I can see this being a great way to make school projects and family moments more professional-looking and easier to watch. Anybody even semi-professional will almost always want to have a steadycam, though.

  • @Pieter: Quite a good analogy. However I don't think this could quite be abused in the same way as Autotune is. Autotune can start out with perfectly-sung material and make it sound Autotuned. With this kind of tech, if you start out with perfectly shot material, it shouldn't really do anything!

    Like any "auto" post production effect, in audio or video, it can do terrible things if used poorly. Auto-levels on well exposed footage shouldn't do much. It can be useful to tweak things for you, and it can look totally terrible if the source material isn't right.

    If someone said to me "I'm a purist, I don't use auto-levels", I'd think they were a bit of a twit.

    Of course, if they applied auto-levels to everything they ever did, whether it needed it or not, I'd think they were much more of a twit.

  • ian

    Maybe it'd be possible to get around the cropping issue by using a wide-angle lens? Just wide enough to get the extra space on the sides you'd need for the cropping.

  • Jim

    This technology is already commercially available from these companies right NOW:

    Boujou 3D

  • cat

    I've abused Twixtor to get artifacting, and I'm sure you could do the same here: deliberately shake the camera violently and correct it to get fucked-up backgrounds with a nice steady subject. All tools are there to be abused! Looks brilliant!


  • JR

    You talk about "perceptual plausibility" and using scene points to determine space already in the scene… but that only works if it's already in shot! OK if you are moving forward or back in space.

    BUT what about if the camera is moving along a vertical axis? i.e. panning down the side of a building? How does the software plot for objects outside the frame? Surely there is less time for the software to track if the camera is behaving in this way?


  • There's a number of 3D stabilizers already on the market (such as 2d3's SteadyMove), but I've never seen results like this – beautiful fluid movement, very consistent.
    I've dealt with horrible handheld shots and tried to make a (no-budget) video with them, but I ended up having to reintroduce some of the shaky motion. Why? Motion blur. You can stabilize video all you want, but you'll still have leftover motion blur in your shots, which ends up looking really weird – a bit like that shot in Fight Club where Brad Pitt talks to the camera… but less interesting.

    These guys don't talk about removing the motion blur in the shots, which, in theory, should be doable (you have loads of stuff to work with, you know the length of the smear, previous and following sharp frames, I just know these guys can cook this up :D)

    The video is very, very, VERY impressive though.

  • Jean-Marc Goguen

    First of all, I agree with the suggestions that the amount of shaking will determine the amount of cropping. If a tree on the right suddenly disappears for a couple of frames due to shaking, the program will probably crop it out instead of trying to make a calculated guess as to what that tree was supposed to look like.

    Finally, there has been some talk about pricing in these comments and I'd like to throw my two cents in. Initially, my instincts told me that this would be highly overpriced seeing that it is a specialty product. However, if it is overly expensive, it might only be accessible to industry professionals, the same people who can afford professional dollies and specialty gear.

    On the other hand, if they were to hit the premium consumer price point instead, sales volume would go up, possibly creating more profit on the bottom line.
