Stop-motion animation, made stunningly simple … through science!

Our video-based interface enables easy creation of stop-motion animations through direct manipulation by hand; the hands are semi-automatically removed through a novel two-phase, keyframe-based capturing and processing workflow. Our tool is complementary to, and can be used together with, traditional stop-motion production (e.g., for the rotation of individual faces here).

Translation: you can hold stuff with your hands, move it around normally, and then make it look as if you spent countless hours doing actual stop motion animation. Insane.

The team in Hong Kong who worked this magic: Xiaoguang Han, Hongbo Fu, Hanlin Zheng, Ligang Liu, and Jue Wang.

A Video-based Interface for Hand-Driven Stop Motion Animation Production [Paper/project page, City University of Hong Kong]

To be published in this year’s volume of IEEE Computer Graphics and Applications.

Via Tom Phillipson; thanks, Tom!

  • Paul M.

    When can I tell my child the software will be available?!?!

  • John

    I think you underestimated the amount of work this still requires in your “Translation”. It doesn’t magically key out your hands after taking a series of stills. You then have to go through the animation sequence/action again, matching every single movement of the same object with your hands/fingers resting in a different place on that object, while double-checking its position against a frame capture from the original animation. It IS pretty cool and very useful for some situations, but it is also more work than your comments suggest (or maybe just more work than I was hoping for when I gave the story a first once-over?).

  • Brett Jones

    There was a “Kinect” version of this idea previously (cited in this work). The contribution of Xiaoguang et al. is a 2D video-based approach that uses the original video of the physical objects (as opposed to a virtual representation).


  • massta

    John is right: you need to animate twice using this method. For some situations this is a really cool option; for others, not so much. I like this idea: a wired model that captures all the keyframes digitally based on joint movement (link: )
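The two-pass idea John describes can be sketched in miniature. The actual paper's pipeline is far more sophisticated, but the core compositing step amounts to: given two aligned captures of the same frame, with the hand resting in a different place in each, detect the hand pixels in one pass and fill them from the other. Everything below is a hypothetical illustration, not the authors' method; `naive_skin` is a crude stand-in for a real hand/skin detector, and NumPy is an assumption of convenience.

```python
import numpy as np

def composite_two_passes(pass_a, pass_b, is_hand):
    """Merge two aligned captures of the identical scene state.

    pass_a, pass_b: HxWx3 uint8 images, hand in a different spot in each.
    is_hand: function mapping an image to an HxW boolean mask of hand pixels.
    Returns pass_a with its hand pixels replaced by the same pixels from pass_b.
    """
    mask_a = is_hand(pass_a)          # pixels occluded by the hand in pass A
    out = pass_a.copy()
    out[mask_a] = pass_b[mask_a]      # fill them from pass B, where the hand is elsewhere
    return out

def naive_skin(img):
    """Toy 'hand' detector: strongly red-dominant pixels. A real system
    would use learned appearance models, not a fixed color threshold."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 150) & (r > g + 40) & (r > b + 40)
```

The obvious failure mode, and the reason the real workflow needs careful planning, is that the hand must not cover the same pixels in both passes; where the masks overlap, there is simply no clean pixel to borrow.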