“Natural interaction” is the phrase commonly applied to gestural interfaces. But a gesture is only “natural” once you’ve learned it. And as everyone from interaction designers to game makers has discovered, that can leave users confused about just what gesture they’re supposed to make. (Ironically, the much-maligned conventional game controller doesn’t suffer so much from this problem, as its buttons and joysticks and directional pads all constrain movement to a limited gestural vocabulary.)

A team of researchers has just shared a new approach to the problem. Coming from Microsoft Research and the University of Illinois’ Computer Science department, authors Rajinder Sodhi, Hrvoje Benko, and Andrew D. Wilson use Kinect not only to process gestures, but to teach them. Here’s their description:

LightGuide is a system that explores a new approach to gesture guidance where we project guidance hints directly on a user’s body. These projected hints guide the user in completing the desired motion with their body part which is particularly useful for performing movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector and we present four novel interaction techniques that are focused on guiding a user’s hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand animation videos on a computer display in both time and accuracy. Exceeding our expectations, participants performed movements with an average error of 21.6mm, nearly 85% more accurately than when guided by video.
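The core loop implied by that abstract (track a body part with a depth camera, then render a cue at its projected location pointing toward the desired position) can be sketched in a few lines. This is only an illustration of the idea, not the authors’ implementation: the projector intrinsics, the assumption that the projector shares the depth camera’s coordinate frame, and the function names are all invented for the example.

```python
import math

# Hypothetical projector intrinsics (assumed values, not from the paper):
FX = FY = 1000.0          # focal lengths in pixels
CX, CY = 640.0, 360.0     # principal point

def project_to_projector(x, y, z):
    """Pinhole projection of a camera-space point (metres) into projector
    pixels, assuming the projector shares the depth camera's frame."""
    return (FX * x / z + CX, FY * y / z + CY)

def feedforward_hint(hand, target):
    """Return where to draw the hint (on the hand itself) plus a unit 2D
    direction cue pointing toward the target's projection: the hint
    position is feedback, the arrow direction is feedforward."""
    hu, hv = project_to_projector(*hand)
    tu, tv = project_to_projector(*target)
    du, dv = tu - hu, tv - hv
    n = math.hypot(du, dv)
    if n > 1e-9:
        du, dv = du / n, dv / n
    return (hu, hv), (du, dv)

hand = (0.0, 0.0, 1.0)    # tracked hand, 1 m in front of the camera
target = (0.1, 0.0, 1.0)  # desired hand position, 10 cm to the right
pos, cue = feedforward_hint(hand, target)
print(pos, cue)  # hint drawn on the hand; arrow points right: (1.0, 0.0)
```

Because the hint is drawn on the user’s own body rather than on a screen, the user never has to translate between an on-screen avatar and their own limbs, which is plausibly where the accuracy gain over video guidance comes from.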

The complete research paper is published online:

Thanks to co-author Rajinder Sodhi for the tip!