Compared with how far live music and visual performance have come in other media, body mapping and dance/visual fusions are still explored only in fits and starts. So it’s encouraging to see this latest experiment from dancer Christian Mio Loclair. Working with Microsoft’s Kinect, he dances against slowly-undulating tendrils of visuals that form a counterpoint to headstands and hip-hop technique. Far from running up against latency, the visuals here answer his moves with a slow sigh, creating a kind of living architectural space behind him.
- ofxOpenNI and ofxCv, addons for openFrameworks, analyze imagery from the Kinect camera.
- MSAFluid for openFrameworks, by our friend Memo Akten, again provides the visuals (some of you probably already spotted it).
- ofxUI adds the UI.
- vvvv (“V4”) is used for calibration; see Christian’s blog post for more tips on getting the most out of calibration, and don’t miss the Elliot Woods tutorial that inspired him.
Christian muses to CDM, “I am very convinced that especially the Kinect and the upcoming Kinect 2 will change the way dance will be performed. I hope I can contribute to this development … just some streetdancers, hackers and a Kinect. I wanna see how far we can get by open source Code, own code and Open Dance.”