“Augmented dancing” is a phrase we’ve gradually been slipping in, describing projection mapping directly onto dancers. It’s always been easy enough to do, but now, if you want to simultaneously mask your projection so it doesn’t also spill behind the dancer – or even go nuts and generate your visuals from intelligent tracking – it’s easier than ever before. Thank/blame Kinect, and accessible tools for using it. MadMapper even includes an experimental, very basic tool for creating that mask, though for more sophisticated control, you’ll want to look to open-source environments like openFrameworks and Processing.
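The core of that masking trick is simple, whichever environment you use: threshold the Kinect’s depth map so only pixels within the dancer’s distance band stay opaque, and use the result as an alpha mask on your visuals. Here’s a minimal, hedged sketch of that idea in plain Java – the depth values and thresholds are hypothetical, and in practice you’d pull real depth frames from a Kinect library (SimpleOpenNI in Processing, or ofxKinect in openFrameworks):

```java
// Sketch of depth-based masking: keep only pixels inside the dancer's
// depth band; everything else (the wall behind, missing readings)
// becomes transparent. Values here are illustrative, not from hardware.
public class DepthMask {
    // depthMm: depth in millimeters per pixel; 0 means "no reading".
    // Returns an alpha mask: 255 = opaque (dancer), 0 = transparent.
    static int[] buildMask(int[] depthMm, int nearMm, int farMm) {
        int[] mask = new int[depthMm.length];
        for (int i = 0; i < depthMm.length; i++) {
            int d = depthMm[i];
            mask[i] = (d >= nearMm && d <= farMm) ? 255 : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        // Toy 1x4 "depth image": dancer at ~1.5 m, wall at ~3 m,
        // one dropped reading.
        int[] depth = {1500, 1600, 3000, 0};
        int[] mask = buildMask(depth, 1000, 2000);
        System.out.println(java.util.Arrays.toString(mask));
        // prints [255, 255, 0, 0]
    }
}
```

Per-frame, you’d multiply this mask against the projector output; everything past the dancer goes black, so nothing spills onto the wall behind.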

Before getting into those technical details, let’s look at another example of what’s possible. Daniel Schwarz, whose work I’d planned to feature separately, has an interesting proof of concept in his piece AXIOM.3. For this work, he uses another great development tool: vvvv. As he describes the work:

AXIOM.3 is an interactive dance performance augmented with projection mapping. The whole performance is controlled with a self-built street organ by the audience.
The rotational speed of the street organ affects song tempo, dancing and visuals. The box on the left side visualizes the relative speed.
The projection mapping on the dancer is generated in realtime with a Kinect camera.

Everything in realtime.
All done in vvvv with three computers, three projectors, a Kinect and a PS3 camera.

Music [top]: Orquestra Popular De Paio Pires – Kyoto Melody

Music [second from top]: Timatim Fitfit – No How

Seen other dance work? We may shortly need to do a round-up, so send them in.

More on vvvv, which I simply pronounce “vvvvvvivvvvvvvvvvvvvv….. vvvvv” (I like to run it on my Eeeeeeeeeeeeeeeeeee PC):

  • this is very neat. i like the idea of mapping moving surfaces. Though i wonder: i've seen a few dancers being projected onto while dancing in front of black backgrounds. couldn't you get away with simply shining a 'non-mapped' projector on a dancer when using a very low-reflectance black background? especially if you're filming it, since you could stop down the exposure. i suppose you could still see some faint images, but it would be a low-tech way of doing it. That said, i'm totally into the whole live mapping thing and want to pursue it further myself.

    also, i tend to call VVVV "4V" in an attempt to not sound silly. not sure if it works though : ). i asked a german speaking friend once how he would say it and he said "fow fow fow fow", so, there's another alternative!

  • Since you asked, Peter, here's the link to my recent project involving dance.

    We projected on a wall above the live dancers, due to space constraints. Still, the audience seemed to respond, and maybe part of the charm was that the dancers were not always aware of how the 3D grid was responding to their movement, or if I was spinning the grid around them (or not).

    (and p.s., my book is out! )

  • Hi there.

    There are some relevant resources on how to do this here: http://www.kimchiandchips.com/blog/?p=725

    I made a little thing for a dance presentation; I used a Kinect, but ended up using only the IR camera feed and some IR lights.
    It ended up like this:

    http://vimeo.com/25224436 http://vimeo.com/30458715

    I hope you like them… and I hope I get to use the depth/point cloud feed to create more complex stuff soon 🙂
    This realtime projection is so amazing, I want to try it!