In Lit Tree, the design team Kimchi and Chips create dances of light across a potted tree, augmenting the plant by transforming its leaves into voxels. 3D volumetric projection creates clouds of light. In time, it could even impact growth – a kind of bonsai technique with light.

Given the work’s relevance both artistically and technically, I wanted to learn more about how the project was conceived and constructed. Kimchi and Chips artist Elliot Woods, who collaborates with interaction designer and visual artist Mimi Son in Seoul, responded with even more than I expected.

Behind the work itself is an evolving framework of free tools that could transform the way artists accomplish 3D projection and natural interaction.

First, the basic project description:

A small potted tree has a perpetual conversation with people. Through the use of video projection, the tree is augmented in a non-invasive way, enabling the presentation of volumetric light patterns using its own leaves as voxels (3D pixels).

The tree invites viewers with a choreographed cloud of light that responds to visitors’ motion. As visitors approach, they can explore the immediate and cryptic nature of this reaction. The tree can form gestures in this way, and can in turn detect the gestures of its visitors. By applying a superficial layer of immediate interaction to the tree, can people better appreciate the long-term, invisible interaction that they share with it?

And just in case you doubt how much work this is, he shares a look at the code base. (Xcode 4 does make code look aesthetically pleasing, I must admit.)

Working in OpenFrameworks, the team built their own set of tools to realize the project, from computer vision to mapping and control to producing the final projection. To give an idea of the mechanism behind the patterns of light you see, Elliot shares with CDM a video of the voxel cloud generated by scattering a single projector’s pixels across the tree.

I’ll turn it over to Elliot to explain the full toolset, and how it was applied to this visual and interactive scenario from a design perspective. There’s some exciting work here: Kimchi and Chips have as their goal nothing less than building a fully open, multi-platform mapping software library.

The Seed for a Free, Multi-platform Mapping Framework

This Kinect-based demo shows the kind of power Kimchi and Chips are imagining — gestural control connected to sophisticated live mapping.

For maximum flexibility for artists, it’s not enough just to produce sophisticated 3D projections – you want to be able to control them. Here, another of Kimchi & Chips’ tools does just that, calibrating 3D mappings at the touch of a finger on an iPad.

MapTools-SL, MapTools

The software system ‘MapTools-SL’ (previously PCEncode) is a structured light scanning library/application built on openFrameworks. MapTools-SL is designed to become compatible with MapTools, which we hope to release this year as an open source / open standard / cross platform mapping software library.

The idea is that projects people build with MapTools can leverage an ecosystem of MapTools technologies we’ve already developed, e.g.:

  • Padé 3D calibration (see video above, bottom)
  • Dedicated iPad mapping control application (see video above, bottom)
  • Kinect mapping (see video above, top)
  • Auto 3D edge blending

More details and videos about MapTools will come out over the coming months. We’re lagging a bit at the moment and really need to find commercial work to help pay for the development. Ed.: Cough. Hear that? Someone want to scare up some work and make this post go viral so we can, uh, get to play with these tools? -PK

openFrameworks
  • The GUI is Kimchi and Chips’ ofxCVgui, designed for non-runtime (offline) computer vision tasks.
  • For numerical analysis we created ofxPolyFit.
  • For PS3Eye capture we created ofxCLeye for Windows (though we later moved to the Logitech C910, as discussed on our blog).

OpenFrameworks, Meet vvvv?

I asked specifically about the shared memory trick, which, in a separate demo, allowed visuals to be piped between OpenFrameworks and the Windows graphical media environment vvvv. We’ve been following Anton Marini and Tom Butterworth’s terrific Syphon for the Mac, which has the ability to freely pipe textures between various 3D and video apps. Every time we mention it, jealous Windows and Linux users ask if something similar is possible on their platform of choice. My standard answer has been no, so I inquired about how this works (as you can see in the video below).

ofxSharedMemory
This doesn’t really compare with Syphon, as you can’t move assets on the graphics card between OpenGL (openFrameworks) and VVVV (DirectX).
I haven’t used Syphon yet as I don’t use other tools on Mac OS X like Modul8 or Quartz, but am really interested in trying it out.
I created ofxSharedMemory for our project Link, but later reprogrammed all the recording / playback of videos directly as VVVV plugins; i.e. didn’t use ofxSharedMemory any more.

It’s definitely a workaround, and it isn’t advisable to depend on it, as it can hammer your performance at high resolutions – but it has also saved my life a couple of times.
It was also a gesture to try to bring the communities closer together, as there aren’t many cases of crossover between the two, and I really think each could benefit from the other (openFrameworks has incredible low-level developers and cutting-edge hacks; VVVV has strong experience with large-scale media installations and already has elegant solutions to many problems in visual / physical computing).
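Ed.: As a rough illustration of the technique – this is a generic Win32 sketch under my own naming, not ofxSharedMemory’s actual API – one process can publish each frame into a named shared-memory segment that the other process maps and reads:

```cpp
// Hypothetical sketch of CPU-side frame sharing between two processes
// (e.g. an openFrameworks app and a VVVV plugin) via Win32 shared memory.
// Every frame is copied through system RAM, which is why this approach
// hammers performance at high resolutions compared to a GPU-side
// solution like Syphon.
#include <windows.h>
#include <cstring>

static const char* kSegmentName = "Local\\of_vvvv_frame";      // hypothetical name
static const int   kWidth = 1024, kHeight = 768;
static const int   kBytes = kWidth * kHeight * 4;               // RGBA, 8 bits per channel

void publishFrame(const unsigned char* rgbaPixels) {
    // Create (or open, if it already exists) a named mapping backed by the page file.
    HANDLE mapping = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0, kBytes, kSegmentName);
    if (!mapping) return;

    void* view = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, kBytes);
    if (view) {
        std::memcpy(view, rgbaPixels, kBytes);                   // copy the whole frame
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
}
```

In practice you would keep the mapping handle open for the life of the app and add some synchronisation (a named mutex or a frame counter) rather than re-creating it every frame; the reader on the other side opens the same name with OpenFileMapping and copies the pixels back out. -PK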

The Projection: Making Light 3D, with Voxels!

Structured Light
Structured light involves sending an encoded message out on each pixel of the projector. This could be, for example, Morse code (we use interleaved Gray code binary; [Brooklyn-based media artist, teacher, and OpenFrameworks ninja] Kyle McDonald used a 3-phase sine wave method). If we can see a given pixel (e.g. with a light sensor or a camera), then we can read back the message sent from the projector.
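Ed.: To make that concrete, here’s a minimal sketch in plain C++ – my own illustration, not Kimchi & Chips’ code – of how an interleaved Gray-code sequence for one projector axis could be generated. Each pattern is followed by its inverse so the camera can threshold reliably; the camera later reassembles the bits it sees at each pixel and converts the Gray code back into a column index.

```cpp
// Hypothetical sketch: interleaved Gray-code frames encoding the projector column.
// Frame 2k shows bit k of the Gray-coded column index; frame 2k+1 shows its inverse.
#include <cstdint>
#include <vector>

std::vector<std::vector<uint8_t>> makeGrayCodeFrames(int width, int height) {
    int bits = 0;
    while ((1 << bits) < width) ++bits;                 // bits needed to cover every column

    std::vector<std::vector<uint8_t>> frames;
    for (int b = bits - 1; b >= 0; --b) {               // coarse bits first, then finer
        std::vector<uint8_t> pattern(width * height), inverse(width * height);
        for (int x = 0; x < width; ++x) {
            uint32_t gray = x ^ (x >> 1);               // binary column index -> Gray code
            uint8_t on = ((gray >> b) & 1) ? 255 : 0;
            for (int y = 0; y < height; ++y) {
                pattern[y * width + x] = on;
                inverse[y * width + x] = 255 - on;
            }
        }
        frames.push_back(pattern);                      // projected pattern
        frames.push_back(inverse);                      // interleaved inverse
    }
    return frames;   // decoding: read back the bits, then convert Gray code -> binary
}
```
-PK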

So if we have 2 cameras, each reading back the message from a projector pixel, then we know:
A) which projector pixel we’re looking at (this is encoded in the message)
B) where that pixel is in camera A
C) where that pixel is in camera B

Using B and C, we can calculate a real-world 3D position for that pixel identified by A.
We can re-imagine the pixel as a voxel, since it now has a known 3D position, and can be controlled by sending a signal to the projector’s pixel.

We do this for all the pixels in the projector simultaneously. We use the pixels which land on the tree and are correctly recognised.
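Ed.: For the curious, here’s a rough sketch of that triangulation step in plain C++ – again my own illustration, not the MapTools-SL code. Once the decoded message identifies the same projector pixel in both calibrated cameras, each camera observation defines a ray in space, and the voxel position can be estimated as the midpoint of the closest points between the two rays.

```cpp
// Hypothetical sketch: recover a 3D voxel position from two camera rays
// that both observe the same projector pixel (identified by its decoded message).
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3   add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3   mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// o1,d1: origin and direction of the ray through the pixel as seen by camera A
// o2,d2: the same pixel's ray as seen by camera B
Vec3 triangulate(Vec3 o1, Vec3 d1, Vec3 o2, Vec3 d2) {
    Vec3 r = sub(o1, o2);
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, r),  e = dot(d2, r);
    double denom = a * c - b * b;                       // near zero means the rays are parallel
    double t1 = (b * e - c * d) / denom;                // parameter along ray A
    double t2 = (a * e - b * d) / denom;                // parameter along ray B
    Vec3 p1 = add(o1, mul(d1, t1));                     // closest point on ray A
    Vec3 p2 = add(o2, mul(d2, t2));                     // closest point on ray B
    return mul(add(p1, p2), 0.5);                       // voxel position = midpoint
}
```

In the real pipeline the rays would come from the cameras’ calibration (intrinsics and extrinsics), and a reprojection-error check would reject badly decoded pixels. -PK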

Voxels
I created Litescape a few years back with some help from friends. This was based on Albert Hwang’s Wiremap (now called Lumarca).
These were experiments into projected voxel systems.

The lit tree at FutureEverything had about 350,000 working voxels from 2 projectors, but this could have been about 800,000 with an extra bit of code that pieces multiple scans together. This produces approximately the same resolution as a 100x100x100 display from 2 XGA projectors and a tree.
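[Ed.: For scale – two XGA projectors give 2 × 1024 × 768 ≈ 1.57 million pixels; if roughly half of those land on the leaves and decode cleanly, that’s the ~800,000 voxels quoted above, and 800,000 ≈ 93³, hence the comparison to a 100x100x100 display. -PK]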

With voxels, we can build up 3D objects with real world scale and position. I generally refer to a technology capable of doing that as a ‘volumetric display’, though that term is now being actively used for Pepper’s Ghost systems. [Ed.: For the uninitiated, that refers to a classic theatrical illusion – the voxels really represent something quite different, if in the same illusory, multi-dimensional line of thinking! -PK]

If we can ‘create’ digital 3D graphics in the real world, then we don’t need to rely on any optical tricks (stereo, head-tracking, multi-view, etc.) to convince the viewer that they’re real. They simply are real, and have all the capabilities and restrictions of being real. The far-off goal for a voxel system is the free-space ‘hologram,’ which I believe is about 20-100 years away.

You can touch a pixel on a 2D screen, and you can touch a voxel on a 3D ‘screen’, but you can’t touch a pixel on a stereo screen or a multi-view system. That kind of tangibility is really attractive to me. It’s much more comfortable for the viewer, as it’s spatially easy to understand. They know that if they walk around it, rotate their head, cover one eye, or reach their hand into it, it’ll still make sense. This is less distracting, but it also feels like the 3D world of the computer really has become part of our day-to-day world. Viewers felt the effect was simple and beautiful, and didn’t generally feel the need to ‘pull apart the image’ or alter their own vision to see the voxel cloud.

Voxels intrinsically create a consistent scene from all viewing angles.

Interaction

For the purposes of this particular installation, it’s perhaps the most important component, and we’ve lost it a bit in our (enjoyable) journey into the world of projection and mapping tools. But here are Elliot’s final thoughts on the way the interaction works in the installation, and how it played out in practice:

Our intention was for visitors to feel out a natural interaction with the tree, so we chose scanning rather than tracking. That way, nuances of movement, such as moving a finger or flexing the hand, would be reflected in the shape of light in the tree, allowing visitors to forget their physical hand and instead concentrate on the movement of the ‘light hand’ traversing the branches of the tree like a sort of phantom limb. Because of this, they spent many minutes reaching into the tree and exploring their influence over it.

What’s Next

I’m very eager to trace the evolution of Kimchi and Chips’ toolset, as it could provide valuable ammunition to artists worldwide trying to push the medium of projection. (That is, they’re already doing an enormous service to the very evangelistic mission of this site.) To read up on more:

Archive for the ‘Structured light’ Category

And be sure to see CDM’s check-in back in January on this project:
Kinect-Augmented Reality, as Projection Mapping Meets Depth Sensing (Hint: It’s Awesome)

Thanks so much to Elliot for sharing, and for working to further these kinds of free tools and the art produced with them.

Do us a favor – if you, too, think it’s awesome, help us spread the word, and be sure to chime in in the comments.