Generative visuals like these could take massive leaps forward in the near future, as enabling technologies clear the way for new techniques. Photo: Emi Maeda on harp and electronics, Lia on live generative visuals, (CC) by watz.

The future of the VJ and live visualist isn't just about DJ metaphors and what happens in clubs. It's about a convergence of new interface technologies for dealing with visual material in a more fluid, flexible way. That will change not only visual performance, but how we express ourselves in digital visuals as well, something we've already seen happen with non-linear video editing and with vector and bitmap graphics software, but taken further.

Vade points us to a couple of glimpses of technologies being researched now that will help enable these changes.

Manipulating Shapes

Takeo Igarashi, Tomer Moscovich, John F. Hughes, “As-Rigid-As-Possible Shape Manipulation”.

The description from the creators:

We present an interactive system that lets a user move and deform a two-dimensional shape without manually establishing a skeleton or freeform deformation (FFD) domain beforehand. The shape is represented by a triangle mesh and the user moves several vertices of the mesh as constrained handles. The system then computes the positions of the remaining free vertices by minimizing the distortion of each triangle. While physically based simulation or iterative refinement can also be used for this purpose, they tend to be slow. We present a two-step closed-form algorithm that achieves real-time interaction. The first step finds an appropriate rotation for each triangle and the second step adjusts its scale. The key idea is to use quadratic error metrics so that each minimization problem becomes a system of linear equations. After solving the simultaneous equations at the beginning of interaction, we can quickly find the positions of free vertices during interactive manipulation. Our approach successfully conveys a sense of rigidity of the shape, which is difficult in space-warp approaches. With a multiple-point input device, even beginners can easily move, rotate, and deform shapes at will.
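The trick the abstract describes is worth unpacking: because the error metric is quadratic, minimizing it is just a linear solve, and the system matrix depends only on the mesh connectivity, not on where the handles currently are. So you can factor the matrix once when interaction starts, then get each new pose with a cheap solve as the handles move. Here's a deliberately simplified sketch of that idea in Python with NumPy. It uses a plain least-squares edge-preservation energy on a toy four-vertex mesh, not the paper's rotation-aware two-step algorithm, but the precompute-once, solve-fast structure is the same:

```python
import numpy as np

# Toy mesh: a unit square of 4 vertices, triangulated by one diagonal.
# Vertices 0 and 1 are pinned "handles"; 2 and 3 are free.
rest = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
handles, free = [0, 1], [2, 3]

# Quadratic energy: sum over edges of |(v_i - v_j) - (rest_i - rest_j)|^2.
# Setting its gradient to zero gives a linear system in the free vertices.
n = len(rest)
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

A = L[np.ix_(free, free)]
A_inv = np.linalg.inv(A)  # "solve at the beginning of interaction" -- done once

def deform(handle_positions):
    """Fast per-frame step: only the right-hand side changes as handles move."""
    b = np.zeros((len(free), 2))
    for k, i in enumerate(free):
        for a, c in edges:
            j = c if a == i else (a if c == i else None)
            if j is None:
                continue
            b[k] += rest[i] - rest[j]  # rest-shape edge vector to preserve
            if j in handles:
                b[k] += handle_positions[handles.index(j)]
    return A_inv @ b

# Drag handle 1 to the right; the free vertices follow.
moved = deform(np.array([[0.0, 0.0], [1.5, 0.0]]))
```

With the handles at their rest positions, the solve reproduces the rest shape exactly; stretch a handle and the free vertices redistribute to keep edge vectors as close to their originals as a least-squares fit allows. The real system replaces this single energy with the paper's two closed-form steps (per-triangle rotation fitting, then scale adjustment) to get the convincing rigidity.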

Pretty cool stuff. There are already a limited number of VJs who work live with Flash animations and such, directly manipulating vectors, though they’re a minority and the results tend to be fairly restricted. But imagine if you could mess with vector shapes as easily as you can video. “Illustration” could become a live performance medium, and not just something people do over long hours in Illustrator and print on paper.

No offense to Adobe, but I think part of what has stunted this evolution is the reliance on big, traditional tools. The established conventions for how you work with elements like curves (including the wildly counterintuitive Bézier curve) remain in place because artists are so universally reliant on a single tool from a single vendor. It's so complicated to learn, in fact, that you spend all your time learning what's there rather than building workflows around what you actually need to do. Start working with vectors in a tool like Processing, in which artists (not necessarily people who would call themselves "programmers") have to build their own tools, and all of those conventions are up for grabs.
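To make the "conventions are up for grabs" point concrete: a Catmull-Rom spline passes directly through the points you give it, with no off-curve Bézier handles to wrestle with, which is arguably a friendlier interface for live drawing. Processing itself exposes this as `curve()`; here's a language-agnostic sketch in Python (kept consistent with the other examples in this post) of the standard Catmull-Rom evaluation:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment between p1 and p2, t in [0, 1].
    Unlike a Bezier curve, the spline threads through the control
    points themselves -- no detached handles to reason about."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Points a performer might click live; the curve passes through the
# two middle points (the outer two shape the tangents).
pts = [(0, 0), (1, 2), (3, 3), (4, 0)]
curve = [catmull_rom(*pts, t / 10) for t in range(11)]
```

The segment starts exactly at the second point and ends exactly at the third, so an artist sketching on stage sees the line go where they pointed, immediately.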

But change is likely to hit video, as well, not just vectors.

New Video Scratching: “Direct Manipulation”

Dynamic Graphics Project

Better-known research houses like Microsoft's may be grabbing the headlines, but this Tokyo-based group is doing sophisticated research in all kinds of graphics manipulations. The key is integrating computer vision — the ability of the computer to "see" and analyze imagery and motion more in the way we'd expect — with other techniques. We take for granted video with a simple scrubbing timeline beneath it. But this metaphor is itself an invention — in fact, Joy Mountford, with whom I presented at South by Southwest in the spring, was on the team at Apple that refined a lot of these concepts.

The problem is, a timeline is pretty far abstracted from the way we see motion. Human perception can separate moving elements from a background, and sees the motion of objects, not just linear motion of a frame over time. The ingenious leap taken by the "direct manipulation" approach is to use computer vision techniques to allow us to "directly" move the objects within the frame, instead of just the whole frame via an independent metaphor. The impact on editing and viewing is already nice — but performance and VJing get even more interesting.
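The core mechanism can be sketched very compactly. Assume a vision tracker has already produced a per-frame trajectory for an object (the trajectory here is made up for illustration); then "scrubbing by dragging the object" reduces to finding the frame where the tracked object sat closest to the cursor:

```python
import math

# Hypothetical tracker output: the (x, y) centroid of one moving object
# in each frame of a 60-frame clip. Here the object slides rightward.
trajectory = [(10 + 4 * f, 50) for f in range(60)]

def frame_for_drag(point, trajectory):
    """Map a 'direct manipulation' drag to a frame index: jump to the
    frame where the tracked object was nearest the cursor."""
    return min(range(len(trajectory)),
               key=lambda f: math.dist(point, trajectory[f]))

# Dragging the object to x=130 scrubs the clip to the matching moment.
frame = frame_for_drag((130, 50), trajectory)
```

Instead of moving a playhead along an abstract strip of time, the performer grabs the thing on screen and time follows it, which is exactly why this feels so much closer to how we actually perceive motion.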

Great ideas can come from fantasy — for an imaginary version of this, see a couple of old posts from Create Digital Music in which we saw giant DJs empowered with God-like control over the universe:

Giant DJs Continue to Play God with Universe; Scratching Reality Itself
Spin: Short Movie Makes a Turntablist God

What do you fantasize about as far as the future of visualism? Seen other technological research — or done some of your own — that could bring us into that future? Let us know in comments.