Compare the complex model a computer uses to control sound and musical pattern in real time with the way that model is visualized. You see knobs, you see faders that resemble mixers, you see grids, you see – bizarrely – representations of old piano rolls. The accumulated ephemera of old hardware, while useful, can be quickly overwhelmed by a complex musical creation, or can visually fail to show the musical ideas that form a larger piece. You can employ notation, derived originally from instructions for plainsong chant and scrawled for individual musicians – and quickly discover how inadequate it is for the language of sound shaping in the computer.

Or, you can enter a wild, three-dimensional world of exploded geometries, navigated with hand gestures.

Welcome to the sci-fi-made-real universe of Portland-based Christian Bannister’s subcycle. Combining sophisticated, beautiful visualizations, elegant mode shifts that move from timbre to musical pattern, and two- and three-dimensional interactions, it’s a complete visualization and interface for live re-composition. A hand gesture can step from one musical section to another, or copy a pattern. Some familiar idioms are here: the grid of notes, a la piano roll, and the light-up array of buttons of the monome. But other ideas are exploded into spatial geometry, so that you can fly through a sound or make a sweeping rectangle or circle represent a filter.
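To make that last idea concrete: a drawn shape can be reduced to a few numbers and remapped onto sound-shaping parameters. The sketch below is purely illustrative (the names, ranges, and mapping are invented, not taken from subcycle), but it shows how a rectangle’s width, height, and position might drive a filter.

```python
# Hypothetical sketch: turning a swept rectangle into filter settings.
# None of these names or ranges come from subcycle; they only stand in for
# the idea that a shape's geometry can become sound-shaping parameters.

from dataclasses import dataclass


@dataclass
class Rect:
    x: float       # normalized screen coordinates, 0..1
    y: float
    width: float   # 0..1
    height: float  # 0..1


def rect_to_filter(rect: Rect) -> dict:
    """Map a gesture rectangle to plausible filter parameters.

    Width spans the cutoff range on an exponential scale (wider = brighter),
    height sets resonance, and vertical position picks the filter type.
    """
    w = min(max(rect.width, 0.0), 1.0)
    h = min(max(rect.height, 0.0), 1.0)
    cutoff_hz = 20.0 * (20_000.0 / 20.0) ** w      # 20 Hz .. 20 kHz
    resonance = 0.1 + 0.8 * h                      # 0.1 .. 0.9
    filter_type = "lowpass" if rect.y > 0.5 else "highpass"
    return {"cutoff_hz": round(cutoff_hz, 1),
            "resonance": round(resonance, 2),
            "type": filter_type}


if __name__ == "__main__":
    # A wide, shallow sweep across the upper half of the screen:
    print(rect_to_filter(Rect(x=0.1, y=0.7, width=0.8, height=0.2)))
    # -> {'cutoff_hz': 5023.8, 'resonance': 0.26, 'type': 'lowpass'}
```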

Ingredients, coupling free and open source software with familiar, musician-friendly tools:
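However those ingredients are wired together under the hood, the usual glue between a custom visual frontend and musician-friendly sound tools is OSC (Open Sound Control) sent over UDP. The sketch below is only an assumption about that general pattern, not subcycle’s actual code; it uses the python-osc package, and the port number and address names are invented.

```python
# Hedged sketch: a visual frontend sending gesture-derived parameters to a
# sound engine (a DAW, a Max/MSP patch, etc.) as OSC messages over UDP.
# Requires the third-party python-osc package; the port number and OSC
# addresses below are invented for illustration.

from pythonosc.udp_client import SimpleUDPClient


def main() -> None:
    # Assume the sound engine listens for OSC on localhost, port 9000.
    client = SimpleUDPClient("127.0.0.1", 9000)

    # Forward the filter settings a gesture produced (see the earlier sketch).
    client.send_message("/filter/cutoff", 5023.8)    # Hz
    client.send_message("/filter/resonance", 0.26)
    client.send_message("/filter/type", "lowpass")


if __name__ == "__main__":
    main()
```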

Another terrific video, which gets into generating a pattern:

Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead.

At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen.

Previously:
Subcycle: Multitouch Sound Crunching with Gestures, 3D Waveforms

And lots more info on the blog for the project:
http://www.subcycle.org/