[vimeo width="640" height="300"]http://vimeo.com/20111796[/vimeo]

Last week, guest writer Momo proposed a set of semantics and abstraction for making audiovisual collaboration more expressive – starting with ideas as fundamental as describing a kick beat. Now, he returns to show us what he actually means in his work, element by element. -Ed.

This is the current version of The Circus Of Lost Souls in its entirety. I’d like to break down each scene and element to give you some insight into how it’s all built and controlled.

In its current form, the visuals for this song are basically self-running when we play live. I take a MIDI cable from the band’s audio setup and run the data through a local copy of Ableton Live in order to trigger the visuals on my end. I have failsafe options for every animation in this song – buttons and sliders I can use to control the animation if something goes haywire with our MIDI connection, though that has thankfully not happened yet during a show.

The Max4Live patches that I used for communication on this project are almost ready for some testing. Join the beta if you want to help!


Minimal Lights

This scene starts the show with some simple yellow orbs. The song opens with five hits of this starting sound; I made the first one trigger /accent/1 and the following four trigger /accent/2. I mapped these two inputs to functions called ‘trigger’ and ‘triggerMini’ within the scene.
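In code terms, the routing might look something like this minimal sketch – plain Python, no OSC library, and with a hypothetical MinimalLights class standing in for the real scene (only the message addresses and function names come from the description above):

```python
# Hypothetical scene object: 'trigger' and 'triggerMini' match the function
# names described above; the orb counters are purely illustrative.
class MinimalLights:
    def __init__(self):
        self.big_orbs = 0
        self.small_orbs = 0

    def trigger(self):        # mapped to /accent/1
        self.big_orbs += 1

    def triggerMini(self):    # mapped to /accent/2
        self.small_orbs += 1

def route(scene, address):
    """Look up the exact address in this scene's routing table."""
    handlers = {"/accent/1": scene.trigger, "/accent/2": scene.triggerMini}
    if address in handlers:
        handlers[address]()

# The five opening hits: one /accent/1 followed by four /accent/2.
scene = MinimalLights()
for msg in ["/accent/1", "/accent/2", "/accent/2", "/accent/2", "/accent/2"]:
    route(scene, msg)
print(scene.big_orbs, scene.small_orbs)  # prints: 1 4
```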



Here we reveal where the orbs are coming from. This scene looks for any message addressed to /accent (so it hears both /accent/1 and /accent/2) and calls the ‘trigger’ function within the scene. If a scene only has one function, I often call it ‘trigger’, renaming it later if the scene grows more complex.
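The prefix-matching idea can be sketched like this (an assumed dispatcher, not the actual show code – the point is just that registering /accent catches every address beneath it):

```python
def dispatch(address, handlers):
    """Call every handler whose registered prefix matches the address."""
    for prefix, fn in handlers.items():
        # Match the prefix itself, or anything nested under it.
        if address == prefix or address.startswith(prefix.rstrip("/") + "/"):
            fn(address)

hits = []
handlers = {"/accent": lambda a: hits.append(a)}  # one handler for the whole family

dispatch("/accent/1", handlers)   # caught
dispatch("/accent/2", handlers)   # caught
dispatch("/kick", handlers)       # ignored by this scene
print(hits)  # prints: ['/accent/1', '/accent/2']
```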



This scene maps the /kick/ and /snare/ messages to ‘triggerTop’ and ‘triggerBottom’. The scene changes throughout are also triggered via MIDI. These could easily be left out to allow the visualist more control over which view we see. Since the /accent/, /kick/ and /snare/ messages are being sent throughout the entire song, you can switch to any scene and it will respond appropriately.
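The "switch at any time" behaviour falls out naturally if the drum messages stream continuously and are simply handed to whichever scene is active. A sketch, with invented scene names and an assumed "/scene <name>" switching message:

```python
# Illustrative scene: 'triggerTop' and 'triggerBottom' match the function
# names above; everything else is a stand-in.
class Scene:
    def __init__(self):
        self.hits = []
    def triggerTop(self):    self.hits.append("kick")
    def triggerBottom(self): self.hits.append("snare")

scenes = {"sat1": Scene(), "sat2": Scene()}
current = scenes["sat1"]

def handle(address):
    global current
    if address.startswith("/scene "):      # e.g. "/scene sat2"
        current = scenes[address.split()[1]]
    elif address == "/kick":
        current.triggerTop()
    elif address == "/snare":
        current.triggerBottom()

# Drums keep arriving before and after the scene change;
# the newly active scene picks them up immediately.
for msg in ["/kick", "/snare", "/scene sat2", "/kick"]:
    handle(msg)
print(scenes["sat1"].hits, scenes["sat2"].hits)  # prints: ['kick', 'snare'] ['kick']
```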



Like Sat1, this scene maps /accent/ messages to a ‘trigger’ function.



This scene is a bit of a cheat – its start is triggered via a /scene "chorus1" message, and it then runs in time with the music. I’ve got 8 backup triggers that let me jump to the different lines in case things go pear-shaped.
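The backup triggers amount to jump points in a self-running sequence. A toy version, with placeholder line contents (the real scene presumably advances on a clock rather than on explicit ticks):

```python
# Eight placeholder lines; each backup trigger jumps straight to one.
lines = [f"line {i}" for i in range(8)]
position = 0

def tick():
    """Advance the self-running sequence by one step and return the current line."""
    global position
    current = lines[position]
    position = min(position + 1, len(lines) - 1)
    return current

def jump_to(i):
    """One backup trigger per line: yank the sequence to line i."""
    global position
    position = i

assert tick() == "line 0"   # running normally
jump_to(5)                  # things went pear-shaped; jump ahead
assert tick() == "line 5"   # back in sync with the music
```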


First Dance

This was the first really tricky scene. I wanted to match the great oscillations of the synthesizer, but it was a Massive synth line, and there was no data I could find that matched the final output. My solution was to record and then clean up a MIDI envelope that matched the warbling output of the sound, then turn that into a stream of /bass messages, mapped to a ‘dance’ function in the scene that takes a number between 0 and 1 to affect the animation. The envelope looks like this:
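In code, the idea is an envelope sampled once per frame and fed to the scene as a stream of 0–1 values. A minimal sketch – the sine wobble here is a stand-in for the envelope traced from the synth, and the FirstDance class is hypothetical:

```python
import math

def envelope(t):
    """Toy warble standing in for the recorded envelope, scaled into 0..1."""
    return 0.5 + 0.5 * math.sin(t * 2 * math.pi)

class FirstDance:
    def __init__(self):
        self.amount = 0.0

    def dance(self, value):
        # Value arrives as /bass <float>; clamp defensively to 0..1.
        self.amount = max(0.0, min(1.0, value))

scene = FirstDance()
for frame in range(4):              # sample the envelope once per frame
    scene.dance(envelope(frame / 4))  # i.e. send a /bass message
print(round(scene.amount, 3))  # prints: 0.0
```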


UFO Sing

This scene has two methods: ‘sing’, which takes a string and displays it in a speech bubble, and ‘unsing’, which hides the bubble away. These are triggered by the messages /lyrics "We are Bought and Sold" and /lyrics/clear.
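A sketch of that pair of methods – the addresses and method names come from the description above, the bubble handling is illustrative:

```python
class UfoSing:
    def __init__(self):
        self.bubble = None            # no speech bubble showing

    def sing(self, text):
        self.bubble = text            # show the bubble with this lyric

    def unsing(self):
        self.bubble = None            # hide the bubble away

def handle(scene, address, *args):
    if address == "/lyrics/clear":
        scene.unsing()
    elif address == "/lyrics":
        scene.sing(args[0])

scene = UfoSing()
handle(scene, "/lyrics", "We are Bought and Sold")
print(scene.bubble)   # prints: We are Bought and Sold
handle(scene, "/lyrics/clear")
print(scene.bubble)   # prints: None
```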


Earth Sing

This scene works the same way as UFO Sing, with a legible font instead of a symbolic one.



This one sneaks in, since its default state looks identical to that of Earth Sing. When it detects that it’s live, however, it flies the UFO in, and then both entities respond to /lyrics and /lyrics/clear messages. Additionally, a trigger is mapped to a ‘beginTransfer’ function that is timed to the breakdown.


Earth Dance

A scene built from UFO Dance, with a different sprite in the foreground (the Earth). Everything else is the same – cheers for reusable assets!



No interaction here, just some scene-setting.



Reusing assets again, this scene has Duet embedded in a frame of people in a movie theater.



This is a two-part scene. It is not reactive until the ‘zoom’ function is triggered, at which point an envelope controls the ‘dance’ function of the eyes, using code evolved from the earlier dance segments.


Last Chorus

No interaction – though I have a ‘fadeOut’ function I trigger for narrative purposes.