Jitter works brilliantly when it comes to processing signals – and that means it's fantastic for signal-like work with video and textures, as well as the usual Max-y tasks, like processing input from physical sensors and input devices. But try to do a whole lot of sophisticated 3D work, and Jitter may not be the best tool. For game-style 3D graphics and interaction, you want standardized rendering and scene graph tools to take care of the hard work, plus physics and other capabilities that bring your 3D scene together.

That’s why [myu], the Max – Unity Interoperability Toolkit, looks so appealing. It not only allows for bi-directional data integration (via TCP) between Max and the Unity game engine, but can dynamically pass textures between the two. For those of you comfortable patching, say, chains of shader processors in Jitter, that means you can very quickly add some of the tasty 3D scene powers of Unity. Put together your textures in Jitter, dynamically process input from, say, a Wii Fit balance board, then bring the input data and textures into Unity. (Unity is a friendly, elegant game engine built on C# and Novell’s open-source Mono implementation of Microsoft’s .NET. Unity had previously been Mac-only, but with a major new release it now runs on both Mac and Windows.)
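To make the data side concrete, here’s a minimal, hedged sketch of a bi-directional TCP exchange – a stand-in for the kind of link myu sets up, not its actual wire format, which isn’t documented here. One end plays the Max role (sending a sensor reading), the other the Unity role (replying with an acknowledgment); the `/balance` and `/ack` messages are invented for illustration:

```python
import socket

# A local socket pair stands in for the Max<->Unity TCP connection.
max_side, unity_side = socket.socketpair()

# "Max side": send a sensor reading (say, from a Wii Fit balance board) as text.
max_side.sendall(b"/balance 0.42 0.58\n")

# "Unity side": receive the reading, then reply with an acknowledgment.
reading = unity_side.recv(1024).decode().strip()
unity_side.sendall(f"/ack {reading.split()[1]}\n".encode())

# Back on the "Max side": read the reply, closing the loop.
reply = max_side.recv(1024).decode().strip()

max_side.close()
unity_side.close()
```

The point is only that the channel is two-way and message-based; the real toolkit handles framing, reconnection, and texture payloads on top of this.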

The toolkit is the result of research by Dr. Ivica Ico Bukvic, director of the Interactive Sound & Intermedia Studio at Virginia Tech.

Needless to say, this could have powerful implications for all kinds of live and interactive installation applications. And yes, it is all released under the GPL.

[myu] Max-Unity Interoperability Toolkit v.1.0 Released [Cycling ’74 Forum]

More Max+Unity Game Engine Goodness, with Powerful Toolkit for Max, Jitter, Pd

For other examples of combining Max and Unity – in this case for Max’s musical powers and Unity’s gaming prowess – see another story from today:

Teaching Adaptive Music with Games: Unity + Max/MSP, Meet Space Invaders! [Create Digital Music]

Updated: About those textures…

Ico follows up to answer our questions about how you might use textures with Jitter and the Unity Game Engine, via his [myu] toolkit:

You can definitely go [Max] GL->Unity. One way would be to take Max’s jit.gl.render object and render to texture (this is done rather efficiently on the GPU), then output the texture into a jit.matrix and relay the jit.matrix info via myu. There is a redundant (and AFAIK currently unavoidable) step in the process that will introduce CPU overhead for offloading the GL texture from the GPU into RAM, but this is probably the most efficient way, short of delving into the possibility of creating shared memory directly on the GPU (which I am not even sure is possible – perhaps via shaders or CUDA, the latter being the less preferred way as it is hardware-specific). This hypothetical GPU shared-memory scenario would in all likelihood be limited to having both clients running on the same machine.
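The pipeline described here – GPU render-to-texture, then the redundant GPU-to-RAM readback, then relaying the matrix over the network – can be sketched in outline. This is purely illustrative; the function names are hypothetical, and in the real patch the work happens inside jit.gl.render, jit.matrix, and myu’s transport:

```python
def render_to_texture(width, height):
    """Stand-in for jit.gl.render rendering a scene to a GPU texture.
    Here the 'texture' is just a zeroed RGBA8 buffer."""
    return bytes(width * height * 4)

def readback_to_matrix(texture):
    """The redundant GPU->RAM step: offload the texture into a jit.matrix.
    In the real pipeline, this explicit copy is what costs CPU time."""
    return bytearray(texture)

def relay(matrix, send):
    """Hand the matrix to the transport (myu's TCP link in the real patch)."""
    send(bytes(matrix))

sent = []                       # collects whatever the transport "sends"
tex = render_to_texture(64, 64)
mat = readback_to_matrix(tex)
relay(mat, sent.append)
```

The shape of the sketch is the point: two cheap steps bracketing one unavoidable copy, which is why the hypothetical GPU shared-memory route comes up at all.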

Another way to do it is to use the jit.desktop object, which can capture just about anything that is being rendered onto the desktop screen (windows, widgets, GL windows, etc.), and then forward the resulting jit.matrix content to Unity. Unity->Max texture importing is equally simple and does not even require myu per se: simply use the same jit.desktop method to capture the Unity scene and send it back to Max. Beyond this, we’ve not yet dealt with exporting dynamic textures from Unity into Max, as IMHO I cannot think of anything that Unity can do to a texture that Max cannot.

Regarding the texture communication protocol: believe it or not, myu transfers dynamic textures via TCP/IP packets without accounting for the ensuing latency (beyond ensuring that a frame is not displayed until all of its packets have arrived). Hence, this should not be used for time-critical operations (at least not in its current iteration) and is best suited for video streams and similar content. FWIW, based on our tests, myu can transport textures up to 1024×1024 with no observable lag (obviously using LAN), albeit at a considerable CPU overhead (due to heavy TCP/IP traffic). The video demo uses 256×256, and its CPU overhead at 25fps was negligible with both apps running on the same machine (G5 2×2.67 Xeon).
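For a sense of scale: assuming 4-byte RGBA cells, a 1024×1024 texture is about 4 MB per frame, or roughly 100 MB/s at 25fps – which squares with the “considerable CPU overhead” above – while 256×256 works out to around 6.5 MB/s. Here’s a hedged sketch of the frame-over-packets idea: split each frame into numbered packets and hold them back until the frame is complete, mirroring the rule that a frame is not displayed until all of its packets have arrived. The header layout and payload size are assumptions for illustration, not myu’s actual format:

```python
import struct

PACKET_PAYLOAD = 1400  # roughly one Ethernet MTU of pixel data (assumption)

def packetize(frame_id, frame):
    """Split one frame into packets: 12-byte header (frame id, packet
    index, packet count) followed by a slice of pixel data."""
    total = (len(frame) + PACKET_PAYLOAD - 1) // PACKET_PAYLOAD
    for i in range(total):
        payload = frame[i * PACKET_PAYLOAD:(i + 1) * PACKET_PAYLOAD]
        yield struct.pack("!III", frame_id, i, total) + payload

class FrameAssembler:
    """Buffer packets per frame; release a frame only once every one of
    its packets has arrived -- the display rule described above."""
    def __init__(self):
        self.pending = {}

    def feed(self, packet):
        frame_id, index, total = struct.unpack("!III", packet[:12])
        chunks = self.pending.setdefault(frame_id, {})
        chunks[index] = packet[12:]
        if len(chunks) == total:          # frame complete: reassemble
            del self.pending[frame_id]
            return b"".join(chunks[i] for i in range(total))
        return None                       # still incomplete: keep waiting

# A fake 256x256 single-channel frame, round-tripped through the scheme.
frame = bytes(range(256)) * 256
asm = FrameAssembler()
out = None
for pkt in packetize(7, frame):
    out = asm.feed(pkt)
```

Note what this design trades away: completeness is guaranteed, but a late packet stalls the whole frame, which is exactly why Ico warns against time-critical use.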

  • I’m very, very curious what the mechanism is for this. Looking at the patches, it seems like it’s streaming Jitter matrices and not actual textures -> textures, thus not dealing with readback latencies. It’s definitely cool, but I don’t think you can go GL -> Unity, can you?

  • eric

    Maybe a bit unrelated, but does anybody here know of a way to get jitter matrices as textures or as images into Processing?

  • In reply to Vade's comments, you can definitely go Max GL->Unity, though it is a two-step process. You would render the GL scene to a texture inside Max (using jit.gl.render – this should be rather efficient, as it is all done on the GPU), and then offload the rendered texture from the GPU into RAM (this part is not very efficient, unfortunately, but still reasonably fast). At this point the rendered scene would become a jit.matrix that is easily transportable. An alternative way, which actually is bi-directional, is to use the jit.desktop object, which captures whatever is on the desktop and converts it into a jit.matrix. This way you could convert either a Max GL window or a Unity window into a jit.matrix for further processing.

    As for Eric's question, I am not aware of anything that does this but it should not be too hard to put together.

    Hope this helps!

    Best wishes,


  • Thanks Ico. I figured the solution was readback. You can probably use jit.gl.asyncread for best performance to go between Max GL and Unity. I was curious if there was a GL-to-GL bridge. I'm not aware of anyone who is doing that. God, I wish there was a way to share contexts between apps!