Datamosh? (The “forbidden” but harmlessly meaningless word?) Video squishification? Mushy data?
Call it what you will, but applying real-time distortion and displacement so that video textures become flowing layers of pixels looks absolutely beautiful. Andrew Benson of Cycling ‘74 has only just begun playing with this in Jitter using GLSL shaders, and already the results are really compelling. (For a simpler example that looks more like the compression-artifact technique we’ve seen recently, have a look at the second video – though, personally, I prefer the more sophisticated, layered approach of the video at top. This is going some very cool places.)
This is a Jitter patch, but it would be simple enough to port to Processing, FreeFrameGL (which supports shader code), or other tools, in case you can’t bear being away from your moshness.
I recently posted a couple of Jitter patches and shaders that implement a really basic optical-flow-based distortion effect (think realtime datamosh, among other things) on the Jitter forums:
New Distortions [message #169387]
Super fun to play with. Another excuse to dance in front of your computer, or a way to convince others to do so. PUSHING PIXELS.
The idea came out of some R&D that I was doing for a current collaborative live video thing I’m working on, and the implementation from some random notes on websites scattered around the internets. It all kinda happened by accident while I was distracted… Hope you enjoy.
I’m actually quite interested to see a performance comparison with OpenCV tools for the same technique – and, likewise, how computer vision routines in general can be warped to these sorts of aesthetic purposes. Thanks for this, Andrew, and if this inspires other folks to develop this more, I’d love to see the results / patches / code!