Examples of OpenCV routines from the Processing library documentation. Of course, it’s up to you to build on these techniques and make art.

It’s a relatively easy thing for computers to “see” video, but “computer vision” goes a step further, applying a wide range of techniques by which computers can begin to understand and process the content of a video input. These techniques tend toward the primitive, but they can also produce aesthetically beautiful results. The best place to start with computer vision has long been the standard library, OpenCV. A free (as in beer and freedom) library developed by Intel and in ongoing use in a variety of applications, OpenCV is a terrific, C/C++-based tool not just for things like motion tracking, but for video processing in general. OpenCV gets a lot of support in the C++-based openFrameworks, but that doesn’t mean Java and Processing have to be left out of the fun. (In fact, I’d still recommend starting with Processing for OpenCV, because other tasks remain easier in Java.)

I expect to have us working a lot with OpenCV over the course of 2009. To get the ball rolling, I’m pleased to welcome guest writer Andy Best, a self-described “Interaction Designer/visualist/musician/general technology tinkerer.”

Andy Best

Before you get started, be sure to grab the library, following the instructions here (and check out the nice, clear examples):
OpenCV library

Note that this will also work with standard Java projects, not just Processing. I’ll also be doing some cross-platform testing on Mac, Windows, and Linux with video processing using this library and GStreamer/GSVideo for Processing.

Here’s Andy:

Learning OpenCV with Processing

A relatively new addition to Processing is the OpenCV library, which ports some of OpenCV’s functions over to Processing and is designed for realtime video processing.

In this tutorial I am going to cover some of the basic functionality of the OpenCV library for Processing in order to create an effect for live video, so you will need to install the library first. Instructions can be found on the site linked above. I shall try to stick to OpenCV library functions and native Processing functions to make things a bit easier, which means there is lots of room for optimisation in the final product.

The Processing implementation of OpenCV contains functions for image and video capture, image manipulation, blob detection and object detection.

A basic OpenCV sketch framework that captures frames from the camera and displays them onscreen is shown below:

[sourcecode language=’java’]
import hypermedia.video.*; // Imports the OpenCV library
OpenCV opencv; // Creates a new OpenCV Object

void setup()
{

size( 320, 240 );

opencv = new OpenCV( this ); // Initialises the OpenCV object
opencv.capture( 320, 240 ); // Opens a video capture stream

}

void draw()
{

opencv.read(); // Grabs a frame from the camera
image( opencv.image(), 0, 0 ); // Displays the image in the OpenCV buffer to the screen

}
[/sourcecode]

The OpenCV library has an image buffer. Before an operation can be performed on an image, the image needs to be copied into the buffer, like so:

[sourcecode language=’java’]
PImage anImage;
opencv.copy(anImage);
[/sourcecode]

Once the image is in the buffer, OpenCV functions can be performed on it. Operations like capturing a camera frame put that frame into the buffer automatically.

A useful function is absDiff() (absolute difference), which calculates the difference between two images: the one in the regular buffer and the one in a second buffer (where an image can be stored using the remember() function). absDiff() can be used for a number of useful tasks. One of these is background subtraction:

[sourcecode language=’java’]
import hypermedia.video.*; // Imports the OpenCV library
OpenCV opencv; // Creates a new OpenCV Object

void setup()
{

size( 320, 240 );

opencv = new OpenCV( this ); // Initialises the OpenCV object
opencv.capture( 320, 240 ); // Opens a video capture stream

}

void draw()
{

opencv.read(); // Grabs a frame from the camera
opencv.absDiff(); // Calculates the absolute difference
image( opencv.image(), 0, 0 ); // Display the difference image

}

void keyPressed()
{
opencv.remember(); // Remembers a frame when a key is pressed
}
[/sourcecode]

When a key is pressed, the current frame is stored in memory and is used when calculating the absolute difference. This means that only changes between the stored frame and the current frame are shown.
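Under the hood, absolute difference is just a per-pixel subtraction with the sign dropped. Here’s a rough plain-Java sketch of the idea, using grayscale pixel arrays in place of the library’s internal buffers (the class and variable names here are my own, not part of the OpenCV library):

```java
// Illustration of per-pixel absolute difference on grayscale values (0-255).
public class AbsDiffDemo {
    // Returns |current[i] - remembered[i]| for each pixel pair.
    static int[] absDiff(int[] current, int[] remembered) {
        int[] out = new int[current.length];
        for (int i = 0; i < current.length; i++) {
            out[i] = Math.abs(current[i] - remembered[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] storedFrame  = { 10, 200, 128, 0 };   // the frame saved with remember()
        int[] currentFrame = { 10, 180, 255, 30 };  // the frame just read from the camera
        int[] diff = absDiff(currentFrame, storedFrame);
        // Unchanged pixels go to 0; changed pixels keep the size of the change.
        System.out.println(java.util.Arrays.toString(diff)); // [0, 20, 127, 30]
    }
}
```

Pixels that match the remembered frame come out black (0), which is why only the changes survive.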

Another use for absolute difference is detecting movement. If the absolute difference is calculated between the current frame and the frame before it, only the movement is shown:

[sourcecode language=’java’]
import hypermedia.video.*; // Imports the OpenCV library
OpenCV opencv; // Creates a new OpenCV Object

void setup()
{

size( 320, 240 );

opencv = new OpenCV( this ); // Initialises the OpenCV object
opencv.capture( 320, 240 ); // Opens a video capture stream

}

void draw()
{

opencv.read(); // Grabs a frame from the camera
opencv.absDiff(); // Calculates the absolute difference from the previous frame
image( opencv.image(), 0, 0 ); // Display the difference image
opencv.remember(); // Remembers the current frame
}
[/sourcecode]

This movement image can be made more useful by converting it to greyscale and running it through a threshold filter. This converts the pixels to black and white, which means you can tell whether movement has occurred in a specific area of the image by checking whether it contains white pixels. The standard threshold function takes a single argument, a value between 0 and 255: all pixels with a brightness equal to or above this value become white, and all others become black.

[sourcecode language=’java’]
opencv.convert( OpenCV.GRAY );
opencv.threshold(20);
[/sourcecode]
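To make the cutoff concrete, here is what a threshold of 20 does to individual grayscale values, sketched in plain Java (the helper name is illustrative, not a library function):

```java
// Illustration of a binary threshold: values >= cutoff become white (255), the rest black (0).
public class ThresholdDemo {
    static int threshold(int value, int cutoff) {
        return value >= cutoff ? 255 : 0;
    }

    public static void main(String[] args) {
        int[] pixels = { 5, 19, 20, 200 };
        for (int p : pixels) {
            System.out.println(p + " -> " + threshold(p, 20));
        }
        // 5 -> 0, 19 -> 0, 20 -> 255, 200 -> 255
    }
}
```

Low-level camera noise in the difference image falls below the cutoff and is discarded, so only genuine movement registers as white.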

We’re going to use this to create an effect that will overlay a coloured trail of the movement on top of a camera input.

First, let’s create the trails.

[sourcecode language=’java’]
import hypermedia.video.*; // Imports the OpenCV library
OpenCV opencv; // Creates a new OpenCV Object
PImage trailsImg; // Image to hold the trails
int hCycle; // A variable to hold the hue of the image tint

void setup()
{

size( 320, 240 );

opencv = new OpenCV( this ); // Initialises the OpenCV object
opencv.capture( 320, 240 ); // Opens a video capture stream
trailsImg = new PImage( 320, 240 ); // Initialises trailsImg
hCycle = 0; // Initialise hCycle
}

void draw()
{

opencv.read(); // Grabs a frame from the camera
opencv.absDiff(); // Calculates the absolute difference
opencv.convert( OpenCV.GRAY ); // Converts the difference image to greyscale
opencv.blur( OpenCV.BLUR, 3 ); // Blurs the difference image to reduce camera noise before thresholding
opencv.threshold( 20 );

trailsImg.blend( opencv.image(), 0, 0, 320, 240, 0, 0, 320, 240, SCREEN ); // Blends the movement image with the trails image

colorMode(HSB); // Changes the colour mode to HSB so that we can change the hue
tint(color(hCycle, 255, 255)); // Sets the tint so that the hue is equal to hcycle and the saturation and brightness are at 100%
image( trailsImg, 0, 0 ); // Display the blended difference image
noTint(); // Turns tint off
colorMode(RGB); // Changes the colour mode back to the default

opencv.copy( trailsImg ); // Copies trailsImg into OpenCV buffer so we can put some effects on it
opencv.blur( OpenCV.BLUR, 4 ); // Blurs the trails image
opencv.brightness( -20 ); // Lowers the brightness of the trails image by 20 so it fades out over time
trailsImg = opencv.image(); // Puts the modified image from the buffer back into trailsImg

opencv.remember(); // Remembers the current frame

hCycle++; // Increments the hCycle variable by 1 so that the hue changes each frame
if (hCycle > 255) hCycle = 0; // If hCycle is greater than 255 (the maximum value for a hue) then make it equal to 0
}
[/sourcecode]

The trails are created by blending the movement image with the contents of the trails image each frame. This image is then blurred and darkened so that the trails blur and fade out over time.
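The SCREEN blend mode used here is what keeps the trails bright without harsh clipping: per channel, it inverts both values, multiplies them, and inverts the result. A rough plain-Java sketch of the formula (my own helper, not a Processing call):

```java
// Illustration of the SCREEN blend used for the trails:
// result = 255 - (255 - a) * (255 - b) / 255, per channel.
public class ScreenBlendDemo {
    static int screen(int a, int b) {
        return 255 - ((255 - a) * (255 - b)) / 255;
    }

    public static void main(String[] args) {
        // Screening with black leaves a value unchanged; screening with white saturates.
        System.out.println(screen(100, 0));   // 100
        System.out.println(screen(100, 255)); // 255
        System.out.println(screen(200, 100)); // 222 -- brighter than either input
    }
}
```

Because screening with black is a no-op, the black areas of the thresholded movement image leave the existing trails untouched, while the white areas punch new trail pixels in.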

Before the trails image is drawn to the screen, Processing’s built-in tint() function is applied. To cycle colours easily, the colour mode is switched to HSB (hue, saturation and brightness) so that the hue can be cycled with the hCycle variable.

Finally, we can overlay this coloured trails image on the original webcam image by creating a holder for the original camera image and blending it with the trails image.

[sourcecode language=’java’]
import hypermedia.video.*; // Imports the OpenCV library
OpenCV opencv; // Creates a new OpenCV Object
PImage trailsImg; // Image to hold the trails
int hCycle; // A variable to hold the hue of the image tint

void setup()
{

size( 320, 240 );

opencv = new OpenCV( this ); // Initialises the OpenCV object
opencv.capture( 320, 240 ); // Opens a video capture stream
trailsImg = new PImage( 320, 240 ); // Initialises trailsImg
hCycle = 0; // Initialise hCycle
}

void draw()
{

opencv.read(); // Grabs a frame from the camera
PImage camImage; // Creates an image and
camImage = opencv.image(); // stores the unprocessed camera frame in it

opencv.absDiff(); // Calculates the absolute difference
opencv.convert( OpenCV.GRAY ); // Converts the difference image to greyscale
opencv.blur( OpenCV.BLUR, 3 ); // Blurs the difference image to reduce camera noise before thresholding
opencv.threshold( 20 );

trailsImg.blend( opencv.image(), 0, 0, 320, 240, 0, 0, 320, 240, SCREEN ); // Blends the movement image with the trails image

colorMode(HSB); // Changes the colour mode to HSB so that we can change the hue
tint(color(hCycle, 255, 255)); // Sets the tint so that the hue is equal to hcycle and the saturation and brightness are at 100%
image( trailsImg, 0, 0 ); // Display the blended difference image
noTint(); // Turns tint off
colorMode(RGB); // Changes the colour mode back to the default

blend( camImage, 0, 0, 320, 240, 0, 0, 320, 240, SCREEN ); // Blends the original image with the trails image

opencv.copy( trailsImg ); // Copies trailsImg into OpenCV buffer so we can put some effects on it
opencv.blur( OpenCV.BLUR, 4 ); // Blurs the trails image
opencv.brightness( -20 ); // Lowers the brightness of the trails image by 20 so it fades out over time
trailsImg = opencv.image(); // Puts the modified image from the buffer back into trailsImg

opencv.remember(); // Remembers the current frame

hCycle++; // Increments the hCycle variable by 1 so that the hue changes each frame
if (hCycle > 255) hCycle = 0; // If hCycle is greater than 255 (the maximum value for a hue) then make it equal to 0
}
[/sourcecode]

Since Processing doesn’t allow tint() to be applied to an image directly, the trails image is rendered to the screen with the tint active; tint is then turned off, and the original camera image is blended over it.

That pretty much wraps it up for this tutorial; I hope you found it useful. Be sure to let me know what you do with it! I’ve got another OpenCV tutorial on my site that shows how to use the difference image to perform collision detection with objects. You can check out my work and other Processing tutorials here. Thanks to Peter Kirn for putting this out to the masses!