Lately, I’ve been looking for ways to embrace creative and technological restrictions, to dial back all the high-tech choices possible in making motion. When a DV cam died, it seemed the perfect opportunity to examine a still camera as a means of generating footage. I picked up my Sony DSC-W1, a basic but decent 5 megapixel point-and-shoot. The DSC-W1 has a multi-burst mode that takes a stream of pictures at high speed. I loved this retro, flip-book approach. But I hadn’t used it in so long that I forgot why I had abandoned multi-burst in the first place.
In camera, multi-burst shoots a nice, 16-frame movie. But load the image off your camera, and you get all 16 pictures pasted together into a single, 1280 x 960-pixel image. Looks interesting, but no motion. A quick search revealed other digicam users (and apparently Sony owners aren’t alone) had the same problem. Solutions: 1) manually select each image (uh, no); 2) use a GIMP script written in Perl (two were readily available, but not quite what I needed, and mucking with GIMP and someone else’s Perl is not my idea of a good time); or 3) write a script for Photoshop CS. The point here was to engage the creative process, not create meaningless work for myself.
Sometimes the best solution to a problem isn’t a black box: it’s attacking a problem head-on in the simplest, most elegant way possible. And that’s why Processing is such a joy. (See previous CDMotion coverage.) The solution is so simple, in fact, that it could be a good exercise for people learning image manipulation in the tool, even if they’ve never coded before.
The trick is simple: calculate the x and y position in pixels for each slice, then display only that slice on screen. Processing shows the “movie” at whatever framerate I want by advancing through the slices. For exporting, I can simply capture each frame as an image file using the saveFrame() function. The Sony shoots each still at 320×240, so I hard-coded that as the sketch resolution. Figuring out which column you’re in is a job for the modulo (%) operator, and integer division gives you the row, since the 16 frames sit four per row in four rows. Frame 9, for example, lands in column 9 % 4 = 1 and row 9 / 4 = 2, so its slice starts at x = 320, y = 480.
PImage dress;   // the 1280 x 960 multi-burst image
int frame = -1; // current slice; incremented before each draw
int sliceX;     // source x of the current slice
int sliceY;     // source y of the current slice

void setup() {
  size(320, 240);                 // one slice fills the whole sketch
  dress = loadImage("dress.jpg");
  frameRate(8);                   // playback speed, in frames per second
}

void draw() {
  if (frame < 15) {
    frame++;
    drawPic();
  }
  else {
    noLoop();                     // all 16 slices shown; stop drawing
  }
}

void drawPic() {
  sliceX = (frame % 4) * 320;     // column within the 4 x 4 grid
  sliceY = (frame / 4) * 240;     // row, via integer division
  copy(dress, sliceX, sliceY, 320, 240, 0, 0, 320, 240);
  saveFrame();                    // write this slice out as an image file
  println(frame);
  println("X: " + sliceX + ", Y: " + sliceY);
}
The important call is copy(image, origin x, origin y, width, height, output x, output y, output width, output height), which copies a rectangle of pixels from the source image into the display.
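As a standalone illustration for anyone new to the function, here’s a minimal sketch (assuming the same 1280 x 960 “dress.jpg” burst image) that pulls out just one slice, frame 5:

PImage burst;

void setup() {
  size(320, 240);
  burst = loadImage("dress.jpg");
  int f = 5;               // sixth slice, counting from zero
  int sx = (f % 4) * 320;  // column 1, so x = 320
  int sy = (f / 4) * 240;  // row 1, so y = 240
  copy(burst, sx, sy, 320, 240, 0, 0, 320, 240);
}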
saveFrame() saves the current frame as an image and automatically increments the filename, so you get a folder full of screen-0001, etc. This is useful if you want to manipulate images independently, process them in another program, etc.
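If you’d rather control the naming and format, saveFrame() also accepts a pattern; the frames/ folder and PNG choice here are just my example:

// "####" in the pattern is replaced by the current frame number.
saveFrame("frames/slice-####.png");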
Using Daniel Shiffman’s MovieMaker library for Processing, you can also output QuickTime movies.
noLoop() ends execution of the draw loop. I remove noLoop() and replace it with a reset of the frame counter if I just want to play the movie back and don’t need to save frames or movies.
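As a minimal sketch of that change (my own rewrite, not from the listing above), the looping draw() collapses to a wrap-around counter, with saveFrame() also removed from drawPic():

void draw() {
  frame = (frame + 1) % 16; // advance, wrapping back around to slice 0
  drawPic();                // drawPic() minus the saveFrame() call
}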
And here’s the full movie-export version of the code:
import moviemaker.*;  // Daniel Shiffman's MovieMaker library

PImage dress;            // the 1280 x 960 multi-burst image
MovieMaker movieExport;  // writes frames out to a QuickTime movie
int frame = -1;
int sliceX;
int sliceY;

void setup() {
  size(320, 240);
  dress = loadImage("dress.jpg");
  int rate = 8; // just for convenience
  movieExport = new MovieMaker(this, width, height, "dress.mov",
                               MovieMaker.JPEG, MovieMaker.BEST, rate);
  frameRate(rate);
}

void draw() {
  if (frame < 15) {
    frame++;
    drawPic();
  }
  else {
    noLoop();                  // stop drawing...
    movieExport.finishMovie(); // ...and close out the movie file
  }
}

void drawPic() {
  sliceX = (frame % 4) * 320;  // column within the 4 x 4 grid
  sliceY = (frame / 4) * 240;  // row, via integer division
  copy(dress, sliceX, sliceY, 320, 240, 0, 0, 320, 240);
  loadPixels();                // refresh pixels[] from the display
  movieExport.addFrame(pixels, width, height);
}
You use addFrame() in place of saveFrame(), so you get one movie instead of lots of individual TIFF files. loadPixels() is necessary for any pixel array operations; in this case, that’s what readies the pixel buffer for creation of the movie. Note that you also need to add finishMovie() in addition to noLoop(), in order to complete creation of the movie file.
Having this as Processing code takes what was a solution to a simple problem and introduces new creative possibilities. (Hey, that’s the interesting thing about supposed “restrictions.”) Muck with the source pixel coordinates, and you can intentionally get glitchy results. You can take frames out of order. You can change the framerate. And it goes on from there (one quick example below). On the useful side, you can control upsampling to a higher resolution; as of Processing 0124, adding smooth() to the code gives higher-quality scaling with the default JAVA2D renderer.
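Just as an illustration, here’s a minimal sketch of that kind of mucking (my own variation, not part of the workflow above): pick slices in random order and jitter the source x coordinate.

PImage dress;

void setup() {
  size(320, 240);
  dress = loadImage("dress.jpg");
  frameRate(8);
}

void draw() {
  int f = int(random(16));          // any of the 16 slices, in random order
  int jitter = int(random(-8, 9));  // small horizontal glitch offset
  int sx = constrain((f % 4) * 320 + jitter, 0, dress.width - 320);
  int sy = (f / 4) * 240;
  copy(dress, sx, sy, 320, 240, 0, 0, 320, 240);
}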
I can take either the stills, or movies — or both — and bring them into another tool, like Jitter or Resolume. In fact, in case you’re wondering why I didn’t start in something like Jitter, I have to admit that for a somewhat linear problem like this, I find it easier to head for the code. With more complex iterations through the image data, I’d feel even more strongly about that. I can always shift to Jitter for setting up live processing later. I refuse to come down on one side or the other in the patching vs. coding debate; I just use whatever feels easiest at the time, and in that I think I’m not alone.
For now, I’m happy with control over the frames. But it is generating some interesting ideas. It’s amazing what’s possible with just 16 frames.
I’ll have to check in again once I’ve gone through more Processing sketch iterations and built some work out of this.
But we spend so much time talking about super-fancy Processing projects that it’s easy to forget it can be useful for simple stuff, too.
If you’re interested, check out the stitched-together movie output.