Adobe’s Pixel Bender can produce all manner of filter eye candy, like these selections from users at the Pixel Bender Exchange (many of them free to use).

In case you didn’t catch the comments on my Pixel Bender preview, there was some good discussion, including from Kevin Goldsmith, who manages the Adobe Image Foundation group; he blogs at digital-motion.net and, specifically on these technologies, at blogs.adobe.com (and he’s a musician, too).

I overstated the importance of Pixel Bender being GPU-accelerated. First off, of course, you don’t really care where code executes, whether on the CPU or GPU; you care about whether it’s fast and whether it does what you need. In this case, different CS4 tools accomplish that differently:

  • After Effects CS4: GPU-accelerated
  • Photoshop CS4: GPU-accelerated (as is the new pan/rotate/zoom feature, as I indicated)
  • Flash 10 (in CS4 Suites): CPU, multi-threaded
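To make that concrete: a Pixel Bender kernel is essentially a tiny function that gets run once per pixel, and it’s the host application that decides which processor to schedule it on. Purely as an illustration of that per-pixel model (this is CUDA, not Adobe’s runtime, and the names are my own), here’s what a trivial “invert” filter looks like when each GPU thread handles one pixel:

    #include <cuda_runtime.h>

    // Hypothetical RGBA8 image: width * height pixels, 4 bytes each.
    // Each thread handles exactly one pixel -- the same "one function per
    // pixel" model a Pixel Bender kernel expresses.
    __global__ void invertKernel(unsigned char* pixels, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int i = (y * width + x) * 4;          // byte offset of this pixel
        pixels[i + 0] = 255 - pixels[i + 0];  // R
        pixels[i + 1] = 255 - pixels[i + 1];  // G
        pixels[i + 2] = 255 - pixels[i + 2];  // B
        // alpha (pixels[i + 3]) left untouched
    }

    // Host-side launch: one thread per pixel, in 16x16 blocks.
    void invertOnGPU(unsigned char* devPixels, int width, int height)
    {
        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);
        invertKernel<<<grid, block>>>(devPixels, width, height);
        cudaDeviceSynchronize();
    }

Nothing in the per-pixel function itself cares whether it ends up on a GPU, one CPU core, or eight; that choice belongs to the host application, which is exactly the difference the list above reflects.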

So, it was effectively the Flash answer I got wrong. That’s not to say Flash doesn’t use the GPU; it accelerates some drawing routines with the GPU (as, incidentally, Java and by extension Processing have done in the past, even in 2D).

Tinic Uro, an engineer on the Flash team, explains that the reason is compatibility:

Running filters on a GPU has a number of critical limitations. If we had supported the GPU to render filters in this release we would have had to fall back to software in many cases. Even if you have the right hardware.

In other words, it was better to focus on CPU performance and multi-threading, since not everyone would use the GPU anyway. If you’re interested in performance in Flash, you should read Tinic’s whole story (also linked from the comments). Notably, x86 machines with SSE2 will benefit, while the PowerPC, because it runs Pixel Bender in interpreted mode, becomes an even more second-class citizen. (I’m not going to editorialize there, as this is increasingly becoming the case.)
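Tinic’s “CPU, multi-threaded” route is also easy to picture. As a sketch only (my own guess at the obvious work split, written as host-side C++, not Flash’s actual scheduler), you slice the image into horizontal bands and hand one band to each hardware thread:

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // CPU fallback: run the same per-pixel "invert" over an RGBA8 image,
    // splitting the rows across one worker thread per hardware core.
    // A sketch of the general multi-threaded approach, not Flash's code.
    void invertOnCPU(unsigned char* pixels, int width, int height)
    {
        int numThreads = (int)std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;

        for (int t = 0; t < numThreads; ++t) {
            int rowBegin = height * t / numThreads;
            int rowEnd   = height * (t + 1) / numThreads;
            workers.emplace_back([=]() {
                for (int y = rowBegin; y < rowEnd; ++y) {
                    unsigned char* row = pixels + (size_t)y * width * 4;
                    for (int x = 0; x < width; ++x) {
                        row[x * 4 + 0] = 255 - row[x * 4 + 0];  // R
                        row[x * 4 + 1] = 255 - row[x * 4 + 1];  // G
                        row[x * 4 + 2] = 255 - row[x * 4 + 2];  // B
                    }
                }
            });
        }
        for (std::thread& w : workers) w.join();
    }

As I read Tinic’s post, the SSE2 detail sits one level below this: the per-pixel math inside each band gets compiled down to vector instructions on x86 chips that support them, while PowerPC runs the same code interpreted, hence the second-class-citizen remark.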

Adobe Pixel Bender in Flash Player 10 Beta [Kaourantin.net]

Kevin makes another point: assuming the GPU is always better is a mistake.

The chips on graphics cards (GPUs) are extremely efficient processors capable of doing lots of math in parallel and have the benefits of fast local memory with a super fast connection to the processor. This makes them ideal for the kinds of things that Pixel Bender does. However, this super-efficient processor is connected to the main computer processor by a not-so-fast connection, the bus. Moving data on and off of the GPU is expensive relative to doing things on the GPU directly.

What all these things have in common is parallelism. The application, where you need to deploy (in the case of things like Flash, at least), and the relative performance bottlenecks of each will dictate which you use. For instance, in audio, despite some intriguing experiments, I expect we’ll see more multi-threading on the CPU rather than the GPU because of latency cost, difficulty of programming, and compatibility issues. But for computer vision, I could imagine GPU-native routines being a lot more interesting (which, in turn, frees up your CPU for stuff like audio).

CPU, GPU, multi-core [Kevin Goldsmith]
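Kevin’s bus point is easy to see with a quick timing harness. This is a hypothetical measurement, and the numbers depend entirely on the hardware; the only point is that pushing an HD frame across the bus is not free compared with a trivial amount of per-pixel work on the device:

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Trivial per-byte work on the device, just so there is something to time.
    __global__ void touchBytes(unsigned char* p, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i] = 255 - p[i];
    }

    int main()
    {
        const int width = 1920, height = 1080;
        const int bytes = width * height * 4;          // one RGBA8 HD frame
        std::vector<unsigned char> frame(bytes, 128);  // dummy image data

        unsigned char* dev = nullptr;
        cudaMalloc(&dev, bytes);

        cudaEvent_t start, afterCopy, afterKernel;
        cudaEventCreate(&start);
        cudaEventCreate(&afterCopy);
        cudaEventCreate(&afterKernel);

        cudaEventRecord(start);
        cudaMemcpy(dev, frame.data(), bytes, cudaMemcpyHostToDevice);  // across the bus
        cudaEventRecord(afterCopy);

        touchBytes<<<(bytes + 255) / 256, 256>>>(dev, bytes);          // on the GPU
        cudaEventRecord(afterKernel);
        cudaEventSynchronize(afterKernel);

        float copyMs = 0.0f, kernelMs = 0.0f;
        cudaEventElapsedTime(&copyMs, start, afterCopy);
        cudaEventElapsedTime(&kernelMs, afterCopy, afterKernel);
        std::printf("upload: %.3f ms   kernel: %.3f ms\n", copyMs, kernelMs);

        cudaFree(dev);
        return 0;
    }

Run frame after frame, as a video filter would be, that copy is exactly the “not-so-fast connection” Kevin describes; for a simple filter it can easily dominate the kernel itself, which is why keeping data on one side of the bus matters so much.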

I think in the case of Pixel Bender, the simplicity of what you’re likely to do with Flash would make it almost a non-issue. (After Effects could be a different story, assuming you start layering on things that aren’t necessarily real-time, even on your high-end studio hardware.) But what about other things? This deserves its own post, but the CPU versus GPU arguments are back with a vengeance. The problem I have is that, yes, CPUs are getting better at adding lots of parallelism and cores, but at the same time GPU architectures are moving closer to the processor (witness the AMD/ATI deal), which could reduce the bottlenecks that give CPUs an edge. One of the main arguments made by the “GPU is dead” crowd is that general-purpose computing on the GPU (GPGPU), specifically via the OpenCL API, will make programming for the GPU more like programming for the CPU, but that would seem to suggest more longevity for GPUs, not less. Generally, what we’re seeing is a convergence. And maybe Pixel Bender is a good place to look at that: you have a single programming paradigm, but you deploy on the processor (CPU or GPU) that makes the most sense at the time, and then you try to make it run as fast as you possibly can in each context.
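That single-paradigm idea is easy to sketch, too: one entry point that checks for a usable GPU and otherwise falls back to a threaded CPU path. This is only an illustration of the pattern, built on the hypothetical helpers from the sketches above, not how Adobe’s runtime actually decides:

    #include <cuda_runtime.h>
    #include <cstddef>

    // Hypothetical helpers from the earlier sketches.
    void invertOnGPU(unsigned char* devPixels, int width, int height);
    void invertOnCPU(unsigned char* pixels, int width, int height);

    // One entry point, two back ends: use the GPU if one is present and
    // the allocation succeeds; otherwise run the threaded CPU version.
    void invertImage(unsigned char* hostPixels, int width, int height)
    {
        int deviceCount = 0;
        if (cudaGetDeviceCount(&deviceCount) == cudaSuccess && deviceCount > 0) {
            size_t bytes = (size_t)width * height * 4;
            unsigned char* dev = nullptr;
            if (cudaMalloc(&dev, bytes) == cudaSuccess) {
                cudaMemcpy(dev, hostPixels, bytes, cudaMemcpyHostToDevice);
                invertOnGPU(dev, width, height);
                cudaMemcpy(hostPixels, dev, bytes, cudaMemcpyDeviceToHost);
                cudaFree(dev);
                return;
            }
        }
        invertOnCPU(hostPixels, width, height);  // software fallback
    }

Which is roughly the decision Tinic describes Flash making, only made once at engineering time rather than per machine at run time.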

For those of you bending pixels, the good news is that your tools are getting better. For live visuals, we may see a blurring of the lines between applications, but more powerful CPUs and GPUs should mean more sophisticated video codecs, better video performance, and richer real-time effects that are easier to create.

If you’ve made a Pixel Bender creation, or seen a good one, do share it with us! (And Core Image / GLSL / HLSL nuts, etc., feel free to strike back with some of your own favorites.)

Pixel Bender Exchange [Adobe Communities]