Ableton has published a tech note on reducing CPU load on Macs with Apple Silicon (M1/M2/M3). This is both a good tip and good news: smaller buffer sizes, which also translate to lower latency, now often reduce CPU usage.

Computer audio processing is never truly real-time. Audio is rendered into a buffer before it’s sent to the output so you can hear it. Larger buffers mean a longer delay – greater latency. But to cope with heavier CPU load without producing dropouts in the audio – which sound really unpleasant, as skips and pops – you can try increasing the buffer size.

Here’s where things get interesting. On Intel architectures, the relationship between buffer size and CPU load is generally pretty direct: increase the buffer, and CPU load drops. (Note that this is not quite what I said above: here, we’re actually talking about a CPU load reduction that accompanies the larger buffer size.)

From Ableton help:

Intel:
• Smaller buffer sizes may result in higher CPU usage
• Larger buffer sizes may result in lower CPU usage
Apple Silicon:
• Smaller buffer sizes may result in lower CPU usage
• Larger buffer sizes may result in higher CPU usage

On Apple Silicon – presumably thanks to optimization work Apple has done across the overall architecture – the reverse is now true for many tasks. That means you can not only reduce buffer sizes to reduce latency (which is good), but also reduce CPU load in the process (which is also good). Instead of juggling the two – low latency and low CPU – you now get both at once. We’re talking buffer sizes of “128 or lower” – typically you’ll just set the buffer to exactly 128 samples or possibly (living on the edge) 64.
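To put those buffer sizes in time terms, the buffer’s contribution to latency is just buffer size divided by sample rate. Here’s a minimal Python sketch of that arithmetic – 44.1 kHz is an assumed sample rate (substitute your interface’s actual rate), and real-world round-trip latency adds driver and converter overhead on top of this figure:

```python
def buffer_latency_ms(buffer_size: int, sample_rate: int = 44_100) -> float:
    """Return the audio buffer's contribution to latency, in milliseconds.

    latency (s) = buffer size (samples) / sample rate (samples per second)
    """
    return buffer_size / sample_rate * 1000

# Common buffer sizes and the delay each one adds at 44.1 kHz:
for size in (64, 128, 256, 512):
    print(f"{size:>4} samples -> {buffer_latency_ms(size):.2f} ms")
```

So a 128-sample buffer adds roughly 2.9 ms at 44.1 kHz, and 64 samples roughly 1.45 ms – which is why lower buffer sizes feel so much more responsive when playing software instruments live.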

Thanks to emptyvessel for spotting this one! (check out some awesome sound design work, which I’ve written about before – friend o’ the site!)

I don’t want to misrepresent Ableton’s own statement on this; you should read their article with its caveats. If you figure out a way to max out the CPU – and, frankly, that’s not even that easy with audio these days on my M1 Max – you may need to increase buffer sizes again. But that’s only as you really push the envelope of the CPU.

Here’s the full article, which also goes into how to configure settings (worth a review) and what Ableton has previously shared on performance. In my usage, I’ve found the same advice useful with other software, like Apple’s own Logic Pro, so this is not particularly Ableton-specific.

Reducing the CPU load on Apple Silicon computers [Ableton Help document]

This technical information is consistent with why a lot of us are so happy with Apple Silicon-based machines for audio versus Intel Macs. The performance is just exceptionally solid for audio use. After years of complaining about laptops onstage … I think it makes a pretty good argument for bringing laptops onstage. You can stick the thing in a corner and still focus on controllers and live performance, solving the “I’m staring at a computer” issue. But it resolves what I think was the main fear for a lot of us, which was having unpredictable reliability and responsiveness as we use software instruments and effects.

Ironically, this might even make a good argument for buying the controller version of Push 3 and using your Mac for horsepower, rather than standalone mode. That’s been my tendency, just because I can do so much with an M1 Pro/Max or greater MacBook Pro, and then have full compatibility with third-party effects. That’s not to say I don’t also see the advantages of going standalone – I definitely do. But the “controller plus computer” formula is competitive in a whole lot of situations, which I think is worth saying. As for Push 3, everything I’ve previously written about making setups you can use in standalone is just as applicable to situations where you want to use Push 3 as a controller and not touch the computer trackpad or keyboard or look at the display. Cough, Max for Live devs, this means you!

Let us know how buffer settings work for you on your Apple Silicon machines and with different DAWs, audio drivers, etc. There are a lot of variables here, so it’s tough to reduce this to simple advice. I look forward to hearing from you in comments.

And yes, I’d be curious to investigate this on ARM/Linux platforms, too, especially as ARM-based offerings continue to grow.

Image at the top courtesy Apple, showing unified memory and the 3nm chip itself from the M3 announcement.