“AI” in the popular imagination has become a vision of machines making the music. Rhythmiq is a new plug-in that’s the opposite – software that promises to let you do more with your own grooves.

https://youtu.be/eqM4MDmBjbI

Rhythm is one of the areas where machine learning already seems to excel. The science around these AI techniques at the moment focuses on just this sort of pattern recognition – it’s powerful for analyzing time-domain nuance, like grooves. So for anyone who complains about the cookie-cutter impact of “on the grid” music software, AI might actually offer some hope. “The grid” no longer needs to be a mechanical, perfect division of the beat or a repetitive groove and swing. You can train machines to recognize more sophisticated patterns, and to produce variations accordingly.
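To make that concrete, here’s a deliberately tiny sketch – purely illustrative, and nothing to do with Rhythmiq’s actual internals – of the difference between a uniform swing setting and a per-step timing and velocity profile of the kind you could average from a human-played reference loop. Every name and number below is made up for the example:

```python
# Illustrative toy example only – not how Rhythmiq works internally.
# A quantized 16th-note pattern gets a per-step timing/velocity "groove profile"
# applied, instead of a single uniform swing amount.

# One bar of 16th notes, in beats, perfectly on the grid
quantized = [i * 0.25 for i in range(16)]

# Hypothetical per-step offsets (in beats) and velocity scaling – the sort of
# thing you could average from analyzing a human-played reference loop
timing_profile = [0.0, 0.02, -0.01, 0.03] * 4
velocity_profile = [1.0, 0.7, 0.85, 0.6] * 4

def apply_groove(positions, timing, velocity, base_velocity=100):
    """Return (time_in_beats, velocity) events with the profile applied."""
    return [(round(pos + dt, 3), int(base_velocity * vel))
            for pos, dt, vel in zip(positions, timing, velocity)]

print(apply_groove(quantized, timing_profile, velocity_profile)[:4])
# [(0.0, 100), (0.27, 70), (0.49, 85), (0.78, 60)]
```

Turning that kind of idea into real-time variation on audio is, of course, a much harder problem – and that’s the part Accusonus is actually selling.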

I’ll do a deep dive into how Rhythmiq works another time, but you can certainly count it as an early attempt to steer music software into just these waters. And yeah, the whole idea here is to get more out of your own loops. Accusonus have even produced an elegant-looking interface with hands-on controls, so you can dial in what you want interactively.

The basic workflow is this:

Add a loop. Yep, you can use your own sounds.

Make variations on that loop, by turning an on-screen knob (or mapping that to hardware) – essentially guiding the software’s algorithms where you want them to go. (There’s a rough sketch of this idea just after this list.)

Play the variations in real-time as you jam, even without looking at the screen, for fills, breaks, build-ups, drops, and, uh, whatever else you want as you play.
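For a sense of what “one knob that guides variation” can mean in the abstract, here’s a minimal, hypothetical sketch – again, not Accusonus’ algorithm, just the general idea that a single 0–1 parameter controls how far a pattern is allowed to drift from the source loop:

```python
# Purely hypothetical sketch of a single "variation" control – not Rhythmiq's method.
import random

original = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0]  # 16-step source pattern

def vary(pattern, amount, seed=None):
    """Flip each step with a probability scaled by `amount` (0 = original, 1 = heavily mutated)."""
    rng = random.Random(seed)
    return [1 - step if rng.random() < amount * 0.5 else step for step in pattern]

# Turning the "knob" up departs further and further from the source loop
for knob in (0.0, 0.3, 0.8):
    print(knob, vary(original, knob, seed=42))
```

The real thing obviously has to do this musically, on audio, in time with your set – but the interaction model is the same: you steer, the software fills in the detail.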

Yep, it has controls on it. So this isn’t just a ghost in the shell – the whole idea is to give you something you can play. It’s machines as more interactive, not less.

This is in stark contrast to the primitive way you might be tempted to work with loop- and sample-driven software and hardware. That use case is more like: start a loop, let that loop play repetitively forever, and attempt to jam over the top of it as it gets progressively more annoying. (Whee!) Sure, that works really well for music built on repetitive patterns – behold, the mystery of the techno 4/4 kick. But it applies pretty poorly to everything else.

This also demonstrates that the real-world applications of AI may be more sophisticated, and more appealing to actual musicians, than some of the popular fantasy. We’ve been told for years that AI needs to be autonomous – that it needs to replace us as humans, or come up with ideas when we’re uninspired. If you talk to actual data scientists working in real-world applications of machine learning, though, they will routinely still refer to their work as “AI” without being concerned with this autonomy. Why? Well, as far as I’ve been able to ascertain:

  • a) because it’s not presently possible to make that sort of autonomous machine code, and
  • b) because there isn’t necessarily a real world demand for it.

This should be particularly obvious in music, however. I think musicians want the machines to make the music for them about as much as gamers want their video games to play themselves, or to just watch someone else do it.

No, if you’re willing to invest in music technology, odds are that you do have some inspiration and ideas and you do actually enjoy, you know, making music yourself. Where the frustration comes in is that software works in ways that are often pretty foreign to the way we hear music. And that’s why Rhythmiq is part of a promising direction in adding intelligence to the music.

In short, this isn’t about making you dumber. It’s about making your music software smarter – more like you. Even as a beginner, you’re already pretty damned smart when it comes to understanding rhythm. (Seriously. Humans are amazing.)

Anyway, that’s the concept. Actually making this work involves some deep research and technology on one side, and requires some extensive testing in user music making on the other. I’ll be investigating both sides of that shortly. (I’ve already started looking at pre-release versions of the software.)

One note – this does still rely on audio content. That means you get some of the audible artifacts of deriving portions of the sound from the larger source material, which gives the loops some of that lo-fi, IDM character – which you might love or not. There also seems to be potential in driving variations with MIDI (or other timing information) alone, and then triggering slices in a more conventional way.
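If you’re curious what that alternative looks like, here’s a bare-bones sketch of conventional slicing and retriggering – keep the audio intact, and only vary when and which slices play. This is a generic technique, not a description of Rhythmiq, and the function names are made up for the example:

```python
# Generic slice-and-retrigger sketch – not Rhythmiq's approach. It avoids
# resynthesis artifacts because slices play back untouched; the trade-off is
# that variation is limited to whole slices.
import numpy as np

def slice_loop(audio, num_slices):
    """Split a mono loop (1-D array of samples) into roughly equal slices."""
    return np.array_split(audio, num_slices)

def retrigger(slices, order):
    """Reassemble slices in a new order (a list of slice indices)."""
    return np.concatenate([slices[i] for i in order])

# Example with a silent one-second stand-in for a real loop
sr = 44100
loop = np.zeros(sr, dtype=np.float32)
slices = slice_loop(loop, 16)
variation = retrigger(slices, [0, 1, 2, 3, 0, 1, 6, 7, 8, 8, 10, 11, 12, 3, 14, 15])
```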

But this is a huge leap forward for Accusonus’ technology, and delivers on some of what we saw previously in their Regroover plug-in. (See links below, which also go into some of the AI behind this.)

Also, stay tuned, as I’m part of a team continuing to explore the applications of AI and music. Following our work with GAMMA Festival in St. Petersburg, Russia, we head next to a partnership in November with MUTEK.JP in Tokyo, again pairing data scientists and technologists with musicians and curators and lots of people fitting several of those descriptions at once.

Rhythmiq is available today. It’s US$99 through the end of October, $149 after that. And you can try a 14-day trial version, so you don’t have to trust me or the developers or anyone else about how well it works; you can find out for yourself.

You’ll be better off in certain hosts than others – yep, try Reaper and its free evaluation version if all else fails. According to Accusonus:

Compatible and fully tested: Ableton Live 10, Apple Logic Pro X

Compatible: FL Studio 20, Presonus Studio One, Cockos Reaper

https://accusonus.com/products/rhythmiq

There are a couple of marketing videos, but I actually think you should start with the playlist of tutorial videos to see how this works – especially if you’re trying the demo:

https://www.youtube.com/playlist?list=PLB4-ankwrMgtzlDD3bUORWEsV0NfM-xAg

Here are the developers talking a bit about the thinking that went into this; I’ll try to dig a little deeper with them about how it all works and why they chose this direction:

https://www.youtube.com/watch?v=taWPBVb7MeU

Previously:

https://cdm.link/2017/11/try-ai-remixing-regroover-tips-exclusive-sounds/
https://cdm.link/2017/11/accusonus-explain-how-theyre-using-ai-to-make-tools-for-musicians/
https://cdm.link/2019/08/making-ai-stage-gamma-lab/