This week, the RIAA is suing the generative AI music startups behind Suno and Udio, claiming they’ve illegally trained their models on copyrighted music. The lawsuit reveals that you can enter simple prompts and get what sounds like near-perfect clones of existing songs, down to the melody, harmony, lyrics, and arrangements. And you can hear those results for yourself.

The RIAA’s lawsuit was filed on behalf of Sony Music Entertainment, UMG Recordings, Inc., and Warner Records; see the press release. The labels point to specific examples:

There’s no direct legal precedent for AI and copyright; the closest legal history is around sampling. For some background on digital sampling:

How the Compulsory Licensing System Has Impacted Sampling in Today’s Music Industry and Potential Calls for Reform [USC]

Digital Music Sampling and Copyright Policy – A Bittersweet Symphony? Assessing the continued legality of music sampling in the United Kingdom, The Netherlands, and the United States – Melissa Hahn’s excellent overview, PDF

So this case against Udio and Suno seems to have some real weight: it’s easily understandable to lay people and the broader public, it makes a fairly clear connection to the source material (maybe even clearer than with digital sampling), and so it could be a breakthrough in both public perception and legal standing.

The results should be no surprise to anyone who understands how the technology works. The prompt-driven “AI” music tools currently deployed are not capable of “generating” music in any creative sense; they’re mashing together large data sets. “Ah, but aren’t humans doing the same?” Well, no, they aren’t – have you ever listened to a young child sing, often before they can even speak? Music cognition is built on complex networks of perception and creative impulses – there’s plenty to read up on there – and those work fundamentally differently from how these AI music models work. Essentially, large data-set generative AI derives patterns without any perception of meaning, let alone creative agency. That’s not to say it can’t be interesting – but it’s not the same as human creativity, and if it spontaneously spits out a Jason Derulo song, that’s by design. It also couldn’t do that without the Jason Derulo song (and a lot of other material) feeding it data in the first place.

No, what’s really surprising here is that Suno and Udio seem not to have even tried to cover their tracks – or that people thought these tools were “making” music in the first place. Most experienced observers had figured they were scooping up unauthorized data, because of the results. That unfortunately led to some “journalism” in which musicians who didn’t understand how AI works would listen to the results and compare them to human players – which is a little like playing a record for someone who’s never seen electricity and claiming it’s witchcraft. In reality, these systems rely on training the model with large amounts of data people want – that’s the copyrighted material – and possibly even carefully tuning the model and interface to get those results. And in the pursuit of giving people especially satisfying results, these startups may have given away their own game.

All of that is a fancy way of saying, with a short text prompt, you can get All I Want For Christmas Is You nearly verbatim. (Oh, great. That was definitely worth burning the planet’s ecosystem. More of that damned Mariah Carey song. Can the court give us an injunction against both the AI and the industry while the planet and humanity heal? Bah humbug etc. etc.)

Journalist Jason Koebler has a great write-up with examples and a breakdown of the lawsuit:

Listen to the AI-Generated Ripoff Songs That Got Udio and Suno Sued [404media]

He has video, which may come in handy as it looks like these services are scrambling to patch some of this. Listening to the mangled Beach Boys medleys it spits out is maybe an even better example. Or listen to how “Great Balls of Fire” sort of accidentally samples something else.

Billboard has a great write-up, and the creators responded in a statement to senior writer Kristin Robinson:

Suno’s mission is to make it possible for everyone to make music. Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists. We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.

That would be a compelling argument, except that you can use the tool yourself and reproduce similar results. (Oops.) You’ll notice they try to redirect to the use of specific artists in prompts, not the question of what the training set was or how the training set and output are related. But as Robinson observes, a Suno investor had already admitted to the use of unlicensed materials – in a Rolling Stone profile, investor Antonio Rodriguez said, “Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”

And regurgitating pre-existing content is exactly what the algorithm does. It spontaneously just cribs “Dancing Queen” lyrics – spitting them out over another melody. It doesn’t require artist prompts, either: enter “1950s rock and roll, 12 bar blues, rhythm & blues, rockabilly, energetic male vocalist” and you get the entire hook for Johnny B. Goode.
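
To make that concrete – and this is only a toy sketch, nothing like Suno or Udio’s actual (unpublished) architecture – here’s about the smallest possible “generative” model: it just counts which word follows which in its training lyrics, then samples from those counts. The miniature corpus and the prompt word below are invented for illustration, but the principle is the same one at work here: the model can only recombine what it was fed, and when the training data matching a given context is dominated by one song, sampling reproduces that song.

```python
# Toy illustration of memorization in a generative model (not Suno/Udio's
# actual system): a first-order Markov chain over words. The "corpus" below
# is a made-up stand-in for training lyrics.
import random
from collections import defaultdict

corpus = [
    "go go go johnny go go".split(),
    "go johnny go go johnny b goode".split(),
]

# "Training": for each word, record every word that followed it in the corpus.
transitions = defaultdict(list)
for lyric in corpus:
    for current, following in zip(lyric, lyric[1:]):
        transitions[current].append(following)

def generate(seed, length=10):
    """Sample a phrase by repeatedly picking a successor seen in training."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("johnny"))
# Every output phrase is stitched verbatim from the training lyrics, because
# the model has nothing else to draw on. Scale the same principle up to
# millions of recordings, and a generic style prompt can surface a memorized hook.
```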

That remains the ongoing threat of generative AI, because the whole system was originally engineered to accurately resynthesize its inputs. To quote Douglas Adams: “is this sort of thing going to happen every time we use the Improbability Drive?” / “Very probably, I’m afraid.”

This lawsuit does hopefully help the public understand what AI actually is, and why it’s so important that artists and the public give their consent for its use – or even consider that they don’t want to be a training set at all. But it’s also unsettling for the same reason. Suno and Udio didn’t even bother to protect themselves from litigation – but won’t it be fairly easy for the next round of services to avoid anything this blatant?

Also, will individual artists outside big systems have much control at all? Universal Music Group just announced a deal with BT-founded Soundlabs to build more “artist-friendly” solutions. But are smaller artists likely to be left out of both legal protections and deals they can monetize, much as they’re currently marginalized by streaming services? (For that matter, how confident are artists on majors like UMG in the contractual control they have over these deals? We’ll have to investigate that with some of the artists.) The lawsuit itself says that this kind of generated content will “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out” the original music. Maybe other services will be different, but how exactly? (That is, other than that the industry is making money off those services, whereas here, it isn’t.)

At the very least, this could – emphasis on “could” – be a turning point for the perception of AI in music. Obviously, “AI” didn’t just magically figure out how to write a Mariah Carey Christmas banger. And whatever happens next, understanding that matters.

As a society, our record is already pretty poor – decades later, many people still have a tenuous grasp of sampling. The recognition that the popular “amen break” originated with The Winstons’ drummer Gregory Coleman came largely after Coleman had already died, and without any financial remuneration for his labor.

Or let’s talk about Clyde Stubblefield, James Brown’s drummer. It’s worth revisiting the entire documentary Copyright Criminals; I watched this in Beirut last year during Irtijal Festival. That’s partly because of how dated a lot of the perspectives seem – produced at the height of the Creative Commons love affair with sampling, and very much pre-AI. But Stubblefield himself was mystified by the phenomenon – cued up from that moment:

See also later in the documentary, which talks about masking samples (cough, which is what did not happen with the AI startups), and then more from Stubblefield. All due respect to legends like Hank Shocklee, but Stubblefield sort of upstages the whole documentary – “my music is my life; my music is my breathing.”

But everything in this documentary – the ability of sampling to remake and reimagine music, the new musicianship formed from working with samplers, the political message – is missing in generative AI prompting in tools like Suno and Udio. That’s not a blanket criticism of AI. But in this case, “AI” is being used as a moniker for energy-intensive ways of regenerating slightly botched clones of Mariah Carey.

And if anything deserves a bit of protest, it seems like that might be it – from the full spectrum of AI critics and advocates.

Side note – there’s probably a lot more of this to come. Via Benn Jordan, here’s a case of the apparent use of generative AI to spoof a Bandcamp artist account. (idadeerz.bandcamp.com is the correct one; Bandcamp resolved the issue. Oh and – great music!)