# AI Is Eating the Music Industry — And Musicians Are Fighting Back

In April 2024, a guy named Willonius Hatcher typed a prompt into Udio — an AI music generator he'd probably learned about that week — and created "BBL Drizzy," a parody track aimed at Drake. It went viral during the Drake-Kendrick Lamar feud, racking up 23 million views on Twitter and 3.3 million SoundCloud streams in its first week.
A few months later, an Austrian producer called Butterbro used Udio to create "Verknallt in einen Talahon" (In Love with a Talahon). It became the first AI-generated song to chart in the German Top 50.
These are fun stories. They're also warning shots.
The technology that lets anyone create a viral hit with a text prompt is the same technology that threatens the livelihood of every working musician on earth. And the music industry — having watched visual artists get steamrolled by Midjourney and Stable Diffusion — isn't planning to go quietly.
## What Suno and Udio Actually Do
Let's start with the products, because they're both impressive and terrifying depending on your relationship to music.
**Suno** launched its public web app in late 2023 and quickly became the most popular AI music generator. You type a text prompt — genre, mood, lyrics, vibe — and get back a complete song with vocals, instrumentation, and production. The output quality is startlingly good. Not "good for AI" good. Actually good. Tracks you could play for someone without mentioning AI, and they wouldn't question it.
Suno raised a $125 million round led by Lightspeed Venture Partners in May 2024. Their approach is consumer-first: make it dead simple for anyone to create music. No musical training. No instruments. No recording studio. Just words.
**Udio** was founded in December 2023 by four former Google DeepMind researchers, including CEO David Ding. Backed by Andreessen Horowitz (a16z), will.i.am, Common, and Instagram co-founder Mike Krieger, Udio produces music that's often indistinguishable from human-created tracks. Their technology generates music from text prompts with fine-grained control over genre, style, and structure.
Both platforms initially offered generous free tiers — Udio let users generate 600 songs per month for free at launch. The strategy is classic platform economics: get users hooked on creation, then monetize through subscriptions and premium features.
The technical approach behind both tools hasn't been fully disclosed, but the general architecture is similar to image generation: train a model on an enormous dataset of music, learn the statistical patterns of genre, melody, harmony, and production, then generate new compositions that follow those patterns. The quality of the output depends directly on the quality and quantity of the training data.
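To make "learn the statistical patterns, then generate" concrete, here's a wildly simplified toy: a first-order Markov chain over chord symbols. This is emphatically not how Suno or Udio work — they use large neural networks trained on audio — but the core loop is the same shape: estimate the statistics of a training corpus, then sample new sequences that follow those statistics. The tiny "corpus" below is invented for illustration.

```python
# Toy "learn patterns, then generate" sketch — a first-order Markov chain
# over chord symbols. Illustrative only; real AI music models are large
# neural networks trained on audio, not chord counts.
import random
from collections import defaultdict

# Hypothetical training data: chord progressions from a tiny corpus.
corpus = [
    ["C", "G", "Am", "F"],
    ["C", "Am", "F", "G"],
    ["Am", "F", "C", "G"],
]

# "Training": count which chord tends to follow which.
transitions = defaultdict(list)
for song in corpus:
    for current, nxt in zip(song, song[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=4, seed=None):
    """Sample a new progression that follows the learned statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: this chord was never followed by anything
            break
        out.append(rng.choice(choices))
    return out

print(generate(seed=42))  # a four-chord progression shaped entirely by the corpus
```

Note what the toy makes obvious: the model can only emit transitions it observed, so everything about the output quality flows from the training data — which is exactly the point the labels are litigating.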
And that's where the war begins.
## The Lawsuits
In June 2024, the Recording Industry Association of America (RIAA), along with Universal Music Group (UMG), Sony Music Entertainment, and Warner Music Group — the three major record labels that collectively control roughly 70% of the global recorded music market — filed separate copyright infringement lawsuits against both Suno and Udio.
The allegations are straightforward: both companies trained their AI models on copyrighted music without permission. The labels claim the training datasets included massive catalogs of copyrighted songs, and that the AI systems can generate output that closely mimics specific copyrighted works.
The RIAA's press release was blunt: "These services are built on the backs of our members' work. Training AI on copyrighted music without authorization is infringement, period."
The damages sought are staggering. Under US copyright law, statutory damages can reach $150,000 per work infringed. If the training dataset included millions of songs — which it almost certainly did — the potential liability is in the billions.
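The back-of-envelope math shows why "billions" is, if anything, conservative. Only the $150,000 statutory ceiling (the willful-infringement maximum under 17 U.S.C. § 504(c)) comes from the text above; the catalog sizes are hypothetical round numbers:

```python
# Back-of-envelope statutory damages exposure. The $150,000 per-work
# ceiling is the willful-infringement maximum under 17 U.S.C. § 504(c);
# the works counts are hypothetical round numbers for illustration.
MAX_STATUTORY_PER_WORK = 150_000  # dollars per work infringed

for works in (10_000, 1_000_000, 10_000_000):
    exposure = works * MAX_STATUTORY_PER_WORK
    print(f"{works:>10,} works -> up to ${exposure:,}")
```

Even a dataset of just ten thousand songs puts the theoretical ceiling at $1.5 billion; a million songs puts it at $150 billion. Courts rarely award the maximum, but the labels don't need the maximum — the exposure itself is the leverage.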
Suno and Udio's defense mirrors what we've seen in the image generation cases: they argue that training on copyrighted material is transformative and falls under fair use. The models don't store or reproduce the original songs — they learn patterns and generate new compositions. The outputs are original works, not copies.
Sound familiar? It should. It's the exact same argument OpenAI is making in the NYT case. Getty is making the opposite argument against Stability AI. The legal theory that underpins the entire generative AI industry — that training on copyrighted material is fair use — is being tested across every creative domain simultaneously.
## The Artists' Perspective
I've talked to musicians about this, and the emotional temperature is different from what you see in the AI-art debate. Visual artists are angry. Musicians are scared.
Here's why. A freelance illustrator losing work to Midjourney is losing individual commissions. A working musician losing work to Suno is potentially losing their entire career trajectory. Session musicians, jingle composers, background music producers, stock music creators — these are the people who make a living creating the kind of music that AI generates best: functional, professional, genre-appropriate music for specific use cases.
The stock music market — the industry that provides background music for YouTube videos, podcasts, corporate presentations, and ads — is worth roughly $1.5 billion annually. This is the market most immediately threatened by AI music. Why pay $50-200 for a stock music license when you can generate exactly what you need for a monthly subscription?
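The buyer's math is blunt. The per-license range comes from the text above; the $10/month subscription is an assumed round figure for illustration, not a quote of either company's actual pricing:

```python
# How many stock-music purchases must an AI subscription displace per year
# before it pays for itself? License prices come from the $50-200 range
# above; the $10/month subscription is an assumed illustrative figure.
license_low, license_high = 50, 200   # dollars per stock-music license
annual_subscription = 10 * 12         # assumed $10/month, annualized

best_case = annual_subscription / license_high   # cheapest break-even for AI
worst_case = annual_subscription / license_low
print(f"Break-even: {best_case:.1f} to {worst_case:.1f} tracks per year")
```

Under those assumptions, a buyer who needs even three tracks a year comes out ahead generating them — and most YouTube channels, podcasts, and agencies need far more than three.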
Production music library companies like Epidemic Sound, AudioJungle, and Artlist are already feeling the pressure. Some have responded by incorporating AI tools into their platforms. Others are positioning themselves as "human-created" alternatives, hoping that matters to buyers.
For top-tier artists — the Taylor Swifts and Drakes of the world — AI music isn't an existential threat. Their value comes from identity, performance, and cultural significance, not just audio production. Nobody's going to an AI-generated concert.
But for the vast middle class of musicians — the session players, the composers, the producers, the songwriters who work behind the scenes — AI represents a direct assault on their economic viability. These people don't have brands. They have skills. And their skills are being automated.
## Both Sides of the Argument
I want to be fair here, because both sides have legitimate points.
**The case for AI music tools:**
Creation is being democratized. People who've always heard music in their heads but couldn't play instruments can now bring those ideas to life. That's genuinely meaningful. Not everyone who wants to create music can afford lessons, instruments, and studio time.
The tools also unlock new creative possibilities for existing musicians. AI as a collaborator — generating ideas, suggesting arrangements, producing backing tracks — can enhance human creativity rather than replace it. Some artists are already using Suno and Udio as part of their creative process, treating AI outputs as starting points that get refined and personalized.
And there's a philosophical argument about what copyright protects. Style isn't copyrightable. You can't own a genre, a chord progression, or a production technique. If an AI generates music that sounds like jazz but doesn't reproduce any specific jazz recording, what exactly was infringed?
**The case against:**
The training data issue is real. These models were trained on copyrighted music. The labels have never licensed their catalogs for AI training. Saying "we learned patterns, not copies" is clever lawyering, but the patterns were learned from specific copyrighted works. The musicians who created those works were never asked, never compensated, and never given the choice to opt out.
The economic harm is also real. Every AI-generated track that replaces a licensed stock music purchase is money that doesn't flow to a human creator. The music industry has already been through the Napster era, the streaming era, and the playlist-ification era. Each transition reduced per-song revenue for artists. AI is the next reduction — and this time, the songs are being generated, not just distributed.
There's also the question of cultural value. Music created by AI doesn't come from lived experience, emotion, or artistic intention. It's statistically likely sound. That distinction might not matter for a corporate presentation's background music. It matters enormously for the art form itself. If we accept AI-generated music as equivalent to human-created music, we're making a statement about what music is and what it's for.
## Where the Industry Is Heading
The lawsuits will take years to resolve. In the meantime, the technology will keep improving and the market dynamics will keep shifting. Here's what I expect:
**Licensing deals will happen.** Just as the streaming era eventually produced licensing agreements between labels and Spotify, the AI era will produce licensing agreements between labels and AI music companies. The labels have the leverage — they control the training data that makes the products good. The question is the price, not whether deals happen.
**A two-tier market will emerge.** Premium content — music with named artists, emotional resonance, cultural significance — will remain human-created. Functional content — background music, stock music, jingles, ambient tracks — will increasingly be AI-generated. The middle ground is where the pain concentrates.
**"Human-created" becomes a marketing label.** The same way "organic" became a premium label in food, "human-created" or "AI-free" will become a premium label in music. Some platforms and labels are already positioning for this. Whether consumers care enough to pay extra is the open question.
**Copyright law gets rewritten.** The current copyright framework wasn't designed for AI. Fair use doctrine was meant to balance between human creators, not between humans and machines. Whether through court decisions or legislation, the legal framework will evolve. Probably slowly. Probably messily.
## My Take
I think AI music tools are extraordinary technology and a genuine threat to working musicians. Both things are true simultaneously.
The democratization argument is real but incomplete. Yes, everyone should be able to create music. But "everyone can create" doesn't have to mean "nobody gets paid." The goal should be a system where AI tools expand creative possibility while compensating the artists whose work made those tools possible.
That means licensing. It means revenue sharing. It means treating training data the same way we treat sampling — if you use someone's work to create something new, they deserve credit and compensation.
The music industry fought piracy with lawsuits and eventually won through licensing and streaming. They'll fight AI the same way. The lawsuits buy time. The licensing deals create the sustainable model.
In the meantime, a teenager in their bedroom can type "melancholic indie folk with fingerpicked guitar and female vocals about leaving a small town" and get a song that would've cost $5,000 in a studio. That's wild. It's also complicated. Welcome to the future. The royalty checks are in the mail. Probably.