5/10/2025
Technology
4 min read
By Lunar Boom Music
Can You Tell if a Song Was Made by AI? Most People Can’t.

Turns out, spotting an AI-made track isn’t as easy as you’d think. An informal test run by MIT Technology Review found that most people struggle to tell the difference between songs created by AI platforms like Suno and Udio and tracks made by human musicians.

How AI music is actually made

Unlike human musicians, who typically build a song from chords, drums, and melodies, AI music models work in reverse: they begin with pure noise and use text prompts plus patterns learned from training data to sculpt that noise into a finished waveform. This approach is called a diffusion model, and it’s behind some of the most advanced AI music out there.

These models have been trained on millions of labeled clips, which helps them build completely new tracks from scratch based on just a few words.
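To make the noise-to-waveform idea concrete, here’s a toy sketch of the core diffusion mechanic in Python. This is not Suno’s or Udio’s actual architecture; the "noise predictor" here is a stand-in oracle (a real system uses a neural network trained to predict the injected noise from the noisy input, the step, and the text prompt), and the "audio" is just a short sine wave.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean signal": a short sine wave standing in for an audio waveform.
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 5 * t)

def add_noise(x, alpha, rng):
    """Forward process: mix signal with Gaussian noise.
    alpha in (0, 1]; alpha = 1 means pure signal, alpha near 0 means mostly noise."""
    noise = rng.standard_normal(x.shape)
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise, noise

def oracle_noise_predictor(noisy, true_noise):
    """Stand-in for the trained model: a real network would *estimate*
    the injected noise; this oracle simply returns it exactly."""
    return true_noise

# One reverse step: subtract the predicted noise and rescale.
alpha = 0.5
noisy, noise = add_noise(clean, alpha, rng)
predicted = oracle_noise_predictor(noisy, noise)
recovered = (noisy - np.sqrt(1.0 - alpha) * predicted) / np.sqrt(alpha)

print(np.allclose(recovered, clean))  # the oracle recovers the clean signal exactly
```

A real generator repeats that reverse step many times, starting from pure noise and guided by the prompt, until a full waveform emerges.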

Humans vs. AI: The test results

MIT reporter James O’Donnell ran an experiment: he generated 30-second AI tracks in 12 different genres using Udio, mixed them with real human-made songs, and had newsroom colleagues try to guess which was which.

The average score? Just 46%. That’s basically random guessing.

Instrumental genres were especially tricky—people consistently misidentified AI-generated classical piano, jazz, and pop. Even experienced musicians didn’t do much better: one composer got 50%, and a creativity researcher scored 66%.
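The “basically random guessing” claim is easy to sanity-check: with two labels (AI vs. human), a coin-flipping listener averages about 50%, so a 46% mean score sits right at chance. A quick simulation (the track count of 24 here is an assumption for illustration, not the article’s exact playlist size):

```python
import random

random.seed(42)

def simulate_coin_flippers(n_tracks=24, n_listeners=10_000):
    """Average accuracy when every AI-vs-human label is a 50/50 guess."""
    total = 0.0
    for _ in range(n_listeners):
        correct = sum(random.random() < 0.5 for _ in range(n_tracks))
        total += correct / n_tracks
    return total / n_listeners

avg = simulate_coin_flippers()
print(round(avg, 2))  # hovers right around 0.50
```

Against that baseline, 46% means the newsroom did no better than flipping coins.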

What this means for music

The fact that people can't consistently detect AI music raises some big questions. If most listeners can’t tell the difference, where do we draw the line between “real” and “generated”? And should we even care—especially when some of these tracks are genuinely enjoyable?

While major record labels are currently suing both Suno and Udio over alleged copyright misuse, these platforms insist they have filters in place to prevent reproducing protected works.


Read the full breakdown over at MIT Technology Review.

AI music · Udio · Suno · music blind test · AI vs human music · diffusion models