Here’s How AI Can Predict Hit Songs With Frightening Accuracy

New AI technology predicts hit songs—by listening to someone’s body. 


Sophie Bushwick: Last month, AI researchers claimed an impressive breakthrough. They published a paper showing that AI can predict, with 97 percent accuracy, whether a given song will be a hit. And it does this by measuring how the listener’s body responds to the music.

Lucy Tu: But it might be too soon to anoint AI as the next big talent scout for the music industry. I’m Lucy Tu, the 2023 AAAS Mass Media fellow for Scientific American.

Sophie Bushwick: I’m Sophie Bushwick, tech editor at Scientific American. You’re listening to Tech, Quickly, the all-things-tech part of Scientific American’s Science, Quickly podcast. 




[Intro music]

Bushwick: I thought the music industry had been using AI to create songs and analyze them for a while now. So what’s so special about this new approach?

Tu: Great question. Streaming services and music industry companies have already been relying heavily on algorithms to try and predict hit songs. But they've focused primarily on characteristics like a song's artist and genre, as well as the music itself: aspects like the lyrics or the tempo. But even with all of that data, the existing AI algorithms have only been able to correctly predict whether a song will be a hit less than 50 percent of the time. So you're honestly better off flipping a coin.

Bushwick: Yeah, basically random-chance odds.

Tu: And so this new approach, it's different for a few reasons. One being its near-perfect accuracy: a 97 percent success rate is much, much higher than any approach we've seen before. And it's also unique because the study claims to train its AI on the brain data of listeners rather than a song's intrinsic features, like its danceability or its explicitness.

Bushwick: That sounds like science fiction, like it’s AI reading your mind to predict whether you’ll like a song. But I can’t help but notice that it claims to use brain data. So what do you mean by that?

Tu: Yeah, great catch! So at face value, the researchers in this recent study say they measured listeners' neurophysiological response to different songs. And whether intentionally or not, a lot of popular news outlets picked up on the neuro part of neurophysiological response and assumed that meant the researchers directly tracked brain activity through an fMRI scan or EEG recording, which they didn't.

Bushwick: What did they use?

Tu: So what they did was they had these listeners wear a wearable device while they were listening to songs, sort of like an Apple Watch or a Fitbit, something that can track your cardiac activity, so your heart rate, for instance. And they collected this cardiac data and used it as a proxy for brain activity by putting it through this commercial platform, Immersion Neuroscience, which claims to be able to measure emotional resonance and attention by using cardiac data.

Bushwick: So essentially they're taking your heart rate and your blood flow, and then they're translating it into a measure that they say indicates what's going on in your brain.

Tu: Exactly. And this measure of what's going on in your brain is called immersion. I talked to some researchers who were a little bit skeptical about the use of cardiac data as a proxy for neural response, especially because this measure of immersion that the researchers talk about hasn't really been discussed by any other researchers in peer-reviewed publications.

Bushwick: So it's been studied by the people who work at the company that uses it, but not really anyone outside it.

Tu: Exactly.

Bushwick: Gotcha. 

Tu: And I will say also that the lead author of this most recent study has some financial ties to the commercial platform that was used, Immersion Neuroscience. He's the co-founder of the company and also its chief immersion officer, which is another concern that some of the researchers I talked to raised.

Bushwick: So if immersion is such a controversial measure, then why don't the scientists just stick someone into an MRI machine that would actually scan their brain? Because this has been done before: in 2011 researchers from Emory University put teenagers through an MRI machine to see how their brains reacted to music, and they did make somewhat accurate predictions of songs' sales based on these brain scans. So why are the researchers in this study choosing to do it with this other measure that hasn't been proved in the same way?

Tu: I think the key here is the wearable device component that I talked about earlier. So that study you mentioned, like you said, they put teenagers through an MRI machine. Well, fMRI machines take a long time, 45 minutes to an hour just to get one scan of the brain. And also, people can get claustrophobic. It's not comfortable to sit in an fMRI for an hour and listen to music.

Bushwick: It’s a long time.

Tu: Yeah, a long time to be confined in this cold chamber. I mean, you'd think that maybe it would influence the way, you know, people listen to music if they're stuck in this cold space for that extended period. It's also just impractical to put a bunch of people through an fMRI just to get a few brain scans and then use that to train an AI algorithm to predict hit songs. So this study, what its value-add is, is that participants use a wearable device, something easily accessible, something that can be super cheap. A lot of people already own wearable devices like the ones used in this study.

Bushwick: I’m wearing one, yup.

Tu: Me too!

Tu: Um, so the idea is that if we can actually predict hit songs just with the data that's given to us by a wearable device, like the heart rate, like the blood flow, we might be able to widely collect data so people can have personalized music, movie, et cetera recommendations. It's just a lot more accessible than the traditional brain scan approaches that have been done before.

Bushwick: But see, that actually does freak me out a little bit, because music platforms like Spotify are already collecting a lot of personal information about their users. So what would it mean for them to also be eavesdropping on your heart rate and your breathing rate? I mean, it's almost as if they're trying to read your mind.

Tu: It's kind of discomforting, honestly. Don't get me wrong, I would love, in some ways, if my streaming services just automatically knew somehow what I wanted to listen to in that moment. You know, when I'm sad, they give me a playlist of heartbreak songs. And when I'm really happy, or, you know, in the car with friends, they give me that carpool karaoke playlist. I love that on one hand. But the idea that they're giving me these recommendations based on literally reading my mind raises a lot of ethical questions, which is something that also came up in quite a few of the conversations I had with researchers and experts in data privacy. One big question that I actually raised with the lead author of this study was: Well, how do you actually envision this service being used? And he said, of course, we would go through the necessary data privacy channels; this would be an opt-in service. So only people who explicitly say, "I accept Spotify reading my mind," would have their minds read. And then I talked to another data privacy expert who countered and said, well, how many of us actually read the terms and conditions before we accept them? I don't know about you, but absolutely not.

Bushwick: Am I going to scroll through hundreds of pages of permissions? No, I usually just click OK. 

Tu: And that's what I'm saying. I think these terms and conditions could tell me I'm signing away the rights to my firstborn child.

[laughter]

Tu: So the data privacy expert I spoke to said that that's a huge consideration we have to think about, not just when we're implementing this technology but when we're developing it. And so we have to think about what it would mean in terms of educating consumers if we were to actually make these AI algorithms more accessible.

Bushwick: So before we even start worrying about reading the terms and conditions and having our Fitbits spy on us and predict what songs we want to listen to, is the technology even ready for that yet? Are there other steps we would have to go through before it's ready to roll out beyond just a study sample size?

Tu: Absolutely. So one big limitation of this study is that it used a pretty small sample of, I think, less than 30 people. The study does claim that even that small sample size is enough for them to do this process they call neuroforecasting, which is taking a small pool of people and using the data from that small pool to make predictions about a much wider audience, a much wider market. But not everyone's fully convinced. Some researchers said they would love to see the findings from this study replicated, first to confirm the validity of that measure we talked about earlier, immersion, and the validity of using cardiac data as a proxy for brain activity. And then this pool of 30 was recruited through a university, so they had a lot of younger listeners. My music preferences and my mother's music preferences are very, very different. The authors even note themselves that they didn't have a lot of racial and ethnic diversity, so they might not have captured the cultural nuances, for instance, that might go into music preferences. So some other researchers I spoke to said they would love to see the findings replicated with larger, perhaps more diverse samples, so they can verify that the preferences used in this study to predict hit songs are actually replicable with other groups that might have entirely different preferences when it comes to music.

Bushwick: Science, Quickly is produced by Jeff DelViscio, Tulika Bose, Kelso Harper and Carin Leong. Our show is edited by Elah Feder and Alexa Lim. Our theme music was composed by Dominic Smith.

Tu:  Don’t forget to subscribe to Science, Quickly wherever you get your podcasts. For more in-depth science news and features, go to ScientificAmerican.com. And if you like the show, give us a rating or review!

Bushwick:  For Scientific American’s Science, Quickly, I’m Sophie Bushwick. 

Tu:  I’m Lucy Tu. See you next time!

Sophie Bushwick is tech editor at Scientific American. She runs the daily technology news coverage for the website, writes about everything from artificial intelligence to jumping robots for both digital and print publication, records YouTube and TikTok videos and hosts the podcast Tech, Quickly. Bushwick also makes frequent appearances on radio shows such as Science Friday and television networks, including CBS, MSNBC and National Geographic. She has more than a decade of experience as a science journalist based in New York City and previously worked at outlets such as Popular Science, Discover and Gizmodo. Follow Bushwick on X (formerly Twitter) @sophiebushwick.

Lucy Tu is a freelance writer and a Rhodes Scholar studying reproductive medicine and law. She was a 2023 AAAS Mass Media Fellow at Scientific American.
