The most talked-about song of the month so far hasn’t been the Foo Fighters’ new single or anything from Taylor Swift but a track that featured the vocals of both Drake and The Weeknd while simultaneously featuring nothing from either artist.
If you’re confused, just wait.
Heart on My Sleeve was a creation of someone named Ghostwriter977 who used artificial intelligence to mimic the vocal styles of both performers. The result was a brand new song born out of software that really does sound like Drake and The Weeknd spent the weekend in a studio together.
Ghostwriter977 posted the track on all the streaming music services (Spotify, Apple Music, YouTube, Amazon, SoundCloud, Tidal, and Deezer) and saw it played and viewed hundreds of thousands of times. A TikTok post was streamed 15 million times. One Twitter post received 20 million clicks.
Universal Music, the home label of both Drake and The Weeknd, complained very loudly and Heart on My Sleeve was taken down. But because the internet is forever, you can find it with a quick Google search. Universal condemned the practice of using real songs by real artists to train AI to create new and different songs, calling this “both a breach of our agreements and a violation of copyright law.” The label also took a shot at streaming music services, saying that they have “legal and ethical responsibility to prevent the use of their services in ways that harm artists.”
There is so much to unpack with this situation.
Drake and The Weeknd aren’t the only acts to find fake versions of themselves online. Rihanna, Jay-Z, Kanye West, Ariana Grande, and Eminem are among the other artists who have been cloned using generative AI. Imagine being sent a link to a song where that’s you but not you. Would you feel violated and ripped off?
In the case of a Lady Gaga/Lana Del Rey creation, both singers gave their blessing and apparently love the results. It’s unclear if they’re being compensated.
The legal questions surrounding these fake songs have been bubbling up for the last couple of years.
First, copyright law is murky when it comes to these new AI songs. Neither Drake nor The Weeknd wrote or sang the song. They had nothing to do with it and have no compositional or performance claim. Everything about Heart on My Sleeve was created by a machine without any input from them. The final product just happens to sound eerily familiar. It’s also disturbing that the topic of the song is Selena Gomez, who once dated The Weeknd. The voices on the song trade verses about her.
Since many territories (including Canada) only consider music created by a human copyrightable, the legal implications are unclear. There are no laws specific to AI creations (although the EU is working on it). As the person who programmed the AI, does Ghostwriter977 have a claim to ownership of the song? It’s possible, depending on how you interpret the laws. For example, you might argue that Heart on My Sleeve was a collaboration between human and machine. Then there’s something called “transformative parody,” which is recognized as legal. But what exactly is meant by “transformative?” The legal system has yet to be tested on that.
Another issue has to do with image and likeness. Up until now, third parties have had to be very careful when appropriating someone’s characteristics for uses for which permission has not been granted. Imagine how you would feel if you stumbled on some virtual version of yourself, one that looked and sounded like you but was doing things you’d never do and saying things you’d never say. And if your online clone says something libellous or defamatory, who will answer for that?
Other artists such as Brian Eno, Peter Gabriel, and David Guetta are bullish about the prospects of AI being used for the creative good. Guetta has gone so far as to say the future of music is in AI. And they might be correct if certain precedents are considered.
Back in the very late 1970s and early ’80s, technology made it possible for artists to deconstruct songs into samples which were then reassembled into new songs. Sampling became an essential creation tool and after a period of legal ambiguity is now an accepted part of music composition. If you follow the rules and procedures and pay for use of the sample, no problem.
Sampling gave birth to something called an “interpolation,” where an older song is incorporated into the foundation of something brand new. An excellent example of an interpolation is the use of Rick James’ 1981 song Super Freak. It was recycled in a crafty way for MC Hammer’s U Can’t Touch This (1990) and more recently for Super Freaky Girl by Nicki Minaj (2022). The rightsholders of Super Freak (now the Hipgnosis Songs Fund) receive revenues from those interpolations.
Samples and interpolations are settled law. Could the same happen with AI? I’d bet on it.
Say you have a particularly pleasant singing or speaking voice. You may soon be able to license your voiceprint to a company that would then use it to do voiceovers and narrations. Such companies already exist. This leads to the possibility of Morgan Freeman narrating documentaries about penguins for the next 100 years.
Or we could see more projects like this. A British band called Breezer was tired of all the promises of an Oasis reunion so they enlisted AI to create the next best thing. They figured out how to create something akin to what the real Oasis sounded like between 1994 and 1996 and the results are excellent. Even Liam Gallagher approves: “It’s better than all the other snizzle out there. I sound mega.”
You just know someone is working on a Beatles reunion album right now. Good or bad? Well, if licensing AI projects can be worked out, there could be a lot of money to be made, just like artists are making money from samples and interpolations.
But let’s get slightly dystopian again. If anyone can use AI to write a song, streaming music services will likely soon be flooded with new tracks written by machines with the help of a human programmer. The number of songs in the streamers’ libraries will jump from the current 100 million-ish to something much, much higher, making it harder for everyone to rise above the noise.
Still, some of these songs will become hits. What then? Will record labels establish AI departments for the express purpose of creating and promoting artificial stars? You bet. Revenue-generating music without having to deal with pesky musicians and all the overhead that goes along with them. This will give birth to a new generation of creatives who make music without learning how to play a single note. Again, good or bad? We’ll see.
It’s getting easier to train AI, too. You feed it music — say, from Spotify — and the program analyzes the millions of data points within the song. From there, it can synthesize something new incorporating the characteristics of what it heard. Google even has a new AI that will write a song for you based on instructions given through nothing more than text. Very soon, we’ll be able to type in something like “Write a song that riffs like Metallica but has a vocal like Madonna.” Boom. Done.
This goes far beyond just music, too. Any kind of creative work, from images to writing, can be used to train an AI. There’s a project called Have I Been Trained that allows users to figure out if someone has used their copyrighted work for AI purposes without permission. Expect more of this type of policing.
Google CEO Sundar Pichai appeared on 60 Minutes earlier this month. He believes that AI will eventually be as important to the human race as fire and electricity. He also believes that the time to start creating rules and laws is now. Companies, organizations, and governments must come together to ensure that the amount of evil done using AI is kept to a minimum.
Meanwhile, AI will only become more sophisticated and will take the place of more and more humans. Who will be the first to experience disruption? Artists. Certain types of writers. People in knowledge industries. Creative types. Of course, new jobs will arise as a result of AI. Hey, there are already openings for a new gig called “AI prompter.”
If you were around in the mid-90s when this new thing called the “internet” was starting to become popular, you might remember thinking “This is cool. It’s going to change a lot of things.” But even the wildest imaginations could not have predicted how the internet has managed to reshape humanity in such a short period of time.
I’ve got the same feeling about AI. It could actually open up new frontiers in music. But for everything else, I’m not nearly as optimistic.
Alan Cross is a broadcaster with Q107 and 102.1 the Edge and a commentator for Global News.
Subscribe to Alan’s Ongoing History of New Music Podcast now on Apple Podcast or Google Play