Earlier this year the platform pulled a track featuring AI-cloned voices of the performers Drake and The Weeknd.
Daniel Ek told the BBC there were valid uses of the tech in making music - but AI should not be used to impersonate human artists without their consent.
He said using AI in music was likely to be debated for “many, many years”.
Mr Ek, who rarely speaks to the media, said that he saw three “buckets” of AI use:
- tools such as auto-tune that improve music, which he believed were acceptable
- tools which mimic artists, which were not
- and a more contentious middle ground, where music created by AI was clearly influenced by existing artists but did not directly impersonate them.
“Contentious” is putting it mildly …
Six months ago, no one knew what ChatGPT was, let alone how to use it to automate their entire life. Since then, an AI-infused world has gone from a theoretical dystopia to a depressing, ever-changing reality.
Although the basic tech supporting vocal deepfakes has been around for a few years, AI-generated music has finally gone mainstream. In the bright glare of the spotlight, it’s easier than ever to see how AI software could dramatically reshape the way music is conceived and recorded, providing new automated creative tools while threatening entire job categories—and that’s just in the short term.
In 2020, OpenAI put out a tool for creating songs in various artists’ styles, complete with vocals. Shawn Everett, a Grammy-winning engineer and producer, experimented with that tool while working on an as-yet-unreleased song by the Killers. Everett recalls inputting a chord progression that frontman Brandon Flowers had written and instructing the AI to continue it in the style of Pink Floyd, with a certain emotional tenor, only to have the AI spit out unexpected melodies. “What was happening was so different, and was landing in locations that no human being would normally think of, but it still felt rooted in something familiar,” he says.
It’s all happening extremely fast. Peter Martin, an Oscar-nominated film producer, recalls that just three years ago an AI needed around 11 hours of a person’s voice in order to mimic it. “That is now down to less than two minutes,” he says. Tools for creating AI music are becoming less labor-intensive, too. Generating a fake Drake song might involve four or five different AI tools right now, Martin explains, including a lyric generator, a beat generator, a melody generator, vocal cloning, and vocal synthesis. But he’s beginning to see one-stop shops like Uberduck. Plus, AI tools that can emulate hundreds of vintage synths, or combine them into previously unheard sounds, are already commercially available. “AI can create an entirely new rhythm that maybe a human couldn’t have executed,” Martin says. “You can create an audio version of a visual concept: What does a tree sound like?”
Mind-bending …