Much of the narrative around ‘deepfaked’ tracks that clone the voices of famous artists is negative, focusing on rights being infringed and fans being fooled.
However, as the nascent industry around licensed voice clones (Holly Herndon, Grimes etc.) shows, there is potential for these deepfakes to be properly licensed, legitimate music. Nobody’s being fooled, and the original artist has given permission and is getting paid.
Is it just independent artists exploring the latter, more positive side of voice-cloning? Seemingly not. The Financial Times reported yesterday that Universal Music Group is in talks with Google to “license artists’ melodies and voices for songs generated by artificial intelligence”.
The report also claimed that Warner Music Group is engaged in similar discussions with Google, and added that YouTube Music boss Lyor Cohen has been working on the project for Google. Nobody is commenting on the record for now.
The FT made it clear that these are early talks, with no product launch imminent, and that artists would be able to opt out if they didn’t want to be part of any such agreement: “The goal is to develop a tool for fans to create these tracks legitimately, and pay the owners of the copyrights for it.”
It would be quite the turnaround for Google: long the target of the music industry’s wrath over everything from piracy links in search results to user-generated uploads on YouTube, it could now become the first major tech ally in rightsholders’ push to bring regulation and licensing to the deepfake / voice-clone space.
(Obligatory reminder here that deepfaked tracks are just one part of what ‘AI music’ is about: there are plenty of other applications for generative AI in music, even though a lot of recent media coverage has focused on clonetracks.)
Earlier this year, Google unveiled its latest AI music model, MusicLM, and made it available for people to play with. The company also has a long history of releasing playful AI-music experiments, with the latest being Viola the Bird in July 2023.
Deals with Google for UMG and WMG could set the parameters for future agreements with other firms in the AI space. That does raise questions about how much agency smaller AI startups and independent labels will have, if the initial agreements are hammered out between the biggest players in their respective industries.
In any case, this dealmaking (Meta and OpenAI will surely be candidates for similar talks) will sit alongside the music industry’s lobbying for regulation of training, transparency and other aspects of generative AI.
That lobbying is being channelled through the Human Artistry Campaign, which was announced in March with a plethora of music industry bodies as founder members, and with a trademark filed by Universal Music Group indicating its role in the background.
This week, the campaign got support from a new quarter: BandLab, which says it’s the first music-creation platform to back it. CEO Meng Ru Kuok announced the news in a presentation at the Ai4 conference. BandLab isn’t formally joining the campaign, but has voiced its support for it.
BandLab launched its first AI-music feature in March 2022. ‘SongStarter’ was built using Google’s TensorFlow tech, and generates beats, melodies and chord changes for musicians to work with. The company now sees an opportunity to present itself as a proponent of “ethical, human-first AI practices”.
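For a flavour of what a TensorFlow-based idea generator of this kind can involve, here is a minimal sketch of a next-note sequence model. To be clear, this is purely illustrative and not BandLab’s SongStarter code: the token vocabulary, model size and sampling loop are all assumptions for the example.

```python
# Illustrative sketch only: a tiny TensorFlow model of the kind that could
# underpin a melody-idea generator. NOT BandLab's SongStarter implementation;
# vocabulary, model size and sampling loop are assumed for illustration.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 130   # assumed: 128 MIDI pitches plus rest / end-of-phrase tokens
EMBED_DIM = 64
SEQ_LEN = 32       # length of the melodic phrase to generate

# A small LSTM that predicts the next note token given the notes so far.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(VOCAB_SIZE),  # logits over the next note token
])

def generate_phrase(seed_tokens, temperature=1.0):
    """Autoregressively sample a phrase of note tokens from the (untrained) model."""
    tokens = list(seed_tokens)
    for _ in range(SEQ_LEN - len(tokens)):
        logits = model(np.array([tokens]))[0, -1] / temperature
        next_token = tf.random.categorical(logits[None, :], num_samples=1)[0, 0]
        tokens.append(int(next_token.numpy()))
    return tokens

# Example: start a phrase from middle C (MIDI note 60) and sample the rest.
print(generate_phrase([60]))
```

A production tool would train a model like this (or something far larger) on licensed musical data and map the sampled tokens back to beats, melodies and chords for the user to build on, which is where the ethics and licensing questions discussed above come in.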
The Human Artistry Campaign will definitely benefit from more members from the technology world: the developers of musical AIs sitting alongside music industry bodies to plot the best path forward.
The major labels’ talks with Google are in the same boat: collaboration not just to crack down on bad actors and tackle infringement, but to sketch out new ways of making, sharing and consuming music that deepen it rather than cheapen it, with artists taking a central role in shaping it all.