There’s an argument that in 2023, streaming services aren’t just competing to be loved by listeners, but also by labels.

One major label in particular, Universal Music Group, has been campaigning to push the industry towards ‘artist-centric’ payouts. While its conversations with the biggest DSPs remain private, mid-tier services Deezer and Tidal were swift to sign on as partners for UMG’s drive.

Yesterday, Deezer made an announcement that fits neatly with the major label’s goals. It has promised to detect AI-generated content on its service and “develop a system for tagging music that has been created by generative AI, starting with songs using synthetic voices of existing artists”.

The company said its initial goal is to ensure that artists, labels and listeners alike know “what’s ‘real’ or AI-generated on the platform”, as well as to reduce fraud. However, Deezer added that its longer-term goal is to “develop a remuneration model that distinguishes between different types of music creation”.

Details on how Deezer plans to reliably identify AI-generated music are scarce in the announcement, beyond a reference to its Radar content-identification technology. Of course, it makes sense to be as publicly vague as possible about any system being used to battle fraudsters, since the more specific you are, the more information they’ll have to counter it.

We have some other questions, starting with this idea of a clear and easily identifiable demarcation between ‘real’ and ‘AI-generated’ music. That distinction is already blurry, and it will only become more so as more assistive AI tools emerge, and as more (real) artists use them in their music or collaborate with AI music startups.

Even something that sounds simple, like “songs using synthetic voices of existing artists”, is not. Cloning an artist’s voice and using it to create and release tracks without their permission is, as we’ve written before, a clear act of bad faith – even if the specific legalities of it are still being debated.

Yet some artists – Holly Herndon, Grimes – are already cloning their own voices for other artists to make music with, backed by proper licensing agreements.

As more artists do that, Deezer and its DSP peers will need to differentiate between licensed and unlicensed voice clones, which is surely less about whizzy identification tech, and more about responding to takedown requests from artists and rightsholders.

Finally, Deezer’s goal of “a remuneration model that distinguishes between different types of music creation” certainly chimes with UMG’s ambitions, but raises its own questions and concerns.

How much human input is required for a track to be judged ‘real’ (and so earn the highest tier of royalty), and will any system built by a DSP really be able to measure this? It seems a stretch, and that’s before we get mired in philosophical questions.

Is a bog-standard EDM banger or a generic piano instrumental made entirely by a human musician intrinsically more valuable than equivalent tracks made by (or with) an AI, even if the latter are better and/or more popular? And yes, ‘better’ is a subjective can of worms all by itself…

The current dominant streaming model values music purely in terms of consumption: the more streams it gets, the more valuable it is, and the more money it makes. If we’re going to develop a new model that “distinguishes between different types of music creation”, the questions above can’t be dodged.

It’s good that Deezer is preparing to grapple with them, but the company and its peers can’t take these challenges lightly.
