There are new moves afoot on either side of the Atlantic to avoid some of the negative impacts that creative AI technologies might have on musicians.
In the US, this week sees the introduction of a revised version of the Protect Working Musicians Act. We wrote about the original version of this bill in 2021; it focused on codifying a right for “working artists and independent musicians to band together to negotiate with dominant streaming platforms”.
The politician who introduced that, Representative Ted Deutch, has since left Congress. Now his bill has been reintroduced by Representative Deborah Ross, working with the Artists Rights Alliance and A2IM on the details.
Those details have now expanded to include AI: the quote above still applies, but with “and artificial intelligence developers” added at the end.
As before, the bill would also make it clear that “antitrust laws are no obstacle to these negotiations”, and would enshrine the right of these musicians to “collectively refuse to license their music to a dominant online music distribution platform that refuses to pay market value rates”.
In the UK, meanwhile, today’s development comes from the Council of Music Makers, which is the umbrella body for The Ivors Academy, the Featured Artists Coalition, the Musicians’ Union, the Music Producers Guild and the Music Managers Forum.
It has published ‘five fundamentals for music AI’ ahead of their official unveiling later today at the Ivors Academy’s global summit on music and AI in London. The principles boil down to consent, respect and remuneration.
More specifically, that means individual musicians (we’re using that as shorthand: this covers artists, songwriters and producers alike) consenting to any use of their work to train AI models, rather than having that consent “inferred by rights-holders or technology companies”.
We wrote last week about the template letter that the Council has made available for musicians to make this clear to their rightsholders. It’s also available here.
It also means that their “publicity, personality and personal data rights” should be respected – and the Council would like to see those rights clarified and strengthened by the British government to back this up.
The Council also wants musicians to “share fairly in the financial rewards of music AI” – including from the music output of AIs trained on their work – and to have a say in how the licensing models and revenue sharing from AI music evolve.
Finally, there’s a call for transparency: for AI-generated works to be clearly labelled; for AI companies to explain what music they’ve trained their models on with “complete records of datasets” to prove it; and for rightsholders to be transparent about the terms of (and works included in) their licensing deals with AI companies.
A common narrative in this year’s debates about AI and music has been one of rightsholders defending musicians’ interests, with AI companies in the role of potential villains. There’s truth in this: labels and publishers are most likely to have the resources to hold AI firms to account over training, transparency and remuneration, as well as to take action against those who fall short.
However – and we’re going to be as diplomatic as possible here – the streaming era has surfaced tensions around the question of whether rightsholders’ deals with tech companies always serve the best interests of the musicians they represent.
With that context, you can see the Protect Working Musicians Act and the Council of Music Makers’ AI fundamentals in a similar light: efforts to ensure that artists, songwriters and producers have more agency in future deals struck with AI companies. And, indeed, to hold their rightsholders to account, as well as the AI firms.