
“Does generative AI make our most treasured art less human? No. Nothing could be more human than using tools to create music. New technology makes new ideas possible, and new ideas are what make us human.”

Those are the words of composer Lucas Cantor Santiago, speaking at this month’s Ivors Academy Global Creators Summit.

If his name is familiar, it might be from his 2019 project that used AI to attempt to complete Schubert’s famously unfinished Symphony No. 8.

That was a partnership with Chinese tech firm Huawei, but he has continued to follow and experiment with musical AIs in the years since.

The conference was organised to give songwriters and composers a voice in the music industry’s current debates about AI. Cantor Santiago harked back to comments in a previous session by industry lawyer Cliff Fluet when outlining his views.

“What I find exciting about artificial intelligence is that it’s a hammer. It’s a tool. If you look at it and see something you can bludgeon your neighbour with, that’s on you,” he said.

“If you look at it and see something you can build with, that’s also on you. And maybe I want to hang out with you a little bit more!”

Cantor Santiago also talked about the importance of human creativity in music created using AI technologies. For the Schubert project, for example, “it could generate melodies but it couldn’t really put them in a context that made any sense”.

“I finished Schubert’s unfinished symphony using artificial intelligence. Which is different from artificial intelligence finishing Schubert’s unfinished symphony. I don’t think there’s going to be a day when you press a button on your phone, it prints out a symphony, and an orchestra plays it.”

While he has enjoyed using the tools that exist, Cantor Santiago also suggested that AI companies focusing on compositional aids may be missing a trick: they could instead be helping musicians with the other tasks in their working process.

“There’s one company that sends me emails all the time: ‘If you don’t want to come up with a melody for one of your projects, we’ll help you’. That’s the part of my job I like! That’s the fun part of my job…”

Lucas Cantor Santiago

More musicians had their say in a panel session, moderated by Ivors Academy chair Tom Gray, that concluded the Global Creators Summit.

He was joined by artist, producer and creative director Ilā Kamalagharan (who performs and releases music as ILĀ); composer Jesper Hansen (who is also vice-president of the European Composer & Songwriter Alliance); and Loop Legal partner Lulu Pantin to discuss how AI is changing the way creators work.

Gray started by asking the panel what excites and terrifies them about AI.

“I’m not terrified that AI will create music that is better than humans can, but I’m a bit worried about the trend we are seeing with the younger generations,” said Hansen. “I’m worried that AI will end up producing music that is good enough for that generation, and that will set the benchmark for the music they consume.”

However, ILĀ offered an encouraging view, noting that on TikTok they have seen “a proliferation of really exceptional musicians of that generation. I don’t think it’s dumbing down people’s creativity. I see young musicians who are frighteningly good.”

ILĀ talked about their excitement at the use of AI technologies for assistive purposes: for example for people who are paralysed, or living with conditions like Alzheimer’s or dementia.

“And also for what I would call the democratisation of creativity, where people can interact and create [music] on a simple level very easily, and the benefits of that for mental health.”

On the less positive side, ILĀ suggested that creating music using AI can lead to choice paralysis – “generally when I use it, it takes me longer to create compositions than when I don’t use it” – as well as “the danger of becoming self-referential”.

They also suggested that there is a question to be answered around how the payouts from music created with the help of AI are shared.

“How are AI music companies compensated for their work? Do we think they should get a share of the writing [royalties] if an assistive AI has been used? My view on that is no, absolutely not. But that’s something that is brewing, yet is not talked about.”

‘Nuance is incredibly important in art’

Pantin said that the exciting thing about musical AIs is what human musicians might do with them.

“I can’t wait for Thom Yorke to start using AI. As a music fan I’m just excited to see what might come of it,” she said.

“As a lawyer, I’m very excited about the traceability that AI can potentially provide, if we have good actors and good reporting… We can look and it will say ‘you inputted XYZ…’ as opposed to these problematic arguments about ‘vibe’ and how much Marvin Gaye influenced another musician.”

Pantin agreed with ILĀ’s point about AI potentially lowering the barriers to entry for people who are disabled, visually impaired or physically unable to play instruments.

“What terrifies me? I’m terrified at the sheer volume of music that can be created. That’s already an issue we’re experiencing [with streaming] and it makes us divide up the very meagre pie in even more slices.”

Pantin also said that she worries about “the lack of nuance” in current AI models. “Nuance is incredibly important in art, and they don’t have that discretion. Hopefully with human guidance we can push back on that.”

She also talked about her fears for musicians’ legacies. “If someone can replicate your voice and make a near-perfect copy of it, and you’re suddenly singing with people or about ideas that you would never otherwise have put your mark on, that leads to a very troubling reality for all of us.”

ILĀ raised another issue: how diverse the inputs going into AI-music models are, and what that will mean for their output. It’s something they realised when they first started using generative AI tools to make a music video.

“I found that if I put in a text prompt, everybody came out white! I had to specifically say ‘give me brown-skinned ballet dancers moving in this way’ and even then it wouldn’t. That kind of data bias is going to be there. It’s already there. And the deeper it gets the worse the situation is,” they continued.

“Most [AI music] companies train their machine-learning systems largely using compositions by men of a particular era of classical music, which is very narrow. What I would like is for it to give me something that inspires a different direction, but what it’s giving me is plain boiled potatoes with no sauce. I really want the curry!”

The conversation turned to issues of compensation, transparency and consent – the three key lobbying points around which the music industry is coalescing as it pushes for regulation of creative AI technologies.

Hansen said he is strongly opposed to ‘opt-out’ systems (where musicians have to explicitly opt out of their work being used to train AI models) and wants ‘opt in’ systems (where AI companies can only train using material that they have licensed) to be the standard instead.

“The idea that opt-out came before opt-in is a big disaster from my point of view,” he said. “We would be fooling ourselves if we believed that the big tech companies haven’t already scraped everything that there is to scrape.”

Hansen told the tale of going to the collecting society that represents him and telling it he would like to opt out of AI training deals. “I got an email back cc’ing 11 employees saying ‘we don’t know actually how that works’,” he said. “Transparency is a big issue.”

Gray agreed, voicing his own distrust of how large tech companies might deal with musicians’ concerns. “My distrust of AI is a mirror of my distrust of market capitalism,” he said. “It’s all about concentrations of money and power. That’s where my distrust of AI sits. Is it a thing that’s going to bring people up, or bring people down?”

Jesper Hansen, ILĀ, Lulu Pantin and Tom Gray

Pantin, meanwhile, raised another elephant in the room.

“Another issue that should be discussed more but which is a bit uncomfortable is a tension between the interests of creatives and the interests of rightsholders,” she said.

“Especially in the US, the entire concept of copyright law is really hinged on the commercialisation and profiting off of creatives, not so much creatives being able to benefit from the value of their own work.”

She noted that in some of this year’s AI controversies, such as the deepfake track using cloned voices of Drake and The Weeknd, the rightsholder (UMG in that case) has been acting on behalf of the artists by issuing takedowns. “But they [rightsholders and musicians] are not always in the same boat.”

The panel returned to the theme of AI as assistive tools for composition, with Hansen saying he is open to the idea, yet does not feel pressure to adopt it.

“I’m curious, but I’m also well aware that I’ve composed my best music sitting at the piano, to be honest,” he said. “I’m way too comfortable – and too old! – writing music in the way that I do. And my clients seem to like that. I just like sitting at a piano writing themes… But you would be lying to yourself as a composer today if you were not following along [developments with AI technologies].”

ILĀ said that they are already using AI in “10 or so different ways” in their various projects, and agreed that it’s important for musicians to think about what this technology means, even if they aren’t using it themselves.

“How do we want it to be used in a positive way? And can it help us be more human as composers, by freeing us up from having to do all the boring stuff?”

Gray got a big laugh from the audience with his response on one use he’s already found for automation in his work.

“It’s late at night, I’ve got to get an ad brief done in the next hour and a half. Logic Pro Drummer. My god! I don’t know how many times I’ve put that on a piece of music…”

Pantin, meanwhile, suggested that voice-cloning could have a positive use for songwriters when they’re pitching songs to artists. “Here’s what it would sound like with your voice on,” she said, getting another big laugh when she added that this could address a common sticking point experienced in songwriter pitches: “The lack of imagination of A&Rs!”

She also talked about the negotiations likely to come when artists and songwriters (or their representatives) sign new contracts with music rightsholders, over whether their work can be used to train AIs – even when those contracts grant the rights to use their work in all media.

“There are carve-outs we can start fighting for, although we will get pushback,” she said. “If you are a Taylor Swift, they can start to make changes, but that will be a long process.”

‘There is always going to be a line in the sand for creators’

Pantin talked about working on legacy contracts for artists like James Taylor and Jimi Hendrix, where digital wasn’t even contemplated when they originally signed the deals. This became an issue in recent years with arguments over whether streaming should be counted as a sale or a license – the latter giving a much higher royalty to the artists.

“We got to see the internal memos that they [major labels] had to reveal during litigation… they literally say ‘we need to come to a decision and have a policy about how these are categorised, so which one do we choose?’ And they said ‘Which one costs us less?’” said Pantin.

She suggested that when it comes to future uses of music in AI deals, similar dynamics may apply. “They are going to choose whichever is the least cumbersome,” said Pantin. “That means not getting consent, hint hint!”

Pantin said that she thinks this will ultimately be a job for legislators to deal with – “statutes can make certain contractual provisions unenforceable… rather than leaving it to the artists’ and writers’ attorneys to enact these fundamental differences in the way contracts are done… Congress needs to act.”

“In summary: regulate, goddammit!” quipped Gray, who has played a leading role in the movement to urge regulation of the streaming economy in the UK.

This sparked a discussion with the audience, including a publisher who warned against all rightsholders being “put into the same box” and assumed to be unlikely to act in the best interests of their musicians.

Their point was that their company “has very close relationships indeed with our writers: we strike fair agreements with them, and if they weren’t fair when we started as a company, we put it right”.

They came back to another point previously made at the summit, which was that internal tensions within the music industry might weaken its chances of success when lobbying for regulation.

Gray took the point on board. “It’s important that AI has the capacity to not only redraw the rights landscape, but the allegiances and the way we approach all these problems politically,” he said. “We may end up banding together in completely different formations than we’ve ever typically had before.”

“There is always going to be a line in the sand for creators in terms of the use of us as people and the use of our work. And it has to be very clear. It’s human rights, really,” he added.

“It’s not even a balance between copyright and money and finance and power. It’s really to do with government and people. We’ve all got to stick together and make sure that in a fair, democratic country, the rights of people are protected.”
