
Last week’s Ivors Academy Global Creators Summit was a conference entirely focused on what AI technologies mean for songwriters, composers and the wider music industry.

We’ve posted a couple of reports from its sessions already: PRS for Music’s survey of its members about AI, and the Human Artistry Campaign’s Ted Kalo talking about regulatory principles.

Here’s our third piece, focusing on the first couple of sessions at the London event.

“We are used to technology causing profound disruption. We have migrated from physical to download and then through to streaming in not much more than a decade,” said composer and PRS for Music Members Council chair Julian Nott in his introduction to the day.

“Artificial intelligence somehow feels different: it’s something even more disruptive than the demise of physical. It’s not solely about the delivery of music to our consumers,” he continued.

“It raises much more fundamental questions about what we understand as music creativity itself… and about what music deserves value and what music doesn’t. Both creative and financial value.”

Nott warned that music creators need to work fast to reach a consensus on how they approach AI, and what kind of rules and laws “that respect creative works and their IP” they would like to see applied to it.

However, he also warned the audience to think about these technologies’ potential positive impacts for their working processes.

“There’s also a danger that we become overly focused on the challenges and ignore the possibilities that AI offers,” he said.

“Many of us are actively embracing the vast range of AI tools, from stem splitting to ideation to mixing and mastering. They do not replace the necessary skill of being a musician or writer. They merely simplify the process, freeing up more time to create.”

Nott’s introduction was followed by a panel session hosted by Cliff Fluet, partner at law firm Lewis Silkin, but also a longtime advisor to AI music startups.

“There’s a lot of nonsense spoken about AI – much of it from me!” he quipped, before offering some non-nonsensical thoughts on the direction of travel for AI technologies.

“AI has to be one of the most consequential, profound technologies in the world today. It will significantly change the way we do business,” he said, before offering a sobering thought.

“It is probably going to be one of the last ever human-only inventions. Pretty much every invention from now on is going to be in some shape or form AI-assisted.”

Fluet took the audience on a trip into the history books, going back to Aristotle in the fourth century BC, musing about a world where “every instrument could accomplish its own work, obeying or anticipating the will of others”.

(Including musical instruments: “the plectrum touch the lyre without a hand to guide them,” as the philosopher put it.)

After a brisk history of modern AI developments, Fluet pointed to the uptake of recent consumer-focused services. It took Instagram 30 months to reach 100 million monthly users, and TikTok nine months. But ChatGPT took just two months to reach that milestone.

Fluet also suggested that for the music industry, AI’s risks and opportunities are a new kind of Rorschach test. He flicked to a slide showing a hammer, and challenged the audience.

“If you perceive a weapon, that’s on you. You can also build things with it. You can make things with it. You can create with it. You can iterate with it,” he said. “AI is no different… It has the potential to become the single greatest tool the music industry has ever seen.”

Fluet also suggested that “copyright and AI can co-exist perfectly well” and set this in the context of the historic relationship between new technological developments, and interpretations and evolutions of copyright law.

“Like in all relationships – the question of who came first is somewhat controversial,” joked Fluet, before encouraging the music industry to approach AI “with the right mindset and the right levels of innovation and understanding of what it can do FOR us, as opposed to what it can do TO us”.

Fluet was joined by a panel of representatives from some of the companies hoping to use AI for musicians and their teams.

Rachel Lyske, CEO of startup Daaci, talked about her company’s efforts to create a tool driven by “composers’ intent” – which they can train on their work to output new music in response to briefs, for example.

“Out of all people, musicians and composers and creative people are really well equipped to be dealing with the new AI,” she suggested. “As musicians, we are technologists already… and we need to be aware of what this technology can do for us as technologists.”

Lyske also criticised the capabilities of some of the musical AIs coming out of big technology companies’ large language models (LLMs).

“The LLMs are having a go with music. It’s missing one vital ingredient and that’s a composer… What makes music? It’s the composer, it’s the human. It’s the intent,” she said.

“The models they are doing are coming up with 20 seconds of passable stuff, but how are you supposed to tell a complex story with that? A film score or a soundtrack to a massively multiplayer online game… Human composers are absolutely essential.”

Daaci’s business is more about helping composers to create adaptable, personalised music, with games among the key use cases for that.

“There is no system now in the world where I can write simultaneously a bespoke score for everybody in this room, never mind 20 million players in a massive online game,” she said.

“We deserve to have some personalised music, but short of me sitting behind every single person’s screen, responding like a silent film pianist to what’s going on [in the game] and then sending my choices to the orchestra in your garden that’s ready to go…”

Daaci’s focus is on getting composers to put their ‘intent’ into its system, which will then be able to produce those kinds of soundtracks.

‘I don’t think it’s cheating. I do think it’s going to create change though’

Lydia Gregory, Figaro

Also on the panel was Lydia Gregory, CEO of Figaro. Its core business is in audio search technology, but earlier this year we wrote about its plans to help streaming services identify AI-generated music and deepfakes.

Gregory talked about the wider range of uses that AI technologies can be put to in music, noting that while the discussion tends to home in on “the moment when music is being created”, the applications are much broader.

She also addressed a question by Fluet about whether using these tools might be seen as ‘cheating’ within the songwriter and composer community.

“Look at the history of technology. Look at loops and samples, or drum machines. 50 years ago, what would our parents and grandparents think about this technology? My mum would have probably called that cheating!” she said.

“I don’t think it’s cheating. I do think it’s going to create change though, and the pace of change is increasing. How do we support the humans in this industry with that pace of change, whether that’s reskilling or learning about the tools?”

Gregory also addressed some of the challenges around AI and music, and suggested that deepfakes should not be seen as a purely negative trend.

“We can all agree that piracy and streaming fraud are problems to be dealt with. Voice cloning is a little more interesting. There are challenges there, but also opportunities, and I think the industry will come to embrace it,” she said.

Gregory said that since the headlines around the fake Drake / The Weeknd track earlier this year “there’s an enormous community – over 100,000 largely teenagers on Discord training these models. It’s really easy. So this is not something that is going to go away.”

She hoped that the music industry will not see this purely as a reason to panic, however.

“There are opportunities: there are going to be new business models. We’re seeing the barrier between creation and consumption blurring. If on TikTok there’s Frank Sinatra performing an Ed Sheeran song, it’s part of fan art. There’s a question about the business model – how you get the right money to the right people – but it’s also creative and fun.”

Gregory also talked about fraud, noting that one big challenge around creative AI is that it makes it much easier to create huge quantities of music tracks. If you’re criminally minded, you can then upload those tracks to streaming services and use bots and other methods to inflate their streams – and thus your royalties.

However, here too Gregory stressed that AI can play a positive role, powering technologies that help streaming services identify patterns of fraudulent streaming so they can be tackled.
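To make the pattern-spotting idea concrete: a minimal sketch of one such signal – listener concentration, where a handful of accounts generate most of a track’s plays – might look like the following. The function name, data shape and thresholds are invented for illustration; they are not anything Deezer, Figaro or any streaming service has disclosed, and real fraud detection combines many more signals.

```python
from collections import Counter

def flag_suspicious_tracks(stream_log, min_streams=100, max_repeat_share=0.5):
    """Flag tracks where a single listener accounts for an outsized share of plays.

    stream_log: list of (track_id, listener_id) stream events.
    Thresholds are illustrative only, not real industry values.
    """
    per_track = {}
    for track_id, listener_id in stream_log:
        per_track.setdefault(track_id, Counter())[listener_id] += 1

    flagged = []
    for track_id, listeners in per_track.items():
        total = sum(listeners.values())
        if total < min_streams:
            continue  # too few streams to judge either way
        top_share = max(listeners.values()) / total
        if top_share > max_repeat_share:  # one account dominates the plays
            flagged.append(track_id)
    return flagged
```

A bot-inflated track (one account looping it) trips the check, while a track with the same play count spread across many listeners does not – which is the essence of what pattern-based fraud detection automates at scale.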

‘Algorithms are made by humans for humans, and that’s really important’

Thomas Bouabca, Deezer

Deezer’s director of data science Thomas Bouabca was also on the panel, although his focus was more on the use of AI within streaming services to recommend music and understand listeners’ tastes. He had some more generally applicable views too, though.

“With algorithms it’s not only machines or computers. Algorithms are made by humans for humans, and that’s really important,” he said. “At every step of the creation there is a human. It’s not just some code, some computers.”

That’s one of the most important points to remember in any debate about AI technologies’ impact on music and the music industry. They don’t exist in a vacuum: they are created by humans and they are used by humans, for better or worse. It is humans’ choices that will dictate how disruptive (in good and bad ways) these technologies become.

The panel ended with some positive thoughts. “We can’t be replaced by AI: it just won’t have the same emotional purpose,” said Lyske. “But you can potentially be on the back foot if you’re not aware of it, and thinking about what it means for you. What can we do with it? It’s up to us!”

Bouabca agreed. “We will still have [human] creators, but they will be enhanced by all these tools. It could help: it’s not magical, and it’s not demonic.”

Gregory said that the music community does need to make its views heard and help to shape these technologies.

“We don’t want to leave it up to the big tech companies, who have the biggest resources and can build the biggest models, to make those choices for us,” she said. “We have to put the time into building the systems that meet our needs. It’s up to us to decide, and I think we should.”
