Are the new wave of artificial-intelligence startups here to eat musicians’ lunch, or to help them become even more creative? A panel run by Music Ally at the by:Larm conference in Oslo today explored the sometimes-sensitive issues around AI and music.
The panel included Scott Cohen from The Orchard (who stressed he was giving his personal opinions rather than speaking on behalf of the company); Sophie Goossens from Reed Smith; and Helienne Lindvall from Auddly. It was moderated by Music Ally’s Patrick Ross, who kicked off the session by playing Beatles-ish song ‘Daddy’s Car’, which came out of Sony’s AI labs in Paris (with the help of some humans).

Cohen responded in typically tongue-in-cheek form: “Are there musicians in the audience? You’re kinda fucked! Sorry,” he joked.
“Don’t believe him!” said Lindvall, before the panel got down to business, discussing whether AI can create music that’s good for more than just background tracks for online videos – the initial focus for startups like Jukedeck and Amper Music.
“It might start with library or background music but there’s nothing to say that a machine can’t compose music,” said Cohen, who related this to other AI/human debates. “Every time we say ‘no, this is something only a human can do’, somehow a machine ends up doing it better… but this isn’t about replacing humans. We adjust.”
Lindvall suggested that even when AI is creating ‘library music’ for use in online videos, games and other media, humans are still involved: judging what does or doesn’t sound good from the algorithm’s output.
“It can compose music, but even for human beings that have a high hit-rate like big songwriters, the magic of writing something that touches people across the world is intangible, but at the same time it’s a very human thing,” she said.
“I’ll use this comparison: photography. Everybody can take a photograph now. We had that incident where a monkey took a photo. Does he own the copyright or does the person who chose that particular picture own the copyright? I tend to the latter view.”
Cohen disagreed. “Humans have ideas in our head and we want to get them out. Sometimes it’s very tangible real things: we want to give instructions, but sometimes it’s abstract: poetry, paintings and music… abstract ways of conveying feeling and meaning and emotions, and we think only humans can do that,” he said.
But he went on to suggest that music itself is “mathematics” in terms of its notation and structure. “You could say that music can be described by mathematics, but there’s another way to think about it: that maybe all of this kind of art, all this music, it’s not just described by mathematics,” he said.
“Maybe it’s that mathematics is expressing itself. The math itself expresses itself through art, through music… So instead of just learning to play an instrument, you could learn to create algorithms so that the math expresses itself.”
Perhaps, in other words, you’ll learn to play with the algorithms in order to make music. Goossens picked up on that idea, raising the question of AI opening up music-making to a much larger group of people.
“Yesterday you had to be a musician to write beautiful music. Today with AI, maybe you just need to be musical,” she said, noting that she isn’t a trained musician, but considers herself to be ‘musical’ – she has a love of music.
“With the AI engines that are being released at the moment, I could play with it, and perhaps because I’m just musical, maybe I can create amazing things,” she said.
Goossens also compared music to photography: when everyone gained a camera in their pocket thanks to the smartphone, it posed a challenge for professional photographers, eroding some of the ways they earned money.
“Tomorrow if I have in my phone an AI engine that is allowing me to compose AI music, there is no doubt in my mind that there will be some adjustments that the industry needs to cope with,” she said.
Cohen talked about a recent sit-down with Brian Eno, who was showing him some AI that he had built himself. It starts with a drum machine. “Then he writes a bit of code, but it’s not ‘do this two times, do this four times’ but it’s creating averages, then it throws in a randomiser so sometimes he gets some cowbell… there’s a randomness to it,” he said.
“So this drum machine is playing the drums, and all of a sudden you’re playing bass along with it, and thinking ‘this is a good fucking drummer!’,” added Cohen, before explaining that Eno has been investigating the potential for AI to create lyrics based on an archive of newspaper articles, books and popular culture references.
“It spits things out, and a lot of it’s crap. But as a songwriter, not everything you write is good. Some of it’s crap,” said Cohen. “The machine can actually put together stuff that is surprisingly great. Most of it’s shit, but that’s songwriting everywhere!”
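For the curious, here is a minimal sketch of the kind of setup Cohen described: a drum pattern driven by probabilities and a randomiser rather than fixed repeat counts. It is an illustrative toy in Python, not Eno’s actual code, and the instruments and probabilities are invented.

```python
import random

# Toy drum-pattern generator: instead of "do this two times, do this four
# times", each instrument has a probability of sounding on each step, and the
# randomiser occasionally throws in a cowbell. Purely illustrative.
STEPS = 16
HIT_PROBABILITY = {
    "kick": 0.9,
    "snare": 0.5,
    "hi-hat": 0.8,
    "cowbell": 0.1,  # "sometimes he gets some cowbell"
}

def generate_bar(seed=None):
    """Return a bar of STEPS steps, each a list of instruments that hit."""
    rng = random.Random(seed)
    return [
        [name for name, p in HIT_PROBABILITY.items() if rng.random() < p]
        for _ in range(STEPS)
    ]

if __name__ == "__main__":
    for step, hits in enumerate(generate_bar()):
        print(f"step {step:2d}: {', '.join(hits) or '-'}")
```

Run it a few times and the pattern shifts each time: the ‘averages plus randomness’ idea in miniature.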
Ross segued the conversation towards the technology of British startup AI Music, which attempts to ‘shapeshift’ music by creating remixes on the fly: a collaboration with humans rather than replacing them as the core creator.
“AI is always a collaboration between a machine and a human. You have to train the AI to get something on the other side,” noted Goossens. “There’s always going to be someone who’s going to choose the training set of the AI. There’s no AI that’s going to be completely self-sufficient.”
“But then you have the person who decides what is the training set, and you can also have the listener interact with the training set and see how that evolves. There’s always a collaboration between the machine, the person who wrote the code, and the person who chooses how to train that code.”
Cohen pointed to a larger intersection between music and technology that’s always sparked controversy.
“It reminds me of the people who didn’t like it when Bob Dylan plugged in. Well, fuck it: he did!” he said, before comparing this to the current tension around AI music creation. “Get comfortable with it. This is where we’re going. It’s not going to be the same. Is there a studio today that doesn’t use a Pro Tools setup? It used to be all analogue and people talked about the difference. Now they all use it.”
“It [AI] will be a new toolset, a skillset. The skillset an engineer or producer needed in the 50s or 60s is very different from the skillset a producer needs in 2018, and 10 years from now the skillset will change again. You’ll need to know how to use this, or you won’t be in the music industry.”
What about AI remixing songs on the fly, at the behest of listeners, as AI Music has been working on? Cohen sees the appeal to listeners as key to this moving forward.
“All this seems completely natural. If you go back to classical music, there is the seed and then there’s variations on the theme. If you have a song, and you love this composition but you’re going running, it speeds it up, or flips it to a minor key when you’re sad,” he said, as Lindvall pulled a face of mock disgust. “But I tell you what, if you’re still getting the royalties, you’ll be happy!” continued Cohen.
An AI remix opens up some interesting copyright debates, though. A song’s original writers will always own the copyright for the composition, explained Goossens, but further along the process, there are big questions around creative input.
“Under European law people writing code are also authors, so protected by copyright. The coder could say that his creation has played a role in what you’ve created. But you have to look at what the AI engine is doing too,” she said.
“The AI engines need training, so the AI that is going to remix your song has to be trained in remixing – what is a remix, what does a good remix sound like, what are the main characteristics of a remix? – so this AI can learn to make a good remix of your song.”
“That’s when it becomes interesting, complicated or a huge headache. Do you have to look at everything that this machine had to feed on in order to understand how to allocate ownership? Where does it start, where does it end? That’s where from a legal standpoint it gets really interesting.”
Goossens also talked about a recent news story about Elton John working with a company to create “a virtual copy of himself” capable of composing songs in his style.
“The idea is that even when Elton John will no longer be with us, his virtual self will still be able to compose music… That is blowing my mind!” she said, before noting that here, too, there are some legal issues to be sorted out. “What happens to the duration of copyright, because it’s supposed to be calculated based on the duration of the life of the author?” If the author is long dead but their AI self is continuing to compose, in other words, how does this affect copyright duration?
Lindvall brought the conversation back to remixes. “It’s a question of what is the copyright in a song, and what is just somebody playing? The difference is not just between human beings or a machine: there are even questions when there is a human being playing something.”
She cited an example: a young cellist who goes into the studio and comes up with a part for a song, rather than being given sheet music. That would traditionally be seen as an arrangement, but shouldn’t she be seen as a co-songwriter on the overall composition?
“The line between what is creation and what isn’t has been blurred. But I would say that if a machine is doing a remix, that would not actually be a composition. It might make the difference when it comes to the master copyright, but not for the publishing copyright,” said Lindvall.
Cohen took another example: “If I taught a music student and they go on to write a hit song, I don’t say ‘it’s mine because I showed ‘em how to do it’,” he said. In other words, whoever trained an algorithm (or the creators whose works were used to train it) has no claim to a share of its creative output, in his opinion.
Ross raised the question of whether an AI can infringe copyright: if an algorithm creates a piece of music that’s too similar to something a human (or, indeed, another AI) has already written, who gets sued – if anyone – for copyright infringement: the coder(s) who wrote the algorithm? The corporation that owns the AI? The panel agreed that these issues have yet to be tested.
Goossens went back to ‘Daddy’s Car’, noting that it had been trained purely on Beatles songs. “It never fed on anything else. So obviously if you give to that AI machine the whole catalogue of the Beatles then say to the machine ‘create a new song with what you’ve been fed with’… that’s what came out of it. This song that sounds terribly familiar and at the same time a little bit weird.”
But she pointed out that there is a question here about what rights the Beatles have in this situation, although it will surely be rare for a music-creating AI to be trained on a single artist in this way.
“I’d be curious if you fed it with the Beatles and Britney and One Direction. That would be something completely different!” joked Lindvall.
“And that’s what Brian Eno was doing. But you weight the machine more with different things… this gets a two and that gets a 10, and then it spits something out,” added Cohen, sending the audience into a reverie about Brian Eno feeding ‘Toxic’ and ‘What Makes You Beautiful’ into his algorithms.
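For readers wondering what ‘this gets a two and that gets a 10’ might mean in practice, one crude way to weight a training set is to sample catalogues in proportion to those numbers. The sketch below is our own illustration, with made-up catalogue names and weights, not a description of any panellist’s system.

```python
import random

# Toy illustration of weighting a training set: a catalogue with a higher
# weight ("that gets a 10") contributes proportionally more examples to what
# the model learns from. Catalogue names and weights are invented.
CATALOGUE_WEIGHTS = {"beatles": 10, "britney": 2, "one_direction": 2}

def sample_training_songs(songs_by_catalogue, weights, n, seed=None):
    """Draw n songs, picking catalogues in proportion to their weights."""
    rng = random.Random(seed)
    names = list(weights)
    picks = rng.choices(names, weights=[weights[c] for c in names], k=n)
    return [rng.choice(songs_by_catalogue[c]) for c in picks]

# e.g. sample_training_songs(songs, CATALOGUE_WEIGHTS, n=1000) would return
# roughly five times as many Beatles songs as songs from either other catalogue.
```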
Finally, the conversation turned to AI for music discovery: AI as curator and recommender, generating playlists, analysing catalogues and predicting which songs will be popular.
Lindvall suggested that the music industry needs to continue thinking about the power of voice assistants like Amazon’s Alexa, Google Assistant and Apple’s Siri, and cited a recent presentation by Amazon.
“By far the biggest demand from people who use Alexa was ‘play me some music, Alexa’, a lot more than anything else. And the second biggest request was ‘play me some kids’ music, Alexa’… the power of these smart speakers is going to be huge, and people have so little to go on.”
Cohen said that the music industry will need AI curation whether it likes the idea or not, to handle the twin tasks of listening to (or analysing) the tens of millions of musical recordings, and understanding the tastes of each of the hundreds of millions of people listening to them.
“If there’s 30m tracks in a music service… if you listened for 24 hours straight, you could get through between 400 and 500 songs a day. So to listen to 30 million might take you 150 to 200 years if you never slept… So you take 150 years to hear all the music, and then you have to know the music tastes of 150m different people? I don’t think a human can do it effectively. We’re in a position where there’s so much that you’re actually going to need a machine to do this for us,” he said.
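Cohen’s back-of-envelope figures hold up. Assuming an average track length of around three minutes (our assumption; the panel didn’t specify), the arithmetic looks like this:

```python
# Rough check of Cohen's numbers, assuming an average track length of about
# three minutes (an assumption, not something stated on the panel).
catalogue_size = 30_000_000                    # tracks on a large music service
tracks_per_day = 24 * 60 // 3                  # 480 songs in 24 hours of listening
days_needed = catalogue_size / tracks_per_day  # 62,500 days
years_needed = days_needed / 365               # roughly 171 years
print(f"{tracks_per_day} tracks a day, about {years_needed:.0f} years to hear them all")
```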
There were plenty of unanswered questions from the debate, but Cohen delivered one of the more memorable zingers of the panel when he suggested that the grey areas may ensure humans still play an important role.
“Maybe the idea is everyone will lose their existing jobs, but we’re going to need everyone back in to figure this shit out!” he said.