“It’s probably fair to say that AI will change the music industry and lots of other industries a lot more than the internet did.”

Ed Newton-Rex, CEO of startup Jukedeck, set the tone at last night’s ‘Music’s Smart Future’ event at the BPI’s headquarters in London. His company has developed artificial intelligence (AI) technology capable of composing music, with more than 500,000 tracks under its belt already.

The impact of AI composers on the music industry was just one of the topics discussed at the conference, alongside machine-learning-based music recommendation tech, music-focused chatbots, and smart voice assistants like Amazon’s Alexa and Apple’s Siri.

Music Ally produced a report for the event about trends and new technology in these areas, which you can download here. We also sat in on the talks and panel session.

How Bastille got bot to trot

Luke Ferrar, head of digital at Polydor, kicked off with some thoughts on the Facebook Messenger chatbot that was recently created to promote British band Bastille’s new album, sending fans news, GIFs and video clips in the guise of an “evil company” called WW Comms.

“It was a bit inconsistent to start with and we ran into a few issues, but it was an attempt: that’s the important thing here,” said Ferrar, who is already working on more chatbots for artists. “The rate at which they’re improving is phenomenal.”

He stressed the need for creativity when launching chatbots for artists. “If we’re just repurposing content that is available elsewhere we’re not necessarily adding value to the campaign. We should be creative in how we’re using bots,” said Ferrar.

“We’ve got to be creative… A lot of people will be looking at bots now and what they do and how they perform, and it’s our duty as marketers to make sure they’re performing to their best extent.”

Ferrar recommended some other chatbots worth checking out, including Channel 4’s bot for TV show Humans, and fashion brand Burberry’s official bot. He also said that Indian music-streaming service Gaana is ahead of the crowd with its chatbot, which sends a daily playlist to users.

“The implications of that are massive, and what the retailers are doing in that space will be interesting,” he said. “It won’t be long until we’re seeing bots as playlists: subscribe to a Spotify bot and [get] the playlists you’re most interested in.”

Ferrar suggested that as chatbots get smarter, they’ll become a natural part of music fandom, but he warned that labels need to think about how they sit alongside other marketing channels.

“We’ve got to be careful though that we don’t cannibalise our artists’ social media and reduce web traffic. It’s scary, so much functionality being built into messaging, because it means we’re going to spend less time elsewhere,” he said.

Ferrar also said that chatbots can get much, much smarter: the current examples are mostly scripted multiple-choice interactions, rather than “true” artificial intelligence.
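
The distinction is worth spelling out: a scripted bot of this kind is essentially a fixed decision tree over predefined options, with no learning involved. A toy sketch in Python (the menu, replies and structure here are hypothetical examples, not the actual Bastille bot):

```python
# Toy sketch of a scripted, multiple-choice chatbot of the kind described
# above: a fixed decision tree, no learning involved.
# All nodes and replies are hypothetical examples.

SCRIPT = {
    "start": {
        "message": "Welcome! What do you want to see?",
        "options": {"1": ("Tour dates", "tour"), "2": ("New video", "video")},
    },
    "tour": {"message": "We play London on Friday.", "options": {}},
    "video": {"message": "Watch the new clip here: <link>", "options": {}},
}

def respond(node_key: str, user_input: str) -> str:
    """Return the next scripted message for a user's menu choice."""
    node = SCRIPT[node_key]
    choice = node["options"].get(user_input.strip())
    if choice is None:
        # Anything off-script falls back to re-prompting: the bot
        # cannot improvise, which is the gap Ferrar is pointing at.
        menu = ", ".join(f"{k}: {label}" for k, (label, _) in node["options"].items())
        return f"Sorry, pick one of: {menu}"
    _, next_key = choice
    return SCRIPT[next_key]["message"]

print(respond("start", "1"))  # -> "We play London on Friday."
```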

“I don’t think there’s enough listening yet… if we start to listen to more people and have more data sources, we can make them more intelligent,” he said. How? “A bot will be able to recognise guilty pleasures… see that I’ve been to the pub and serve me a Little Mix record when I’m on the way home!”

But he said chatbots can ultimately become powerful marketing tools, building up databases of fans who’ve interacted with them, and then ensuring that every single one of those fans gets contacted when there is something relevant to tell them about.

“We’re at a very basic level at the moment, and obviously we’re still learning, but as we get more and more data we’ll be able to personalise these bots further and further… and make recommendations based on that individual,” he said.

“We need to make them functional, fun, helpful and useful… and they need to add value to what’s already existing.”

Jukedeck and the rise of AI composers

Newton-Rex spoke next about Jukedeck, comparing what’s happening now with artificial intelligence to the disruption that swept the music industry after the original Napster launched in the late 1990s.

“I think that AI is probably going to be the same but a hundredfold. It’s probably fair to say that AI will change the music industry and lots of other industries a lot more than the internet did,” he said.

Newton-Rex explained how Jukedeck is using ‘neural network’ technology to compose music, boiling it down to the essential points required to understand the process.

“You pass it a little tune, you pass it a few notes… it learns what note should come next. And you do this and you find that neural networks are very able to write pretty decent music,” he said, showing how Jukedeck’s compositions have improved since its first efforts in 2012.
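
Newton-Rex didn’t go into Jukedeck’s architecture, but the next-note idea he describes maps onto a standard sequence model. A minimal sketch in PyTorch, assuming melodies are encoded as MIDI pitch numbers (the architecture, sizes and random “training data” below are illustrative, not Jukedeck’s actual system):

```python
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitch range

class NextNoteModel(nn.Module):
    """Toy next-note predictor: embed notes, run an LSTM, score the next note."""
    def __init__(self, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, VOCAB)

    def forward(self, notes: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.embed(notes))  # (batch, seq_len, hidden_dim)
        return self.head(h)                  # logits over the next note

model = NextNoteModel()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step: shift the sequence by one so each position
# learns to predict the note that follows it.
seq = torch.randint(0, VOCAB, (8, 32))  # stand-in for real melodies
logits = model(seq[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Generation then works by repeatedly sampling a note from the logits at the final position and appending it to the sequence, which is exactly the “what note should come next” loop Newton-Rex describes.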

The company’s first customers are YouTube channels and creators, who want to quickly create music to use in their videos. “It’s royalty-free and cheap. It’s a cost-play, AI,” he said. “In the early stages at least.”

Newton-Rex addressed the question of whether startups like Jukedeck are going to put composers out of work, pointing out that his background is as a composer, not a coder.

“Are all composers totally screwed? Is it going to take people’s jobs? The short answer, you’ll be glad to hear, is no,” he said, suggesting that “art is about more than just the work itself”. In other words, when people fall for an artist, it’s for their personality and their back-story, not just for the music.

“I don’t think AI is going to attract the screaming fans that Justin Bieber attracts. Not any time soon, anyway,” he said, before adding that humans’ impulse to write music is unlikely to die out.

To illustrate this, he pointed to another profession: truck drivers, noting that it’s one of the most common jobs in the US, with 9m truckers on the roads. Yet it’s also a profession threatened by the development of self-driving trucks.

“There are probably going to be no human truck drivers by 2020, certainly by 2025… but this isn’t going to apply to music. I don’t stop writing music because I don’t have a job or because Hans Zimmer is better than me… I’m not worried about the future of composing,” said Newton-Rex. Instead, he stressed the positive aspects of AI music composition.

“There’s just going to be more art, more music in the world when computers can be creative, and this is a good thing in my book as long as it’s decent music,” he said, before suggesting that AI technology will also make music more accessible.

“Making music is sort of an act for the elite… what we’ve seen on our site is we’ve seen a bunch of people who don’t have the skill to write a backing track for themselves. They’ve made a backing track then written a tune over the top and turned it into a music video,” he said. “AI is going to let a lot more people make music, which I think is really exciting.”

Newton-Rex added that AI will be able to make music more personalised, taking it a step beyond the current ability for streaming services like Spotify to serve up playlists like Discover Weekly and Daily Mix for individual users.

“AI is going to let us really personalise, not just track by track, but note by note, to the power that a film score has. This is how personalised music is going to be when essentially the thing in your pocket is writing it for you,” he said.

Recommendations and AI/human collaborations

The final two presentations came from Moodagent and IBM Watson, addressing different aspects of how artificial intelligence can be used with music.

Moodagent uses neural networks and signal-processing to ingest a catalogue of music and categorise it: for example by mood, genre or the instruments used.

“It is about understanding what is going on inside the music signal: what are the musical characteristics of the audio itself?” said CEO Peter Berg Steffenson.

The technology means Moodagent can create metadata about individual tracks, which can then be written into the catalogue and made the subject of a search: for example, making it easier to whip up a playlist of melancholy or romantic songs.
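
Moodagent hasn’t published its models, but signal-level analysis of this kind typically starts from standard audio descriptors. A rough sketch using the librosa library (the feature choices are illustrative assumptions, not Moodagent’s actual method):

```python
import librosa
import numpy as np

def track_features(path: str) -> np.ndarray:
    """Summarise one audio file into a feature vector that could be stored
    as catalogue metadata. Feature choices here are illustrative only."""
    y, sr = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)             # rough BPM
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # "brightness"
    rms = librosa.feature.rms(y=y)                             # energy
    # Collapse the time-varying features to their means over the track.
    return np.concatenate([
        [float(tempo)],
        mfcc.mean(axis=1),
        [centroid.mean(), rms.mean()],
    ])
```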

“You can start to make discovery solutions based on that,” said Steffenson, who also talked about the business-to-business applications for this technology: for example, analysing a label’s catalogue and then grouping tracks by mood in a visualisation, which might help when working on sync pitches.

“By having models like this you can shoot tracks into a galaxy like this and find out if they are good candidates similar to very well-known stuff,” he said. “Does this track sound a lot like this Bruno Mars track? And you can use that to find other tracks that are similar, and answer pitches made by clients.”
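
The “does this sound like that Bruno Mars track?” comparison is, at heart, a nearest-neighbour lookup over per-track feature vectors like the ones sketched above. A hypothetical illustration using cosine similarity:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means the two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query: np.ndarray, catalogue: dict[str, np.ndarray], k: int = 5):
    """Return the k catalogue tracks whose feature vectors sit closest to
    the query track, e.g. vectors from track_features() above."""
    scored = [(cosine(query, vec), name) for name, vec in catalogue.items()]
    return sorted(scored, reverse=True)[:k]
```

In practice the raw features would be standardised first, since tempo and MFCC values live on very different numeric scales.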

For now, the focus is on finding where new releases fit into playlists. “But the next thing we’re discussing is evaluating during production whether aspects of the track fit into the schema, the profile of what the artist wants to do,” he said.

Steffenson cited the famous example of Bruno Mars’ ‘Locked Out of Heaven’, which was compared to The Police’s ‘Message in a Bottle’. He hopes the kind of technology Moodagent is developing could give early warnings in those cases.

“In the Bruno Mars example, are we getting too close to The Police, could we avoid that by doing something differently?” he said.

Cate Cowburn, IBM Watson channel lead, spoke last in the event’s demonstration section, explaining how IBM’s AI technology is being used for music.

She outlined the three pillars of the technology: first, the ability to understand natural language by ingesting a phenomenal amount of information: “The equivalent of every man, woman and child in the world reading 350 newspapers every day,” as she put it.

Second, to reason: analyse data and present hypotheses based on that data, for review by a human expert. And third, the ability to learn, improving its understanding and expertise with each interaction.

Cowburn talked about a couple of music projects using Watson, including one with producer Alex Da Kid. Watson took in information from the last five years – “New York Times articles, Nobel Prize-winning speeches, lyrics… and then over 2m lines of social media around those topics to find out what the sentiment was around it,” she said.

The result was “an emotional fingerprint of the last five years”, which Alex Da Kid then used to inspire his own creative process. The AI’s output wasn’t replacing him, but rather stimulating him.

Cowburn also talked about an AI tool called Watson Beat, which has been trained to have an “extensive knowledge of music history as well as all the characteristics of music keys”.

Play it 20 seconds of music and it will “deconstruct that and reconstruct a new melody” based on the instructions of the app’s (human) user.

Cowburn also talked about a project that involved Watson creating a melody, and then handing it over to musician Aliocha Thevenet, who wrote a song inspired by it. “Computers are an aid to the creative process but it is the human that puts the art in,” she said.

Copyright, creation and the singularity

The BPI’s event continued with a panel session introduced by CEO Geoff Taylor, who agreed that “we’ll see musicians embrace AI” before turning his thoughts to the longer-term implications of this technology.

Taylor referenced Google futurist Raymond Kurzweil’s prediction of the “technological singularity” to come (around 2045 in Kurzweil’s opinion) when AI will reach a tipping point of intelligence and self-awareness.

“It’s when AI machines become conscious and start to think for themselves. If that’s the case, and machines start to think independently, will the music that they create engage human emotions in the same way that human creativity does?” said Taylor.

“And when they become independently conscious, what will their experience of consciousness and life enable them to create?”

Taylor also raised the question of whether, post-singularity, AI entities will be able to earn copyrights for the music that they create – pointing to the recent legal debate over whether a selfie-taking monkey owned the rights to its photo.

“Animals can’t earn copyright. What about a machine that creates a song?” said Taylor. “Who will own the rights: the person who programmed it or the machine? So there are some interesting long-term issues for us to think about.”

The panel addressed some of those issues, starting with the potential threat that AI composition poses to human musicians.

James Healy, VP of global digital business at Universal Music, agreed with Cate Cowburn about the appeal of human artists being more than their music.

“People want to buy into an artist, to buy into the artist’s back-story, what they’re about,” he said. “If you’re writing library music, corporate background music, this [AI] is going to start eating your lunch sooner rather than later.” But Healy doesn’t see an existential threat yet for other artists.

James Bassett, head of digital creative at Sony Music UK, agreed in the short term, but acknowledged that the longer-term impact of AI remains unclear.

“No less a person than Elon Musk reckons AI is the biggest issue facing humankind in the foreseeable future. And that’s a guy with colonising Mars on his to-do list!” said Bassett, who also agreed with Taylor that in musical terms “AI emotion may not be the same as human emotion”.

Evan Stein, CEO of music search engine Quantone, brought the conversation back to AI as a tool for musicians and other people in the music industry: something that can help them automate the “grunt work” in their jobs, freeing them up to be more creative in other ways.

“Why say no? This is the way that things are going, and it’s making our lives much, much easier,” said Stein. However, he did question the likelihood of AI becoming the sole source of music discovery for humans.

“It’s fanciful to say a machine is going to tell us what we want to hear,” said Stein. “I’m not sure a machine will be interested in human music anyway, it will be much more interested in being a machine, which we don’t understand.”

Gregor Pryor, partner and co-chair of law firm Reed Smith’s global entertainment and industry group, talked about some of the copyright implications, addressing Taylor’s question about who would own the rights to songs created by AI.

“Most people agree now that this would be the creator of the software,” said Pryor, although he warned that the most recent case related to this question was back in 1985.

“If you look at the developments in technology since 1985, they’re so vast and different. It’s something that’s being considered by legal academics and not the courts. There is definitely not clear law on what that looks like going forward,” said Pryor.

He noted the separate questions about who owns the copyright to a musical work – the song – and any given recording of it. Current law around the latter is just as much of a grey area as the former.

“The first owner of the sound recording under current law is the producer. But the law doesn’t recognise that the producer could be a computer,” said Pryor.

The conversation turned to chatbots, with Bassett returning to the earlier point about the current generation of artist bots not being true AI that learns from each interaction and develops its own responses.

“It’s not doing anything itself yet,” he said of the Olly Murs chatbot. Bassett used Microsoft’s infamous Tay chatbot as a cautionary example. “It went from zero to racist very, very quickly. That would have been pretty destructive to Olly Murs!”

Bassett also highlighted the fact that AI technology will pose all kinds of questions for the music industry in the future that aren’t related to its specific use for music.

“The AI that is going to affect the music industry most is self-driving cars,” he said, noting that one reason so many drivers listen to music is because it’s a passive, background form of entertainment that can happen while driving – unlike playing a game or watching a film.

“If I have a second or third generation self-driving car, as I will in 15 years’ time, I can sit in the back and watch a movie, play with my kids, thumb through Twitter. Am I still going to listen to music?” said Bassett, before referring back to Newton-Rex’s earlier comments about self-driving trucks.

“When those nine million truck drivers aren’t in their trucks any more, I worry a bit about what will happen to radio,” he said.

Questions from the audience included fears of an inherent bias in future AI technology if its developers are drawn from too narrow a demographic group.

Pryor admitted that both the tech and music industries need to do better on diversity, arguing that the only way to avoid inherent bias in AI is to build a more diverse workforce. Bassett agreed, although he added that “the inherent bias may not last too long once the machines can truly learn and think for themselves”.

The panel were asked whether a “sentient being could become the perfect pop star” in 40 years’ time, able to sing and converse in a host of languages and interact with fans in ways that a human star simply couldn’t.

“Absolutely, in my lifetime, I would think. Almost 100%,” said Bassett. However, the panelists weren’t so sure about a question on whether AI will ever be able to imagine and create an entirely new genre of music.

“It’s probably likely to accidentally create a new genre in the early stages by not being good enough!” said Healy, although Stein returned to the idea of AI as a tool for human creators.

“It might help somebody to create a new genre. ‘Can you mix the blues with Indonesian music?’ and see what comes out,” he said. “Someone’s going to have to tell the machine what to do, and somebody’s going to have to tell it if it’s good or bad.”

You can download our ‘Music’s Smart Future’ report from the BPI’s website
