Let’s start this with the obvious introduction: koalas, kookaburras and Tasmanian devils!
‘Beautiful The World’ is a track created using an AI trained on a catalogue of music, as well as audio samples of the above-named animals. Yes, if you hadn’t guessed, it was created in Australia.
This was the winning entry in the 2020 AI Song Contest, a global competition for musicians, scientists and developers – essentially the Eurovision Song Contest of AI music. Which, perhaps, makes this song the AI equivalent of Abba’s ‘Waterloo’, Bucks Fizz’s ‘Making Your Mind Up’, or Måneskin’s ‘Zitti e Buoni’.

That was a snapshot of what creative AI technology was capable of in 2020, but it’s important to understand that computer music is not a brand-new thing, either in theory or practice. As far back as the early 1950s, a computer called CSIRAC was used to play music – ‘Colonel Bogey’ – although it could not compose original music.
From the 1950s onwards, researchers around the world have been exploring algorithmic composition using computers. In later decades, this coincided with the development of electronic music instruments like drum machines and synthesizers, which further blurred the boundaries between humans making music and machines making music.
Actually, these blurry boundaries are an important principle: most of the time, we are not talking about machines (or now AIs) making music entirely on their own. It is about humans making music using machines or AIs, with varying degrees of input from each party.

Early days of AI music as a business
The focus of this primer is on the modern era of AI music, however. For Music Ally, our coverage of this area began in December 2014, when a British music/tech company called Jukedeck won the startup pitch competition at Le Web, a big tech conference in Paris.
The company had built an AI system to create unique, royalty-free music tracks, which could be used by online video creators, games developers and businesses.
Its pitch carefully stressed that it wasn’t trying to put musicians out of work. “Composers are the bedrock of the musical world (we’d know – we’re composers ourselves!). We just know that not everyone has access to a composer – and that’s where we come in.”
A year later, in December 2015, Jukedeck launched its service officially, letting people create five songs a month for free, then charging them $7 per track after that – and $150 if they wanted to actually own its copyright.
A few months later, in February 2017, a US firm called Amper Music was one of the startups in the first cohort of the Techstars Music accelerator.
It was very similar to Jukedeck, including the fact that its founders were music composers themselves. Like Jukedeck, Amper positioned itself as an alternative to production music libraries, with an AI system capable of generating tracks in the mood, style and length set by its customers.
It used a subscription model, including offering royalty-free, global licences to make use of the music. And here too, the company was keen to not be seen as bad news for human musicians.
“Our perspective is that this is a tool to help enhance the lives of creatives, whether they are musicians or not. It’s a collaborative partner; an enhancing technology rather than a displacing technology,” said its CEO.
Both of these pioneering companies have since been sold. Jukedeck was acquired by TikTok’s parent company ByteDance in 2019, while Amper Music’s assets were bought by photo, video and music library Shutterstock in 2020.
That’s a pointer to the interest being shown in AI music systems and startups by far bigger companies, which we’ll explore in more depth later.

Uses for AI music: Production music
Now it’s time to talk about some of the uses for AI music, with examples. We’ll start with what Jukedeck and Amper Music were doing: production music.
Their systems created it on demand: customers told the system what length of track they wanted, plus other information about its style and mood, and the AI would spit out a track which they could accept, or reject and get another one.
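Neither company published its internals, but the interaction model they describe – set the parameters, generate, accept or reject – is simple to picture in code. Here is a minimal, purely illustrative Python sketch; every name and the toy ‘generator’ are invented, not taken from Jukedeck’s or Amper’s actual products:

```python
import math
import random
from dataclasses import dataclass

# All names here are hypothetical - a sketch of the interaction model,
# not Jukedeck's or Amper Music's real APIs.

@dataclass
class TrackRequest:
    style: str          # e.g. "corporate", "cinematic"
    mood: str           # e.g. "uplifting", "tense"
    duration_secs: int  # exact length, e.g. to fit a video edit

SAMPLE_RATE = 22050

def generate_track(req: TrackRequest) -> list[float]:
    """Toy stand-in for the generative backend: a sine-wave drone whose
    pitch we pretend is chosen to match the requested style and mood."""
    freq = random.choice([220.0, 261.6, 329.6])  # A3, C4 or E4
    n = req.duration_secs * SAMPLE_RATE
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def pick_track(req: TrackRequest, approve) -> list[float]:
    """The accept/reject loop: no editing tools, the customer simply
    regenerates until a take sounds right."""
    while True:
        track = generate_track(req)
        if approve(track):
            return track

# Demo usage: auto-approve the first take.
chosen = pick_track(TrackRequest("corporate", "uplifting", 5), lambda t: True)
print(f"Generated {len(chosen)} audio samples")
```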
Another take on this came from startup Amadeus Code, which originally developed its AI as a compositional aid for musicians. However, in 2019 it launched another service: a royalty-free music library called Evoke Music, using tracks created by its system and then chosen, or curated, by its human team.
These pivots are not unusual in the AI music world, as startups look for business models to match their clever technology.
Another example is Berlin startup Loudly, which started life as a tool for music fans to create AI-assisted remixes of songs. When that ran into obstacles – rightsholders were not keen to license it – the company switched tack and built a tool called Loudly AI Music Studio.
It harked back to the original services of Jukedeck and Amper Music: a tool for YouTubers, games developers and other businesses to create royalty-free music for their content.
Another European startup that has been exploring AI music’s B2B potential is AIVA, which is pitched as “the artificial intelligence composing emotional soundtrack music”.
It is targeting customers including games developers, online video creators and businesses, with Nvidia, Vodafone and TED among its clients for the latter.
It is free for non-commercial use, although AIVA retains the copyright to the music. There are higher tiers for commercial use: one where AIVA keeps the copyright, and one where the customer owns it.
Another interesting thing about AIVA is that it was the first AI music company whose AI was officially recognised as a composer by a collecting society: Sacem in France. However, AI authorship is a controversial topic that we’ll return to later.

Some startups in the AI production music field have zeroed in on specific sectors. Infinite Album is a good example. It focuses on AI-generated music for livestream gamers: people broadcasting their gaming on platforms like Twitch, who don’t want to get hit by copyright takedowns if they use commercial music in the background of their streams.
Infinite Album doesn’t just pump out original music for these creators to use, however. Its system can adapt the music to the game they are playing in real time, while their viewers can also influence the genre, emotion and instruments being used by spending ‘bits’ – Twitch’s tips economy currency – which in turn generates revenues for the creator.
Infinite Album works with a number of popular games used by streamers, including Fortnite, Apex Legends, Valorant and League of Legends. Its beta launched in March 2022.
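Infinite Album hasn’t published how its Twitch integration works under the hood, but the viewer-influence mechanic is, in essence, a stream of purchased events applied to the generator’s settings. A hypothetical Python sketch of that pattern – all names invented:

```python
import queue
from dataclasses import dataclass

# Hypothetical sketch of viewer influence as an event queue;
# Infinite Album's real Twitch integration is not public.

@dataclass
class Influence:
    viewer: str
    parameter: str   # "genre", "emotion" or "instrument"
    value: str
    bits_spent: int  # Twitch's tips currency; a real system might
                     # make bigger tips last longer

music_state = {"genre": "synthwave", "emotion": "calm", "instrument": "pads"}
pending: "queue.Queue[Influence]" = queue.Queue()

def apply_influences() -> None:
    """Drain queued viewer purchases into the generator's settings."""
    while not pending.empty():
        influence = pending.get()
        music_state[influence.parameter] = influence.value

# Demo: a viewer spends 300 bits to make the music triumphant.
pending.put(Influence("viewer42", "emotion", "triumphant", bits_spent=300))
apply_influences()
print(music_state)  # {'genre': 'synthwave', 'emotion': 'triumphant', ...}
```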
DAACI is another startup in this area, which emerged in June 2022 with plans to raise $5m of funding. It has built an AI that “composes, arranges, orchestrates and produces” original music, with dynamic music scores and game soundtracks among its uses.
Beatoven is a startup in India that is making its musical AI available to video and podcast creators. They upload their content to its site; mark their edit points for different sections of the recording; and then choose from 16 moods and five musical genres for each section. Beatoven’s AI then composes music to match those requirements. It raised $1m of funding in early 2022 to continue building its technology and service.

Another new entrant is Soundful, which raised $3.8m of seed funding from investors including executives from UMG and Beatport in April 2022. It’s a tool for creating “ideas or hooks and beats” using AI, with social media influencers among its target customers – the idea being they can create their own music which won’t result in copyright takedowns for their videos.
There was also Venturesonic, a spin-off from a UK-based AI music startup called, well, AI Music! It teamed up with a sonic branding firm called Made Music Studio to launch a service to create customised tracks for brands to use. Its early customers included Publicis Media, Virgin Hyperloop and Polaris.
With AI Music having been acquired by Apple, Venturesonic’s current status is unclear though.
Another aspect of production music where AI systems are working is in the creation of loops, samples and effects. That’s been a very successful business for companies like Splice and Tracklib, which have large databases of audio created by human musicians.
Tapes.ai is a startup trying to take them on with AI-generated sounds. In 2021 it launched its service claiming to be “the future of sample packs”. Its AI creates all the sounds, with its human team then curating them into packs which people can buy – more than 5,000 loops per pack.
Splice, too, is experimenting with AI: a sampler/synth called CoSo that offers ‘complementary sounds’, listening to an audio source then plucking suitable samples from its database. So this is not an AI creating music itself, but one involved in the curation of sounds during a human’s music-making process.

Uses for AI music: Functional music
Now let’s talk about the use of AIs to generate functional music. That means music used for a specific purpose: like helping you to relax; to work or study; or to get to sleep.
Endel is a Berlin-based startup whose technology promises “personalised soundscapes to help you focus, relax and sleep”. It is delivered through a mobile and desktop application, as well as an Alexa skill for Amazon’s smart speakers.
The app works by getting people to tell it what they are doing, then generating a musical soundtrack to suit.
Some of that music is a collaboration between Endel’s AI and human artists. In 2020 it worked with Grimes to create what she described as an ‘AI lullaby’ that was delivered through Endel’s apps. Grimes created the original music and vocals as stems, which Endel’s AI then remixed into a soundscape.
In 2021, it repeated the trick with electronic music artist Richie Hawtin, aka Plastikman. Again, he provided Endel with a collection of musical stems, and the AI turned them into music designed to help people focus.
Music Ally interviewed Hawtin and Endel about the project. Endel’s CEO described the process as “extracting his DNA in the form of a stem pack, and feeding that into the algorithm”. Meanwhile, Hawtin described it as giving “the best possible building blocks to the AI” so that it could create music that “would still make sense within my sonic language, or vocabulary”.

Endel is finding other ways to get its functional music heard. In 2019, for example, it launched a Twitch channel devoted to ‘sleep music’, streaming round the clock.
In 2021, it worked with Mercedes-Benz to create an adaptive soundtrack for drivers, to keep their concentration levels high and their stress levels low. Endel’s AI used signals including the car’s speed, driving style, the weather and road type to sculpt its music.
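Endel hasn’t disclosed how its system weights those signals, but the underlying pattern – normalising inputs and blending them into coarse music parameters – can be sketched. Every mapping and number below is invented for illustration:

```python
from dataclasses import dataclass

# Invented signal-to-music mapping, purely to illustrate the pattern;
# Endel has not published how its system actually weights these inputs.

@dataclass
class DrivingState:
    speed_kmh: float       # current speed from the car
    is_raining: bool       # weather signal
    road_curviness: float  # 0.0 (straight motorway) to 1.0 (mountain pass)

def music_params(state: DrivingState) -> dict:
    """Blend driving signals into coarse parameters for the generator."""
    intensity = min(state.speed_kmh / 130.0, 1.0)  # faster -> more energy
    if state.is_raining:
        intensity *= 0.8  # calm things down in bad weather
    return {
        "tempo_bpm": 60 + 60 * intensity,            # 60-120 bpm
        "density": 1.0 - 0.5 * state.road_curviness  # sparser on demanding roads
    }

print(music_params(DrivingState(speed_kmh=100, is_raining=True, road_curviness=0.2)))
```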
Endel has also turned some of the music created by its AI into traditional albums, and released them on streaming services. It started this in 2019 through a distribution deal with Warner Music Group – inaccurately reported at the time as a ‘label deal’. It has since released more albums.
How is this competing with human musicians? When we checked in February 2022, Endel had just over 37,000 monthly listeners on Spotify. At the time of writing, in July 2022, it has more than 180,000, buoyed by its recent collaboration with James Blake.
Endel’s music has also made it onto some of Spotify’s ambient curated playlists – Submerged, Ambiente and The Quiet Club – so some of that streaming service’s human curators see value in it. Investors do too: the company raised $15m in April 2022.
There are other startups working on AI mood music. Aimi is one example. It launched an app in 2020 which used AI to adapt music by human artists into electronic-music soundscapes – much like Endel has done with Grimes and Plastikman.
The app was free to try, with a monthly subscription for people who wanted to use it for more than 30 minutes a day. In November 2021, Aimi raised a $20m funding round, and hinted at plans to use blockchain technology to pay its human creators their royalties.
Mubert is another startup that began life as an app offering AI-generated music to help people sleep, focus and meditate, across a range of genres.
Since that original launch in 2018, however, it has diversified into other areas: for example a B2B soundtrack-making tool for video creators called Mubert Render; and software for artists who want to use AI in their creative process called Mubert Studio.
It’s another example of one of those pivots, where a startup builds a creative AI with one particular business model or use case in mind, but realises over time that it may suit other uses better – or at least have a better chance of building a business out of them.

Another example of a startup making purposeful AI music is LifeScore, which emerged in 2019 co-founded by British composer Philip Shepherd, and Tom Gruber, who co-founded Siri – the startup that Apple bought to use for its voice assistant of the same name.
LifeScore’s technology doesn’t create music from scratch using AI. Instead, the company records human classical musicians, then splits that music into smaller fragments, which LifeScore’s AI stitches together into adaptive music.
The initial demo involved an app where this music changed as you walked around, matching your pace or turns. The company has also created a reactive musical display for Twitch’s headquarters, and worked with Bentley to create adaptive music for drivers – a similar partnership to Endel’s with Mercedes-Benz.
In March 2022, LifeScore raised an £11m funding round, with major label Warner Music Group one of the investors. The company is planning collaborations with artists as part of its growth.

Uses for AI music: Creative tools
Now it’s time to talk about some of the ways creative AIs are being used by professional musicians – human ones! – as part of their songwriting and recording process.
We talked about Amadeus Code earlier on. Its original product was pitched as an “AI Songwriting Assistant”: a mobile app that would generate melodies based on hundreds of chord progressions learned from being trained on a catalogue of classic songs.
These melodies could then be exported as audio and MIDI files to a digital audio workstation, for the musician to use in their projects. So, rather than replacing them, Amadeus Code was designed as a creative tool: something that could perhaps prod songwriters out of their comfort zone, or nudge them out of writer’s block.
Algoriffix, a Swedish startup, is another company exploring this idea. It unveiled its app in September 2021, pitching it to human musicians as ‘AI as your co-writer’. Users upload an unaccompanied solo or stem, and its algorithm recognises the notes and recommends the best metre and harmony to go with them.
The company was hoping to appeal to both established musicians and younger music students, as a tool to bounce ideas off when writing new songs.
British startup Vochlea is another example of AI being put to use to enhance the creativity of human musicians, rather than attempt to replace it.
The company’s first product was a combination of hardware – a microphone – and software that took the vocals of a musician and used them to control instruments. For example, you could beatbox into the mic, and it would create a drum track based on those sounds. Or you could hum to create a bassline.
Vochlea’s technology aims to help humans get the sounds out of their head and onto their digital audio workstations. In September 2021, Vochlea launched its second-generation product, a software-only desktop app that musicians can use with their own microphones.
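Vochlea’s own models are proprietary, but the core step – tracking the pitch of a hummed melody and quantising it to notes a DAW can use – can be approximated with an off-the-shelf library such as librosa. This is a generic illustration of the idea, not Vochlea’s method, and the input filename is hypothetical:

```python
import librosa
import numpy as np

# Generic hum-to-notes sketch using librosa's pYIN pitch tracker.
# This illustrates the broad idea only; it is not how Vochlea works.

y, sr = librosa.load("hummed_bassline.wav")  # hypothetical recording

# Estimate the fundamental frequency of the hum, frame by frame.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Quantise the voiced frames to the nearest MIDI note numbers.
midi = np.round(librosa.hz_to_midi(f0[voiced_flag])).astype(int)

# Collapse consecutive repeats into a rough note sequence, which a DAW
# could then assign to a bass, drum or synth instrument.
notes = [int(n) for i, n in enumerate(midi) if i == 0 or n != midi[i - 1]]
print(librosa.midi_to_note(notes))
```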

Artists building creative AIs
Some of the most interesting examples of assistive musical AIs have been built by – or at least with the close involvement of – musicians.
Holly Herndon is a key figure in modern AI music. For her 2019 album ‘PROTO’, she and her team built an AI named Spawn, training it on a catalogue of samples of her voice as well as other singers, then using it as a vocalist in its own right on the album.
Then, in 2021, Herndon launched something called Holly+, working with a startup called Never Before Heard Sounds. She described Holly+ as a ‘digital twin’ of herself: people can upload polyphonic audio, and download a version sung by the digital Holly.
Herndon is also building a community and business model around this in the form of a DAO – a decentralised autonomous organisation, one of the big trends in the current web3 movement alongside NFTs and blockchain technology – through which music created with Holly+ will create a flow of royalties that can be used to continue developing the technology.
US group Yacht also built an AI system for their ‘Chain Tripping’ album in 2019, training it on their entire back catalogue so it could create new, original music and lyrics, which the band then shaped into the album’s 10 tracks.
Yacht decided that they would not add anything to the music or lyrics that the AI produced: they could only take things away. Here, the AI did not replace the creativity of the human musicians: it played a key role in the process of making ‘Chain Tripping’, but the humans were still the driving force – both through their past music that the AI was trained on, and their decisions over how its output should be used.
If you’re interested in AI music, it is definitely worth your while following the work of Dadabots, a collective of musicians and developers that has created a series of playful and thought-provoking projects over recent years.
“We make raw audio neural networks that can imitate bands,” is how Dadabots describes its work. Those projects include Relentless Doppelganger, a YouTube livestream of “neural technical death metal” with visuals to match.

Dadabots were also behind Outerhelios, another 24/7 livestream of generative music. Except this time the genre was free jazz, with the neural network trained on John Coltrane’s ‘Interstellar Space’.
Dadabots’ AI systems have also created a neverending stream of music inspired by Cannibal Corpse; a neural beatbox video for a Bell Labs documentary; and a playful deepfake project using AI to simulate Nirvana covering Gorillaz. Oh, and they entered the 2022 AI Song Contest with… Nuns in a Moshpit!
This is experimental art and technological research more than it is a commercial startup, but Dadabots’ work is helping to push the art and science of AI music forward just as much as the companies we have been profiling.

Uses for AI music: Helping non-musicians make music
We’ve talked about professional musicians using creative AIs, but there is just as much to say about this technology being used by non-musicians to make music.
Australian startup Splash is a good bridge between these two topics, in fact. Music Ally has been writing about the company since 2017, when (as Popgun) it unveiled an AI called Alice who could play piano with humans, responding to what they played with its own melodies.
This AI was soon put to other uses: a product called Splash Pro which musicians could use to get AI-generated compositions with piano, bass, drums and vocals to export to their digital audio workstation and use in their projects.
However, Popgun was also turning its AI to consumer uses with a product called Splash. It began as a mobile music-making app where people could create songs by tapping on a grid of beats, loops and sound effects, which were downloaded in themed packs.
The twist was that all those sounds – all those building blocks, to use Richie Hawtin’s expression from earlier – were created by Popgun’s AI. And Splash offered people with no musical training or instrument skills a way to start making music.
Things stepped up several notches in May 2020, when Popgun took Splash to games platform Roblox, with an experience in which people could use the tool for short-form DJ sets, creating music while other players danced and chatted.
More than one million people played it in the first 20 days, and by November 2020 that had grown to 21 million people. Since then, Splash’s Roblox game has expanded with new locations and activities, while Popgun rebranded as Splash in November 2021 after raising a $20m funding round to continue growing.
So, Splash has helped millions of people to create music – many of them kids – using elements created by AI. In terms of consumer reach, it’s the biggest AI music project by far.

We are also seeing creative AI technology make its way into some popular existing music creation apps. BandLab is one of the biggest: more than 50 million people use its app to record and share music.
In March 2022 it added a feature called ‘SongStarter’, which generates beats, melodies and chord changes on demand for BandLab users, who can prompt it with lyrics and even emojis! The company said this was “just the start” of its expansion into assistive AI tools.
Another example of an AI music tool for musicians is Starmony, which launched in July 2021 out of Sweden. It’s in the same ballpark as Vochlea: an app that promises to let artists or producers “create a whole song just by using your voice”, and then share the results.
When the company raised $3.4m of funding that year, it said it was hoping to attract the next generation of artists: “those who are creative at TikTok and sing against ready-made backgrounds”. Its co-founder used to be the boss of X5 Music, a streaming compilations firm acquired by WMG in 2016.
Another AI music startup focusing on non-musicians is US-based Boomy. It launched in 2019 with a tool that involved setting parameters including genre and style, to get a piece of AI music created by its system.
If you don’t like it, you can simply press a button to get another one, until you find something that suits. You can also edit the track – moving, adding and deleting bars for example. This is very similar to what Jukedeck and Amper Music were doing a few years ago.
However, Boomy’s focus is different: rather than creating cheap production music, it wants people to commercially release the tracks they make using its system by compiling them into albums.
Boomy distributes music to more than 40 DSPs including Spotify, YouTube and TikTok, then shares the royalties for each track with the user who made it. The user gets 80% of royalties, and while Boomy owns the copyright, the creator can also use the track for most personal and commercial uses – for example in their own videos and podcasts.
I tried Boomy out in 2019 by releasing two albums of tracks created using its AI.
Professional musicians shouldn’t be worried: my total earnings since then are still less than five dollars! But overall, Boomy users have created more than 7.5 million tracks since it launched – which the startup estimates at just over 7.7% of the world’s recorded music.

Uses for AI music: Virtual artists
One of the most intriguing uses for creative AIs is making the music for virtual artists – an emerging wave of avatar pop stars. These are virtual characters who release music as if they were human artists.
These aren’t new, of course, but in the past the actual music has still been made by humans. Behind Gorillaz, for example, there is Damon Albarn and his ever-revolving group of guest artists. Meanwhile, the game League of Legends has spawned several virtual groups, but here too the actual music was made by humans.
But what happens if you mix AI-generated music with virtual avatars? Some companies are trying.

Authentic Artists emerged in April 2021 with funding from media executive James Murdoch and Linkin Park frontman Mike Shinoda. It later also took investment from WMG.
It is building both virtual artists and a creative AI to provide their music. Its characters include dragons, cyborg humans and, ahem, rabbits. The music is made by AI, and in 2022 – perhaps inevitably – the company is wrapping NFTs around all this too.
Auxuman was a startup co-founded in the UK by musician Ash Koosha. It created an avatar artist called Yona and built an AI system to make music for her, before expanding to a bigger group of virtual artists in 2019.
It has since pivoted however, to work on something called auxWorld: a “metaverse experience platform” due to launch later this year.
The blend of AI music, avatar artists and the metaverse can also be seen in what virtual reality startup Sensorium is doing. It is building a virtual world called Sensorium Galaxy, and music will be a big part of it.
Some of that music will come from human DJs: it has deals with David Guetta, Carl Cox, Steve Aoki and Charlotte de Witte among others. But Sensorium is also building its own avatar DJs.
The first example is Kara Mar, a virtual techno DJ who will perform within Sensorium Galaxy, as well as interacting with fans through text chats and video calls. Importantly, the music she plays will be generated by AI – specifically the AI of Mubert, one of the startups we talked about earlier.

Uses for AI music: New songs for old artists
The final use for AI music we’re going to talk about in this piece is new songs for old – and sometimes dead! – artists. This is the idea of training an AI on the catalogue of an artist, then seeing what new music it comes up with.
This approach produced one of the first public demos of AI music to get widespread attention, in 2016, when Sony Computer Science Laboratories released a song called ‘Daddy’s Car’.
It was created by a system called Flow Machines, which Sony CSL trained on the back catalogue of The Beatles, to see if it could compose a song that was sufficiently Beatles-y. What do you think?

It’s important to understand that this was not purely the work of AI: the output of Flow Machines was subsequently arranged and produced by Benoit Carré, a human musician.
He’s another important figure in the modern era of AI music. In 2019, Carré also released an EP called ‘American Folk Songs’, which took a cappella recordings by legendary US folk singers Pete and Peggy Seeger and Horton Barker, and used an AI system to create music for them.
Another project that sparked lots of headlines was 2021’s ‘Lost Tapes of the 27 Club’. Here, an AI system was trained on songs by Nirvana, Amy Winehouse, Jim Morrison and Jimi Hendrix, then set to create ‘new’ music based on that.
Here, again, humans played an important role in sorting through the output of the AI, and deciding which bits of music could be combined into new songs. In the case of ‘Drowned In The Sun’, the new Nirvana track, a tribute-band singer was brought in to sing the vocals.
The project’s title comes from the fact that all the artists had died aged 27, with the creators using it to raise awareness around the issue of mental health in the music industry.
Finally, you don’t have to be dead to have an AI trained on your catalogue create new music for you. 2020’s Travisbott project was the work of a digital agency called Space150, which trained an AI on music and lyrics by US rapper Travis Scott.
The result was a track called ‘Jack Park Canny Dope Man’, complete with an eerie deepfake music video. This wasn’t a commercial project, nor was it a marketing wheeze for Scott himself: it was more an exploration of what this technology is capable of.

Big Tech and AI music
So far, we’ve talked a lot about startups and artists working with AI music. However, some of the biggest technology companies are also involved in this space.
The SKYGGE project we talked about earlier on – Benoit Carré’s reimagining of American folk songs – was actually created using tools developed at Spotify’s in-house laboratory.
In 2017, the streaming service hired François Pachet to lead a lab building creator tools. Pachet was previously the director of Sony CSL in Paris – yes, the team behind that Beatles-y ‘Daddy’s Car’ song – and one of the most prominent AI music experts in the world.
A few reports at the time jumped to the conclusion that Pachet would be helping Spotify to create a catalogue of AI-generated music to pump into its mood-music playlists, but the company has always maintained that the hire was about building tools for artists to use.
Forbes recently offered an update on that, suggesting that these tools might also be usable by music fans. “One tool will let you tweak a song’s rhythm or melody. Another can take the harmony of a pop song from, say, Justin Bieber or Drake and combine it with the melody and rhythm of a fugue by Schubert or Bach, if that’s your thing.”
Watch Apple closely on this front too. In February 2022, the company acquired AI Music, the British startup we talked about earlier in connection with its Venturesonic spin-off.
AI Music’s original mission was building an AI capable of adapting existing music – or ‘shapeshifting’ it as the company described it. It wanted to be able to adapt songs into different styles, genres and even keys to suit listeners’ needs – for example, turning a track into a deep-house stomper for the gym, or a jazzy chillout track for late-night listening.
Apple often buys startups, and it almost never talks about its plans for them. It could use AI Music’s tech within Apple Music, or its Apple Fitness+ workouts service for example.
It could use the tech to add new assistive tools to its GarageBand music-making software, or deploy it in the iMovie or Final Cut Pro products to help people adapt music to their video content. We don’t know yet: but Apple buying in a team of AI music experts is certainly intriguing.
It’s at this point that we’ll remind you that the first notable AI music startup, Jukedeck, was acquired by TikTok’s parent company ByteDance in 2019. Jukedeck’s CEO became the product director of TikTok’s in-house AI Lab, although he has since moved on (including penning a recent guest column for Music Ally about a different topic).
TikTok’s growth has been driven by commercial music, and as it has added licensing relationships with labels and publishers, it has become an important source of revenues for them, not just promotion.
However, there is clearly potential for AI-generated music to play a role for TikTok too. Imagine a button that you could press while making a video to create an instant, original, AI-generated soundtrack…

Google is another big technology company that has explored AI music, although our sense is that its interest is mainly experimental: it doesn’t see a business here, but does see music as a good way of testing the capabilities of its more general AI platforms.
In 2016, Google announced Magenta, a project exploring whether machine learning technology – one of the subsets of AI – can be used to create compelling art and music.
Projects built with it have included 2017’s AI Duet, an AI piano player that responds to what you, the human, play; and NSynth, essentially an AI-powered software synthesizer capable of creating brand new sounds.
In 2019, Google devoted one of its famous Google Doodles on its search homepage to AI music, with a tool that got visitors to compose a melody, then used AI to harmonise it into Johann Sebastian Bach’s signature style.
Then there was Lo-Fi Player in 2020, an AI-generated stream of lo-fi hip-hop music, of the kind that has proved so popular on YouTube. Built using Magenta, it let people interact with the music by clicking on different objects on screen.
The same year, Google also launched Blob Opera, a set of cute, blob-shaped opera singers powered by an AI trained on human singers.
All of these were fun, innovative demos, but Magenta’s greater value is that it can be used by anyone building their own AI music systems, be they developers or musicians. Google has built the baseline tools for a whole world of AI music possibilities, in other words.
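To give a flavour of those baseline tools: Magenta’s companion library, note-seq, represents music as a simple sequence of timed notes, which Magenta’s generative models consume and produce. A minimal sketch, assuming the note-seq Python package is installed:

```python
# A taste of Magenta's building blocks: the note-seq library's
# NoteSequence format. (pip install note-seq)
import note_seq
from note_seq.protobuf import music_pb2

seq = music_pb2.NoteSequence()
seq.tempos.add(qpm=120)

# A four-note motif: C4, E4, G4, C5, one note every half second.
for i, pitch in enumerate([60, 64, 67, 72]):
    seq.notes.add(pitch=pitch, velocity=80,
                  start_time=i * 0.5, end_time=(i + 1) * 0.5)
seq.total_time = 2.0

# Write the motif out as a MIDI file. A Magenta model such as MelodyRNN
# could instead take this sequence and generate a continuation of it.
note_seq.sequence_proto_to_midi_file(seq, "motif.mid")
```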

Two more big tech companies to tell you about in the context of AI music. First, there is OpenAI, an AI research company whose early backers included Elon Musk. It’s best known for its GPT-3 deep-learning system for writing text. However, the company has also worked on AI music.
In 2019, it released MuseNet, a neural network that could generate four-minute compositions with 10 different instruments – boasting that it could “combine styles from country to Mozart to the Beatles”.
Then, in 2020 it released Jukebox, another neural network that could generate music AND vocals. The company said it could create original music; rewrite existing tracks; take a 12-second snatch of music and complete it into a full song; and create deepfake cover versions in the style of Elvis Presley, Frank Sinatra, Katy Perry and other artists.
As with Google, OpenAI is less about building a business in AI music itself, and more about creating tools that other developers and artists can use in their projects.
Finally, Amazon has also dabbled in AI music, with its 2019 project DeepComposer. This, too, was made for developers: a physical keyboard that came with sample code and training data to help people explore generative music. So this was more of an educational project aimed at machine-learning students than any kind of business related to Amazon’s music services.

Key talking points around AI music
We’ve talked about some of the main use cases for AI music; we’ve introduced you to the key startups operating in this space; and we’ve explored what some of the biggest technology companies are doing.
We’ll finish off by running through some of the important talking points around AI music and its impact on human musicians and the wider music industry.
One of the thorniest debates is about whether an AI can legally be the author of a piece of music. We mentioned that one of those startups, AIVA, has been accepted as an author by collecting society Sacem. But elsewhere in the world, this is a controversial question.
In February 2022, the US Copyright Office rejected an appeal by inventor Steven Thaler against a decision that his Creativity Machine AI could not copyright its work – visual art in this case.
The office’s view is that works produced by a machine or mere mechanical process without any creative input from a human author cannot be copyrighted.
In the context of commercial AI music, this means the legal author will either be the creator of the AI, or the person using that AI to create music.
Another copyright debate around AI music concerns infringement. Or rather, if you train a musical AI on a catalogue of copyrighted music, are you infringing that copyright? The process usually involves making a copy of that catalogue, so the obvious answer may be yes.
However, as Sophie Goossens, a partner at law firm Reed Smith, explained to us in a 2019 interview, it’s not as simple as that. In the US, training an AI on copyrighted music is generally considered fair use, based on a number of past court rulings.
The same is true in countries like Japan, Singapore and China – all emerging players in AI technology more widely. However, in Europe the situation is different. Recent copyright legislation allowed rightsholders to reserve their rights concerning text and data mining of their content – be that music, text, video or other materials.
This created an opportunity for them to license catalogues of training material to AI startups in Europe, although the alternative take is that startups would simply open an office elsewhere in the world, with a more favourable legal climate for such training.
This is an ongoing debate. Trance artist Brian Transeau – known as BT – talked about this in a 2021 interview with Wired. He questioned whether it was right that companies could “take someone’s work and train models with it”, and suggested that they should be “speaking to the artists themselves first”.
Transeau’s view was that there needed to be “protective mechanisms” in place to protect musicians, visual artists, programmers and anyone else whose creative work might be used to train an AI.
That would be a stick approach, but you can see this in terms of carrots too. There are lots of incentives for AI music startups to see human artists as partners to co-create with, rather than simply content fodder for training systems.
As we published this article, British music industry body UK Music – with the support of its various member bodies representing labels, publishers and musicians – was protesting at plans by the UK government that it feared would allow music to be ‘data mined’ by AI companies without needing permission from its creators and rightsholders.
We’ve talked about how startups like Endel and Aimi have worked with artists, and about how artists like Holly Herndon and Yacht have built their own systems. This human-AI collaboration is something that could and should be encouraged.
The more artists get their hands dirty with this technology, the more able they will be to shape its development, as well as the business models around it.
This applies to the wider music industry too. Labels and publishers have been somewhat shy – publicly at least – of getting involved with or investing in AI music startups. Why? One reason was concern about what their human artists would make of such moves.
However, just like artists, music rightsholders can help to shape this technology and how it evolves by engaging with startups and developers building these systems.
Warner Music Group’s investment in LifeScore in March 2022 was an encouraging sign on that front. Labels have emerged as active investors in music/tech startups in recent years, and AI music is certainly a sector worth engaging with.
In the latter part of 2021 and the early part of 2022, it was clear that funding for AI music startups was stepping up a level. Splash and Aimi both raised $20m funding rounds, while LifeScore’s round was $14.4m.
We expect to see more of these eight-digit funding rounds for AI music in 2022 and 2023, as this technology moves from experimental demos to real businesses.
The biggest question around AI music has always been this: will it harm human musicians? Is it an existential threat to their livelihoods? There is no simple answer.
There are undoubtedly areas where AI-made music will compete with human-made music. Production music is one. Hans Zimmer doesn’t have to worry about being beaten to big film scores by AI rivals, but when it comes to quick, cheap soundtracks for social videos or corporate training films… there may be some displacement.
Mood music is another area: if someone is looking for a stream of music to work, study, relax or sleep to, it is absolutely feasible that it could come from an algorithm rather than humans. That said, startups like Endel and Aimi who are working in this area seem keen to partner with human artists rather than dislodge them.
Perhaps the more positive question is this: how will creative AIs help humans to be more creative? These systems can nudge musicians out of writer’s block, or spark ideas out of their musical comfort zone. They might be the next iteration of instruments like drum machines and synthesizers, where it is the creativity and musical talent of the human artist that makes the most of the tool.
It’s also important to think about humans who are non-musicians – or perhaps more accurately, who aren’t musicians yet. BandLab, Splash and Boomy are all examples of how creative AIs can open up music-making to humans who don’t necessarily have musical training or instrumental proficiency.
They might use that music to express themselves on social networks. It may be the stepping stone into making music more seriously. They may just do it for fun – AI music as a tool for enhancing mental health is a trend with much to explore.
The single key takeaway from all this may be: whatever AI music can become, and whatever it can enable humans to do, the music industry can only understand it by leaning into the technology – playing with these systems, talking to the startups, exploring what they’re capable of now, and helping to shape how they evolve in the future.