“We had this thing we were shooting for, which was: can you separate Donovan?”
Don’t worry, this is not a bleak tale of 1960s singer-songwriters being torn limb-from-limb in the pursuit of technological progress.
Jessica Powell, CEO of Audioshake, is telling Music Ally the origin story of her startup and its AI-powered tool for separating music recordings into their component stems.
Donovan’s ‘Season of the Witch’ is, as it turns out, a tough nut to crack for an audio-separation AI. “We just could not get that right! We could not separate the vocal properly,” says Powell.
The former Googler – she was VP of communications for the tech giant – co-founded Audioshake with Luke Miner (formerly head of data science at fintech company Plaid), spurred by their shared love of karaoke while living in Japan, and their equally shared boredom with the available catalogue of karaoke tracks.
“It was a limited repertoire and they’re covers. We really wanted to be able to karaoke to old punk and hip-hop songs that weren’t in the catalogue!” says Powell. “What if you could karaoke to every song in the world? And all the real songs, not the covers?”
An early attempt at separating a song by The Smiths (“It sounded terrible! Demonic!”) kicked off an 18-month development period from 2020, with Audioshake’s technology improving rapidly – helped by publisher peermusic, which offered constructive criticism and encouragement from an early stage.
“They told us that they’d been pitched this stuff before and it didn’t sound very good. ‘It’s super-interesting and maybe one day someone will get it right, but it’s not good enough to use for something like sync’,” says Powell.
“When they listened to our first set of songs, they told us this was the best they’d had so far: ‘You have a way to go but this is very promising’. And then word started to spread…”
What is Audioshake doing and why is it interesting?
In a nutshell, Audioshake’s AI takes a song recording and breaks it up into stems, even if it was not a multi-track recording in the first place.
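Audioshake has not published how its system works, and modern stem separation relies on trained neural networks. But the underlying idea – that different parts of a mix can be pulled apart in the frequency domain – can be sketched with a deliberately simple toy example. Everything here (the sample rate, the two sine tones standing in for “bass” and “vocal”, the 500 Hz cutoff) is an illustrative assumption, not Audioshake’s method:

```python
import numpy as np

SR = 8000  # sample rate in Hz (toy value)
t = np.arange(SR) / SR  # one second of audio

# Toy "mix": a low bass-like tone plus a high vocal-like tone.
bass = np.sin(2 * np.pi * 110 * t)
vocal = np.sin(2 * np.pi * 1760 * t)
mix = bass + vocal

# Frequency-domain masking: split the spectrum at an arbitrary cutoff.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / SR)
cutoff = 500.0  # Hz, chosen only for this toy example

low_stem = np.fft.irfft(np.where(freqs < cutoff, spectrum, 0), n=len(mix))
high_stem = np.fft.irfft(np.where(freqs >= cutoff, spectrum, 0), n=len(mix))

# In this idealised case each recovered stem matches its source exactly
# (up to floating-point error).
print(np.max(np.abs(low_stem - bass)) < 1e-6)
print(np.max(np.abs(high_stem - vocal)) < 1e-6)
```

Real songs, of course, have vocals and instruments overlapping across the whole spectrum, which is why a fixed frequency split fails on them and why production systems instead learn masks from large amounts of training data.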
The mention of sync above points to one of the main potential uses for this technology: to create stems from older catalogue tracks that can be used in sync deals that require adaptation of a piece of music.
“Sync departments told us they lost 30% to 50% of the opportunities because they didn’t have instrumentals. Think of all that catalogue and all those opportunities that those musicians were missing out on because they couldn’t pull their song apart,” says Powell.
Sync is not the only use for Audioshake’s technology, however. The company sees it as a tool for creating remixes and mash-ups, including for buzzy platforms like TikTok: one label’s A&R team used it to create stems and give them to young producers to create clips specifically for TikTok.
Spatial audio is another use: labels need tracks as stems to create new versions in response to demand from streaming services like Apple Music and Amazon Music. Stems may also prove highly useful in emerging areas like gaming.
In all these cases, Audioshake’s technology might be used because the original recordings were not multi-tracked or because the masters have been lost. But even when that’s not the case, it might be a cheaper, faster alternative to digging out those masters to create stems.
How does Audioshake’s business model work?
Audioshake launched commercially in 2021, initially focusing on labels and publishers with a subscription-based service. It has since been used by all three major labels; publishing companies like Hipgnosis, Primary Wave, Spirit, peermusic and Downtown; and distributors including CD Baby.
However, after raising $2m of seed funding in October 2021, the company has been working on plans to widen its customer funnel considerably. Those came to fruition this week with the launch of Audioshake Indie, a version of its service for independent artists and producers.
They can upload recordings to Audioshake, have the system break them down into stems, and then listen to the results before deciding whether to pay – à la carte, or through subscription.
“From the very start, we had a ton of incoming all the time from independent artists wanting to get their stems. We would try to serve them over email, as it’s important to us that you always get to hear the separation before you purchase anything,” says Powell.
“It was very time-consuming creating these stems then sending them back, so we created a platform for indie artists and producers and smaller labels that, because it’s self-serve, can be much cheaper and faster for them.”
“It was painful to launch a service, and see the enthusiasm for it, and to feel like we weren’t doing a good job serving all artists. Independent artists aren’t necessarily on the same budget as larger entities in terms of the tools that they can afford,” she continues.
“You want them to be able to have the same shots at sync opportunities, and sync is the number one thing we have independent artists and producers coming to us right now for.”
Audioshake Indie’s website already has a collection of case studies of how independent artists and labels are using its technology. Houston Kendrick ran a remix contest; Thuy created instrumentals for sync licensing; and Jaxxtone recovered projects he thought had been lost.
Christian label Dreamin’ Out Loud Entertainment reworked an old track, but since the original singer – a nun called Sister Genetter Bradley – was in ill health, it used Audioshake to pull her vocal from the original to use on the new version.
Most intriguingly, AI music collective Dadabots used Audioshake to create stems from the music made by their neural networks – which only generate audio in mono. One AI separating the music made by another AI into stems is quite the thought, although Powell says she doesn’t want to oversell this use for Audioshake’s technology just yet.
Should artists be worried about this technology?
One important point about Audioshake is its desire to not have its technology used for copyright infringement. That’s why it offered its tool first to music rightsholders, and it’s also why it asks Audioshake Indie users to confirm they have the rights to turn the music they are uploading into stems.
“We wanted to go to the industry, rather than just throw a plug-in out there that could have anyone just pull anything apart,” says Powell.
Those tools already exist, too. There is logic in taking a different path when trying to build a business out of audio-separation technology: one that helps musicians and music companies seize opportunities, rather than worry about unlicensed sampling, remixing and other uses of their work.
Unsurprisingly, Audioshake has been thinking carefully about these issues, and about the balancing act between enabling creative derivative works and protecting copyright and creator compensation.
“I would love for us to one day be helping contribute in a very positive way to remix culture. I think it’s really great for extending the life of a song; an incredible tool for fan engagement; and a really wonderful way for other artists to pay homage to the original art,” says Powell.
“I don’t think the ecosystem right now works very well for the artists or for remixers. The artists don’t get paid for the lion’s share of remixes: they’re not detected or claimed. And the remixers don’t have their share in that song either.”
“I’m cautiously optimistic that we’re going to be in a better place. I think everything’s going to be remixed and mashed up, but I think it’s important when that happens that artists are compensated for that: a way that the original artist is getting to share in that song’s success, and the remixer too.”
What is the bigger picture that Audioshake sees?
Platforms like TikTok are also playing a role in where Powell sees remix culture going: for example, its Duet feature, which lets people layer their singing or instrument-playing onto other people’s videos.
“We genuinely believe that so many future music experiences are going to be built on this idea that all content, not just music, is going to be atomised,” she says.
“Whether it’s TikTok or whatever the next TikTok is, I believe as users we’re going to be able to manipulate audio and play with it, in the same way that we do with images and video nowadays… Why wouldn’t that happen with music? It’s such a visceral medium.”
“Content is going to get mashed up, it’s going to get pulled apart, and that is going to drive all different kinds of engagement. Plus there’s anything immersive, and gaming and fitness: things that are adaptive and dynamic in their use of music, that can be built on stems.”
But as she said earlier, Powell sees all this as a set of opportunities for artists to seize if they want to: Audioshake as a tool for them to pull their own music apart and send those stems out into the world, rather than a tool for this to happen without their permission.
She cites Green Day as one example so far. The band recently celebrated the 30th anniversary of their ‘Kerplunk’ album – for which they don’t have the original masters any more – by using Audioshake to create a guitar-less version of ‘2000 Light Years Away’ and release it on TikTok, encouraging fans to use it to record themselves playing the guitar part.
“It’s pretty exciting for artists – IF they want those opportunities,” says Powell, before stressing that she understands and respects the views of musicians who prefer their work to remain un-pulled-apart.