In January this year, Music Ally spotted a new artificial-intelligence music startup on investment site AngelList.
Called AI Music, the London-based firm said it was “evolving music from a static, one-directional interaction to one of dynamic co-creation”, with its CEO Siavash Mahdavi having a background in 3D printing and an engineering doctorate in evolutionary robotics.
We were intrigued, but the company wasn’t quite ready to talk about its plans. Now it is, having been announced in April as one of two AI startups (Vochlea is the other) to be joining the Abbey Road Red incubator at Abbey Road Studios.
“AI has always been something that’s of interest to me. Even when I was 16, I started an AI society at school,” Mahdavi tells Music Ally.
“I’ve always been fascinated by the concept that we could automate, or intelligently do, what humans think is only theirs to do. We always look at creativity as the last bastion of humanity.”
“When everything gets overtaken, everything gets autonomous, what is there left for us to do? Are we all just going to play guitar and hang out or do sculptures while these drones are flying our food deliveries and we’re getting into autonomous Ubers?”
Mahdavi notes that a lot of development around AI thus far has focused on automation and industrialisation rather than the question of whether computers can be creative.
“I knew how to do startups and I had a PhD in AI. But I’ve always been interested in music too: I play the piano. I thought: ‘Is there something that could be done that combines the two, and maybe philosophically addresses this paradigm of creativity and artificial intelligence?’” he says.
“Can the two meet? Or is it more around automation? So I started exploring quickly what you can do. Could you press a button and then write a symphony?”
That’s a similar origin story to Australian AI music startup Popgun, which Music Ally interviewed recently. It originally planned to “build an AI that’s going to have a top 40 hit” before plotting another path. AI Music, too, cooled on the idea.
“We thought about it, but not for too long. It’s very difficult to do, and I don’t know how useful it is,” says Mahdavi.
“Musicians are queuing up to have their music listened to: to get signed and to get on stage. The last thing they need is for this button to exist.”
So what is AI Music doing instead? Mahdavi describes the company’s technology as “almost like AI with stabilisers” and then later as “augmented intelligence”. It’s about using AI to adapt existing tracks, rather than to create music. It’s less of a Jukedeck or Amper Music, in AI startup terms, and more of a Weav Music.
“We’re not generating music from scratch. That’s explicitly not what we’re doing. We’re looking at using AI to shift the way in which music is consumed,” says Mahdavi. “Can a song that is sent your way interact with you in some way? We’re shape-changing the music.”

Examples? AI Music’s technology may work as subtly as shifting a track’s tempo 10 BPM faster or slower to match someone’s walking or running pace. But it could have a much bigger impact in other contexts.
“It’s that idea of contextual AI. Maybe you listen to a song and in the morning it might be a little bit more of an acoustic version,” says Mahdavi.
“Maybe that same song when you play it as you’re about to go to the gym, it’s a deep-house or drum’n’bass version. And in the evening it’s a bit more jazzy. The song can actually shift itself. The entire genre can change, or the key it’s played in.”
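Mahdavi isn’t sharing the mechanics of that tempo-matching, but the core idea is simple to sketch: pick a playback tempo near the listener’s step cadence while staying inside a tight band around the track’s original BPM. The Python below is purely illustrative rather than AI Music’s code; the function name, the cadence halving/doubling trick and the ±10 BPM clamp are all our assumptions, inferred from his example.

```python
def choose_playback_tempo(track_bpm: float, cadence_spm: float,
                          max_shift_bpm: float = 10.0) -> float:
    """Pick a playback tempo near the listener's step cadence (steps/min)."""
    # Cadence and musical tempo often sit an octave apart (160 steps/min
    # fits an 80 BPM track), so consider halved and doubled cadences too.
    candidates = (cadence_spm, cadence_spm / 2, cadence_spm * 2)
    target = min(candidates, key=lambda c: abs(c - track_bpm))
    # Never drift more than max_shift_bpm from the original tempo.
    low, high = track_bpm - max_shift_bpm, track_bpm + max_shift_bpm
    return max(low, min(high, target))

# A 120 BPM track for a listener walking at 128 steps per minute:
print(choose_playback_tempo(120.0, 128.0))  # -> 128.0, within the 10 BPM band
```

In a real product the chosen tempo would presumably drive a time-stretching algorithm, so the track speeds up or slows down without its pitch changing.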
Mahdavi says that the idea for all this ties back to a ride in a friend’s car last year, when they were playing a Tom Odell song on the stereo. He tagged the track using Shazam and added it to a Spotify playlist, then listened to it lots.
It was only six months later that he realised he’d been listening to a remix rather than the original version, which, when he finally heard it, he wasn’t as keen on.
“One of the inspirations behind what we’re doing is can we take something that happened accidentally like that, and make it happen on purpose? Can we create a remix for someone that connects with them and pulls them into the song?” he says.
“Later on they can explore the original version, the acoustic version, the rest of that album and so on. But by creating that hook initially by understanding the context they’re in? That could be interesting.”
AI Music is planning to launch a minimum viable product (MVP) of its technology by mid-October, when the next Abbey Road demo day happens. It sounds distinctly Tinder-like in its interface.
“You’ll take an existing song and essentially swipe left and right to hear different versions. ‘Let me try a deep house version’ and it just takes the content and generates for you on the fly a new remix of the song,” explains Mahdavi.
“We feel our early adopters are a bit more active in how they want to engage with the tool, but this will actually feed the AI so that we learn how people want to interact with it. Then, as we develop the product, it becomes something that can happen more passively: it will create the version that it thinks best suits you at that point in time.”
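AI Music hasn’t said how those swipes feed its models, but a toy version of the loop is easy to imagine: count right-swipes per remix style and weight future suggestions accordingly. Everything in this hypothetical Python sketch, from the style list to the +1 smoothing, is our own assumption:

```python
import random
from collections import Counter

STYLES = ["deep house", "drum'n'bass", "acoustic", "jazz", "trap"]

def next_style(likes: Counter) -> str:
    """Suggest a remix style, weighted by accumulated right-swipes.

    The +1 smoothing means styles the listener has never approved
    still get an occasional airing.
    """
    weights = [likes[s] + 1 for s in STYLES]
    return random.choices(STYLES, weights=weights, k=1)[0]

# Simulate a short swipe session: True means a right-swipe (approval).
likes = Counter()
for swiped_right, style in [(True, "deep house"), (False, "jazz"),
                            (True, "deep house"), (True, "acoustic")]:
    if swiped_right:
        likes[style] += 1

print(next_style(likes))  # "deep house" is now the most likely suggestion
```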
Suffice to say, this is all quite likely to anger, or at least draw withering disdain from, humans who get paid to create, say, deep-house remixes of music tracks. Some musicians may also be concerned about yielding creative control in this way.
“They will be able to limit by how much a song is remixed. Some people may say maybe it can only shape-change in key by a couple of semitones up or down, and tempo by 10 BPM left or right,” says Mahdavi.
“Someone else who’s maybe more of a playful artist may be okay with it being a free-for-all. ‘If they want to hear it as drum’n’bass or trap or deep house or a bit more jazzy, and if that helps people engage with my art and pulls them in to the original content and everything else I’m doing, then I’m happy with it’. They get to set those boundaries.”
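Neither Mahdavi nor AI Music has said how those artist-set boundaries would be encoded, but as a data structure the idea is straightforward: a per-artist set of limits that any requested transformation gets clamped against. In this hypothetical Python sketch, every name and default value is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class RemixBounds:
    """Hypothetical artist-set limits on how far a remix may drift."""
    max_key_shift: int = 2          # semitones up or down
    max_tempo_shift: float = 10.0   # BPM faster or slower
    allow_genre_change: bool = False

    def clamp(self, key_shift: int, tempo_shift: float) -> tuple[int, float]:
        """Pull a requested transformation back inside the artist's limits."""
        k = max(-self.max_key_shift, min(self.max_key_shift, key_shift))
        t = max(-self.max_tempo_shift, min(self.max_tempo_shift, tempo_shift))
        return k, t

# A cautious artist: small shifts only, no genre swaps.
cautious = RemixBounds()
print(cautious.clamp(key_shift=5, tempo_shift=-14.0))  # -> (2, -10.0)

# Mahdavi's "free-for-all" artist: effectively no limits.
playful = RemixBounds(max_key_shift=12, max_tempo_shift=60.0,
                      allow_genre_change=True)
```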
Even with this control, there are some interesting legal questions around this kind of technology. For example, how it fits into current definitions of copyright is one open question. Is an AI Music ‘track’ actually 100 different remixes of a single recording, or a single recording with 100 (algorithmic) variations?
“There’s a whole legal and licensing challenge around all of this. How do you even monetise a shape-changing song? How do you track it? Will they use technologies like blockchain to look at the different constituent components, and how they form to create the whole version?” says Mahdavi.
“If an app allows you to shape-change a song to the extent that you can’t even hear the original, does it break away and become its own instance? If you stretch something to a point where you can’t recognise it, does that become yours, because you’ve added enough original content to it?”
“And how do you then measure the point at which it no longer belongs to the original? What we’re learning is a lot of this is really quite grey.”

Following the interview, Mahdavi offered more clarification on what this means for AI Music specifically.
“Clearly, we both understand and respect copyright and authors and will obtain permissions where we need to, but we want to reinvent the idea of a new type of personalisation beyond mere adaptation to creating new sounds,” he said.
AI Music is focusing on this idea of “personalised remixing” as the first commercial application for its technology, with a pitch to the music industry that it could help people “engage more with songs” just like he did with Tom Odell’s on that car ride.
For now, it’s less about business and more about experimentation though. “A lot of this is very philosophical and theoretical,” admits Mahdavi. “What if someone listens to a song ten times and doesn’t even realise it was slightly different every time? What effect does that have?”
“Or if you slightly change the key of a song to a key that best suits your mood, does that make you twice as likely to listen to it or not? These experiments have yet to be done, so that’s what we’re looking to do.”
Being part of Abbey Road Red will also, he hopes, get AI Music closer to the artists, producers and engineers, to help the company refine its technology. Although it can do its work with finished tracks, being able to work with the master stems will make life easier.
Mahdavi certainly isn’t short of big ideas though. “Imagine the same playlist but you could have it across 100 different moods. You could literally hear the same songs but as the chilled acoustic versions, or maybe the lyrics are removed because you just want to write something, and it’s difficult to hear people speaking when writing,” he says.
“We’re coming into this with what I call creative naivety. If you don’t know how something’s meant to happen, if you don’t know the box exists, you’re not going to think inside the box. You’re just off somewhere else.”