DeepJams is the latest project exploring original AI-generated music


2020 looks set to be the year when a new AI-music startup (or at least a research project) pops up every week. This week’s entry is DeepJams, which falls somewhere between the two. It’s a graduate project from UC Berkeley exploring ways to use machine intelligence to augment traditional music composition.

“We are applying the latest research in the field along with the latest machine learning toolkits and artificial neural networks to train models that can extend original human compositions with equally original machine generated extensions,” as the project’s website explains.

“Our group’s focus centers on creating useable models which exceed not only on quantitative dataset benchmarking exercises, but can also garner positive qualitative feedback via controlled user testing. To this end, we have created fully functional prototypes for every iteration of the academic effort, free for evaluation and use.” So, it’ll be an academic project, but also a “functional product with real use-cases and potentially licensable intellectual property”. You can find out more and hear its outputs on the website.


Stuart Dredge

One response
  • Peter says:

    I have to admit that this is one of the rare areas where I view technical development critically. I mean, music is so much more than just a “random” sequence of notes. These artificially created songs have no soul, no emotion. How can a song be good if it can be generated over and over again? Luckily, and I hope it stays that way for as long as possible, all the artificial composers I have heard so far still have a long way to go.