2020 looks set to be the year when a new AI-music startup (or at least a research project) pops up every week. This week’s entry, DeepJams, falls somewhere between the two: it’s a graduate project from UC Berkeley exploring ways to use machine intelligence to augment traditional music composition.
“We are applying the latest research in the field along with the latest machine learning toolkits and artificial neural networks to train models that can extend original human compositions with equally original machine generated extensions,” as the project’s website explains.
“Our group’s focus centers on creating useable models which exceed not only on quantitative dataset benchmarking exercises, but can also garner positive qualitative feedback via controlled user testing. To this end, we have created fully functional prototypes for every iteration of the academic effort, free for evaluation and use.” So it’s an academic project, but also a “functional product with real use-cases and potentially licensable intellectual property”. You can find out more and hear its output on the website.
Photo by Samuel Ramos on Unsplash