Last year, we wrote about a Google research project called Magenta, which explores the intersection of machine learning/AI and creativity, including music.
Among its releases was a ‘neural synthesizer’ called NSynth, which used AI technology to “learn the characteristics of sounds, and then create a completely new sound based on these characteristics”.
Now Google’s team has taken that research to the next logical step, turning NSynth into a physical instrument called NSynth Super.
“To create our prototype, we recorded 16 original source sounds across a range of 15 pitches and fed them into the NSynth algorithm. The outputs, over 100,000 pre-computed new sounds, were then loaded into NSynth Super,” explained the company.
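Those new sounds aren’t simple crossfades of the recordings: NSynth encodes each source sound into a compact learned representation, blends those encodings, and decodes the blend back into audio. As a rough illustration only, the sketch below mimics the blending step with placeholder NumPy arrays standing in for real NSynth encodings; producing actual audio would require Magenta’s pretrained NSynth model, which isn’t shown here.

```python
import numpy as np

# Rough sketch of how NSynth-style blending differs from a simple audio mix:
# sounds are combined in the model's learned encoding space, then decoded back
# to audio. The arrays below are random placeholders standing in for real
# NSynth encodings (which would come from Magenta's pretrained model).
rng = np.random.default_rng(seed=0)

# Placeholder encodings for two source sounds, shaped (time_steps, channels).
enc_flute = rng.standard_normal((125, 16))
enc_snare = rng.standard_normal((125, 16))

# Blend the encodings rather than the waveforms; decoding this mixture is what
# yields a genuinely new timbre instead of two sounds playing at once.
mix = 0.5
enc_blend = mix * enc_flute + (1.0 - mix) * enc_snare

print(enc_blend.shape)  # (125, 16)
```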
The prototype can be played via any MIDI source, from digital audio workstations (DAWs) to sequencers and keyboards.
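Because the instrument speaks ordinary MIDI, anything that can emit note messages can drive it. As a hedged illustration, the snippet below uses the third-party mido library (with a backend such as python-rtmidi) to send a single note to a MIDI output; the port chosen here is an assumption and will depend on the MIDI interface connecting to the unit.

```python
import mido  # third-party: pip install mido python-rtmidi

# List the MIDI outputs the operating system exposes; the NSynth Super sits
# behind whatever MIDI interface its MIDI input is wired to, so the exact
# port name is system-dependent.
ports = mido.get_output_names()
print(ports)

# Send one note (middle C) to the first available port as a demonstration;
# in practice you would pick the port for your own interface.
with mido.open_output(ports[0]) as out:
    out.send(mido.Message('note_on', note=60, velocity=100))
    out.send(mido.Message('note_off', note=60, velocity=0))
```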
The downside for musicians is that NSynth Super isn’t an actual product they can buy. Instead, its source code, schematics and design templates have been released as open source.
If you want one, you’ll have to build it.