‘Text-to-image’ AIs like DALL·E are all the rage this year, whipping up images based on people’s text prompts and spurring an energetic debate about the cultural implications.
Now Meta has unveiled the logical next step for this technology: a text-to-video creative AI called Make-A-Video. Meta's announcement shows videos the AI has generated based on prompts like ‘a dog wearing a superhero cape flying through the sky’; ‘a teddy bear painting a portrait’; and ‘a fluffy baby sloth with an orange knitted hat trying to figure out a laptop, close up, highly detailed, studio lighting, screen reflecting in its eye’. What times we live in, eh?
But we think people in the music industry should be following this technology closely, and thinking about two questions. First, how might music artists and their teams be able to use text-to-video AIs in creative, interesting ways? Second, what will it mean when fans can also use this technology? What might they want to create based on their favourite (or, indeed, their least favourite) artists, and what are the implications of that?
Plenty to chew on already.