If labels and managers in the music industry aren’t thinking about ‘deepfakes’ already, they should be.
Reuters defines deepfake videos as “video that has been stripped of context, mislabeled, edited, staged or even deeply modified through CGI – at times for political or commercial gain”. One example: a video that appears to show a celebrity or politician saying things they simply didn’t say.
“Advances in AI-based technology mean it is now possible to create highly convincing videos showing real people – whether public figures or not – saying anything the creator desires, in any kind of setting,” explained Reuters in its latest investigation, which comes from the angle of trying to spot ‘fake news’ created using deepfake technology.
The full piece is well worth reading: Reuters created its own example, filming one interviewee speaking in one language and a second in a different language, then combining the two sources so that the first interviewee appears to be saying the second’s words. The video illustrates how this technology could, for example, be used to put harmful words into artists’ mouths.
On the positive side, Reuters also noted that the current technology has its flaws: “audio to video synchronisation issues; unusual mouth shape, particularly with sibilant sounds; a static subject”. Deepfake videos can still be identified, then, but as the technology improves, anyone working with public figures – musicians included – should get to grips with it now.