Boffins find a way to put your facial expression on Donald Trump's mug
Or any other talking head you choose: let a million memes bloom!
Video Boffins are racing towards the goal of making anybody appear to say anything on video, with nobody able to tell whether it's real.
Researchers from Stanford, the Max Planck Institute and the University of Erlangen-Nuremberg are showing off software that can project one person's facial movements onto another person's face, in real time.
“Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion”, the researchers write at their project page.
The project is called Face2Face, and it's demonstrated in this YouTube video:

Since the researchers plan to present their work in June at the IEEE's Computer Vision and Pattern Recognition conference, they're light on detail, but the video's voiceover says Face2Face works with only the RGB data from both the source and the target.
(Thanks, boffins, for using targets like George HW Bush and Arnold Schwarzenegger in the demonstrations. You've given this reporter nightmares, and you've almost certainly pre-seeded Face2Face's inevitable primary use: animated GIF memes for social media. At least you kept Donald Trump back until two-thirds of the way through the video.)
The authors are particularly proud of the way Face2Face re-renders not just the shape of the face, but the interior of the target's mouth, “based on photometric and temporal similarity”.
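The paper isn't out yet, so the exact method is anyone's guess, but a "photometric and temporal similarity" lookup of that sort could plausibly work by retrieving, from the target's own footage, the mouth frame that best matches the current expression, while penalising big jumps from the previously chosen frame so the mouth doesn't flicker. A minimal sketch of that idea (all function names, the cost terms, and the weighting are this reporter's assumptions, not the authors' published method):

```python
import numpy as np

def pick_mouth_frame(query, database, prev_idx, temporal_weight=0.5):
    """Pick the index of the frame in `database` (a list of HxWx3
    uint8 arrays) that best matches `query` photometrically, with a
    penalty for jumping far from the previously selected frame.

    This is an illustrative sketch, not the Face2Face algorithm.
    """
    best_idx, best_cost = None, float("inf")
    for idx, frame in enumerate(database):
        # Photometric term: mean absolute per-pixel colour difference.
        photometric = np.mean(np.abs(frame.astype(float) - query.astype(float)))
        # Temporal term: prefer frames near the last choice,
        # normalised so the penalty is between 0 and 1.
        temporal = abs(idx - prev_idx) / max(len(database) - 1, 1)
        cost = photometric + temporal_weight * temporal
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx
```

Run per output frame, that gives a mouth interior that always comes from real footage of the target, which is presumably why the results stay photo-realistic instead of dissolving into rendered-teeth horror.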
(The Register also notes the presence of the patent-vital keyword “novel” in the abstract, which is as much detail as the project has published so far.)
There's also a “new RGB face-tracking pipeline”, which the researchers compare against previous trackers such as FaceShift, launched in 2014.
All they need now is to re-render an actor's voice as that of George HW Bush or Arnie, and the world will take one more step towards what Philip K Dick called “the Universe of authentic fakes”. ®