SciTech

Beyond deep fakes: CMU method transfers style from one video to another

Credit: Anna Boyle/Art Editor

From Bigfoot to King Kong, people have been faking photographs and videos for as long as these media have existed.

With advances in artificial intelligence and machine learning, creating fakes has become much easier in recent years. Researchers in the Robotics Institute here at Carnegie Mellon have developed a new algorithm to “retarget” videos: that is, they can take the content from one video and apply the style of another.

The team approached this problem using a class of algorithms called generative adversarial networks (GANs). This is an unsupervised method, which means the system learns without human-labeled examples. A classic GAN consists of two “competing” systems: one to generate an image, called the generator, and one to classify it as real or fake, called the discriminator. The goal of the generator is to fool the discriminator; the information from the discriminator then feeds back into the generator, helping it make better and better fakes.
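In spirit, that adversarial loop looks something like the following sketch, written here in PyTorch on toy data. It illustrates the general GAN idea only; the networks, data, and hyperparameters are made up for the example and are not the researchers’ actual code.

```python
import torch
import torch.nn as nn

# Toy GAN sketch: the generator turns random noise into fake samples,
# and the discriminator scores samples as real or fake.
noise_dim, data_dim = 16, 2

generator = nn.Sequential(
    nn.Linear(noise_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),  # raw score; BCEWithLogitsLoss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 3.0  # stand-in "real" data
    fake = generator(torch.randn(64, noise_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```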

This study, however, added another dimension to the algorithm. While previous approaches used only spatial information, the researchers combined spatial and temporal information. Their new algorithm, which they call Recycle-GAN, makes the retargeted videos smoother and more believable.
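Conceptually, the extra “recycle” constraint works like this: translate a few frames into the other domain, predict the next frame there, then translate back and compare against the true next frame. Below is a schematic sketch of that idea, where G_xy and G_yx (cross-domain translators) and P_y (a temporal predictor) are hypothetical stand-ins for the paper’s networks, and the exact losses and architectures differ in the real system.

```python
import torch

def recycle_loss(x_frames, x_next, G_xy, G_yx, P_y):
    """Schematic version of the recycle constraint.

    x_frames: list of consecutive frames from domain X
    x_next:   the true next frame in domain X
    G_xy, G_yx: hypothetical translators between domains X and Y
    P_y:      hypothetical temporal predictor in domain Y
    """
    # Spatial step: translate each frame into domain Y.
    y_frames = [G_xy(x) for x in x_frames]
    # Temporal step: predict the next Y frame from the translated sequence.
    y_next_pred = P_y(y_frames)
    # Translate the prediction back and compare with the real next frame.
    x_next_pred = G_yx(y_next_pred)
    return torch.mean((x_next - x_next_pred) ** 2)
```

Penalizing this round-trip error ties the two translators together across time, which is what makes the generated videos consistent frame to frame rather than flickering.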

One inspiration for the work is the film industry. “I think there are a lot of stories to be told,” said Aayush Bansal, a PhD student and the study’s lead author.

Video retargeting has the potential to drastically reduce the cost of creating special effects in movies or of converting black-and-white films to color. It can also be applied to self-driving cars. The cars’ computers are trained to recognize their surroundings using labeled videos. While footage from well-lit scenes is easy to label, labeling videos of stormy or dark scenes is difficult. Recycle-GAN, however, can retarget these easily labeled fair-weather daytime scenes into nighttime or stormy ones, making it a useful training tool for difficult driving conditions.

You can find videos demonstrating the effectiveness of the Recycle-GAN team’s technique on their paper’s webpage. Each video shows side-by-side footage of the original and the result. In the first, a clip of John Oliver is retargeted to the style of Stephen Colbert. The original video of Oliver plays next to the generated one of Colbert, their lips moving, heads bobbing, eyes blinking, and even hands gesturing at the same time. The team has also applied the method to flowers blooming, clouds moving across the sky, and origami birds, in addition to celebrities and politicians.

Of course, in the era of fake news, this technology has led to concern over what are called deep fakes: altered videos that make people appear to do or say things that never actually happened.

The controversy began when videos containing celebrities’ faces superimposed onto those of actors in pornographic movies appeared online. Now, fake videos are also becoming a concern for politicians. On Sept. 13, three members of Congress wrote a letter to the Director of National Intelligence, asking him to look into the national security threat that deep fakes could present. Citing concerns over blackmail and election meddling, the letter states that deep fakes could be “used by foreign or domestic actors to spread misinformation.” Videos like this can already be found online. BuzzFeed, along with comedian Jordan Peele, created a fake video of President Obama as a public service announcement on the dangers of deep fakes. The video, which includes Obama calling President Trump a “dipshit,” encourages vigilance when considering the authenticity of content from the internet.

Some fear that the reverse is also possible: public figures might deny doing or saying things caught on video, dismissing the recordings as fakes.

These issues have created a counter-field: a community of academics, national intelligence agencies, and news organizations working to differentiate between real videos and fakes. Some predict the beginning of an arms race as both fields become more sophisticated. While the average viewer can currently tell that the videos are a little “off,” the technology will only improve with time, making fake videos more and more realistic.

If you are ever in doubt about the authenticity of a video you come across online, remember: you probably won’t catch the real Obama saying “stay woke bitches” anytime soon.