Thanks to A.I., there is finally a way to spot ‘deepfake’ face swaps online



The ability to use deep learning artificial intelligence to realistically superimpose one person's face onto another person's body sounds like good, wholesome fun. Unfortunately, it has a sinister side, too, as evidenced by phenomena like the popularity of "deepfake" pornography starring assorted celebrities. It's part of a wider concern about fake news and the ease with which cutting-edge tech can be used to fraudulent effect.

Researchers from Germany's Technical University of Munich want to help, however — and they are turning to some of the same A.I. tools in their fight. What they have developed is an algorithm called XceptionNet that quickly spots faked videos posted online. It could be used to identify misleading videos on the internet so that they can be removed when necessary. Or, at the very least, it could reveal to users when a video has been manipulated in some way.

“Ideally, the goal would be to integrate our A.I. algorithms into a browser or social media plugin,” Matthias Niessner, a professor in the university’s Visual Computing Group, told Digital Trends. “Essentially, the algorithm [will run] in the background, and if it identifies an image or video as manipulated it would give the user a warning.”

The team started by training a deep-learning neural network with a dataset of more than 1,000 videos and 500,000 images. By showing the computer both the doctored and undoctored images, the machine learning tool was able to figure out the differences between the two — even in cases where they would be difficult for a human to spot.
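The training setup described above — supervised learning on labeled real and doctored frames — can be sketched in miniature. This is an illustrative stand-in only: it uses a simple logistic-regression classifier on synthetic "frame features" rather than the actual Xception convolutional network, and all data and variable names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 2,000 "frames", 64 features each.
# Real frames (label 0) and doctored frames (label 1) are drawn from
# slightly shifted distributions, mimicking subtle manipulation artifacts.
n, d = 2000, 64
X_real = rng.normal(0.0, 1.0, (n // 2, d))
X_fake = rng.normal(0.15, 1.0, (n // 2, d))
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# A logistic-regression "detector" trained with plain gradient descent.
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "fake"
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The real system replaces the hand-rolled classifier with a deep convolutional network, but the core idea is the same: given enough labeled real/fake examples, the model learns decision boundaries from statistical differences too subtle for human eyes.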

“For compressed videos, our user study participants could not tell fakes apart from real data,” Niessner continued. The A.I., on the other hand, could easily distinguish between the two. Where humans were right 50 percent of the time — the equivalent of random guessing — the convolutional neural network got compressed videos right anywhere from 87 percent to 98 percent of the time. This is particularly impressive since compressed images and video are harder to distinguish than uncompressed pictures.
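The 50-percent human baseline is exactly what pure chance yields on a balanced two-class task, which is why it equates to random guessing. A quick simulation (illustrative only; the numbers here are made up) confirms that a coin-flip "detector" hovers at 50 percent:

```python
import random

random.seed(42)
N = 100_000

# Ground-truth labels: each clip is either real (0) or fake (1).
labels = [random.randint(0, 1) for _ in range(N)]
# A "participant" who guesses by coin flip, as in the user study baseline.
guesses = [random.randint(0, 1) for _ in range(N)]

accuracy = sum(g == t for g, t in zip(guesses, labels)) / N
print(f"random-guess accuracy: {accuracy:.3f}")  # hovers around 0.500
```

Against that baseline, the network's 87–98 percent accuracy represents a large, genuine signal rather than a marginal improvement.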

Compared to other algorithms for spotting fraudulent images, XceptionNet is way ahead of the curve. It’s another amazing illustration of the power of artificial intelligence and, in this case, of how it can be used for good.

A paper describing the work, titled “FaceForensics: A Large-scale Video Data Set for Forgery Detection in Human Faces,” is available to read online.


Published at Thu, 12 Apr 2018 21:07:15 +0000
