New Facebook AI Software Can Help Detect Deepfake Origins
Facebook scientists say they have developed AI software that not only helps identify “deepfake” images but can also trace where those images came from.
Deepfakes are fake images, audio, and videos created with AI systems in such a way that it becomes difficult for people to distinguish them from the originals. This activity has increased massively over the past few years, and many businesses, celebrities, and even politicians have fallen victim to it. Because these images and videos carry false, misleading information, they can damage people’s dignity and reputation.
Facebook research scientists Xi Yin and Tal Hassner worked with Michigan State University to create software that reverse-engineers deepfake images and determines which AI model was used to create them.
“Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with,” the scientists wrote in a blog post.
“This work will give researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research,” they further said.
The software is trained to detect signs of image manipulation
The software works by training a system to detect imperfections left behind in a deepfake image when it is created. According to the scientists, these imperfections alter the image’s digital ‘fingerprint’ and serve as evidence of manipulation that is often undetectable by the naked eye.
“In digital photography, fingerprints are used to identify the digital camera used to produce an image,” the scientists said.
“Similar to device fingerprints, image fingerprints are unique patterns left on images… that can equally be used to identify the generative model that the image came from. Our research pushes the boundaries of understanding in deepfake detection,” they added.
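To give a rough sense of the idea, the sketch below shows one simple way a model “fingerprint” could be extracted and matched. This is not Facebook’s actual method; the function names, the box-blur residual, and the correlation-based matching are all illustrative assumptions. The premise is only that generative models leave subtle, model-specific patterns in an image’s high-frequency noise.

```python
import numpy as np

def noise_residual(img):
    # Illustrative assumption: estimate the high-frequency "fingerprint"
    # by subtracting a blurred copy of the image from itself.
    # Simple 3x3 box blur using edge padding (numpy only).
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return img - blurred

def match_fingerprint(residual, references):
    # Correlate the image's residual against known model fingerprints
    # (a hypothetical reference library) and return the best match.
    scores = {name: abs(float(np.corrcoef(residual.ravel(),
                                          ref.ravel())[0, 1]))
              for name, ref in references.items()}
    return max(scores, key=scores.get)
```

In practice, the real system learns these fingerprints with neural networks rather than hand-crafted filters, but the pipeline shape is similar: extract a residual pattern, then compare it against signatures associated with known generative models.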