Deepfake Detectors

From SI410
Revision as of 16:20, 12 March 2021 by Etzalel (Talk | contribs)


Introduction

To understand deepfake detectors, you first need to understand deepfakes (you can read more about them here). A deepfake uses artificial intelligence to swap one person's face onto another's in a video. To create one, you first have an actor perform whatever you want the target person to appear to be doing; the technology then maps the target's face, body, and voice onto the actor in the video. Early on, the technology required many videos and photos of the target person to work, but as it grew more popular, tools needed less and less data, and some now require only a single picture.

The Rise of Deepfakes

Deepfakes became something more and more people were able to make. While the technology was initially developed so that an actor who died before filming could still appear in a movie's sequel, it was soon put to more sinister uses. People began making fake pornography of celebrities and other notable figures, and politicians were made to appear to say things they never said, leaving them worried for their campaigns and credibility. This quickly became a problem, and companies such as Facebook and Twitter moved to ban deepfakes. Deepfakes are still not very common, so detection matters most for the future, when deepfakes will very likely become a serious issue.

Deepfake Detectors

As deepfakes became more prominent and easier to make, it became increasingly important to find ways to detect whether a video or image was a deepfake. Microsoft began using a video authenticator technology that looks for artifacts such as greyscale pixels at the boundary where the deepfaked face was pasted onto the original face; Microsoft trained it on large amounts of data, including some from a Facebook face-swap database. Other approaches use what researchers call a soft-biometric signature, the distinctive mannerisms and speech patterns of a particular person, to show that a video does not actually depict that person. Because a person's mannerisms and facial expressions are very hard to mimic digitally, soft-biometric detection is currently quite accurate, with a detection rate of about 92%. Deepfake detection is still fairly new, and as deepfakes rise in popularity, the need for accurate deepfake detectors will rise as well.
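The boundary-artifact idea above can be illustrated with a small sketch. This is not Microsoft's actual method, just a hypothetical scoring function under simple assumptions: it takes an image and a (hypothetical) mask marking the pasted face region, and measures how desaturated ("grey") the pixels are in a thin band around the mask's edge, since blending a swapped face often leaves washed-out pixels along the seam.

```python
import numpy as np

def _dilate(mask, steps):
    """Crude binary dilation using 4-neighbour shifts (avoids a SciPy dependency)."""
    out = mask.copy()
    for _ in range(steps):
        out = (out | np.roll(out, 1, 0) | np.roll(out, -1, 0)
                   | np.roll(out, 1, 1) | np.roll(out, -1, 1))
    return out

def boundary_greyness(image, face_mask, band=2):
    """Mean desaturation in a thin band around the face-mask edge.

    image: HxWx3 float array with values in [0, 1].
    face_mask: HxW boolean array marking the pasted face region
               (a hypothetical input; real detectors must locate the face first).
    Returns a score in [0, 1]; higher means the seam pixels are greyer,
    which is one weak signal that a face may have been blended in.
    """
    inner = ~_dilate(~face_mask, band)               # erosion = dilate the complement
    boundary = _dilate(face_mask, band) & ~inner     # thin band straddling the edge
    saturation = image.max(axis=2) - image.min(axis=2)  # 0 for pure grey pixels
    return float(1.0 - saturation[boundary].mean())
```

A real detector would combine many such cues (and, in Microsoft's case, a model trained on labelled face-swap data) rather than rely on one hand-crafted score, but this shows the kind of low-level blending artifact such tools look for.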
