Deepfake Detectors

From SI410
Revision as of 16:13, 12 March 2021 by Etzalel (Talk | contribs)


Introduction

To understand deepfake detectors, you first need to understand deepfakes (you can read more about them here). Deepfakes are a technology that uses artificial intelligence to swap one person's face onto another's in a video. To create a deepfake, you first have someone act out what you want the subject of the deepfake to appear to be doing; the subject's face, body, and voice are then mapped onto that performer in the video. In the beginning, the technology required many videos and photos of the subject to work. As deepfakes grew more popular, the tools needed less and less data, and some now require only a single picture.
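One common face-swap architecture behind deepfakes is an autoencoder with a shared encoder and a separate decoder per identity; the swap comes from decoding one person's latent code with the other person's decoder. The following is a minimal untrained sketch of that idea, with hypothetical toy dimensions and random weights (no real deepfake tool works at this scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 64-value "face" vectors, 16-dim shared latent space.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder, plus a separate decoder for each identity.
encoder = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    """Map a face vector into the shared latent space."""
    return np.tanh(encoder @ face)

def decode(latent, decoder):
    """Reconstruct a face from a latent code using one identity's decoder."""
    return decoder @ latent

# In training, each decoder learns to reconstruct its own identity from the
# shared latent space. The "swap" is then: encode person A's frame, but
# decode it with person B's decoder, yielding B's appearance driven by A's
# pose and expression.
face_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(face_a), decoder_b)
print(swapped.shape)  # → (64,)
```

The key design point is the shared encoder: because both identities pass through the same latent space, pose and expression carry over when the decoder is switched.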

The Rise of Deepfakes

Deepfakes became something more and more people were able to make. While the technology was initially created so that, for example, an actor who died before filming a sequel could still appear in it, it was soon being used for more sinister purposes. People realized they could make it look like someone was saying or doing anything they wanted: celebrities were made to appear to be doing vile things, and politicians to be saying them. This quickly became a problem, and companies such as Facebook and Twitter began banning deepfakes.

Deepfake Detectors

As deepfakes became more prominent and easier to make, it became increasingly important to find ways to detect whether a video or image is a deepfake. Microsoft built a Video Authenticator tool that looks for artifacts such as greyscale pixels at the boundary where the deepfaked face was pasted onto the original. Microsoft trained the tool on a large amount of data, including some from a Facebook face-swap database. Other technologies use what researchers call a soft-biometric signature, essentially the mannerisms and speech patterns of a particular person, to show that a video does not actually depict that person. Because a person's mannerisms and facial expressions are very hard to mimic digitally, soft-biometric detection is currently very accurate, with a detection rate of about 92%.
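The boundary-artifact idea can be illustrated with a simple heuristic: a pixel is "near-greyscale" when its red, green, and blue values are almost equal, so an unusually high fraction of such pixels along a suspected blending seam is one possible warning sign. This is a toy sketch of that check, not Microsoft's actual Video Authenticator; the function name, tolerance, and synthetic image are all hypothetical:

```python
import numpy as np

def grayscale_fraction(image, mask, tol=8):
    """Fraction of masked pixels that are near-greyscale (R ≈ G ≈ B).

    image: (H, W, 3) uint8 RGB array.
    mask:  (H, W) bool array marking the suspected blending boundary.
    tol:   hypothetical max channel spread still counted as greyscale.
    """
    px = image[mask].astype(int)              # (N, 3) boundary pixels
    spread = px.max(axis=1) - px.min(axis=1)  # per-pixel channel spread
    return float((spread <= tol).mean())

# Synthetic demo: a strongly red image with a grey one-pixel "seam" ring,
# standing in for a desaturated blending boundary around a pasted face.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[..., 0] = 200                             # red background
ring = np.zeros((8, 8), dtype=bool)
ring[2, 2:6] = ring[5, 2:6] = True
ring[2:6, 2] = ring[2:6, 5] = True
img[ring] = 128                               # grey seam (R = G = B)

print(grayscale_fraction(img, ring))          # → 1.0  (seam is all grey)
print(grayscale_fraction(img, ~ring))         # → 0.0  (background is red)
```

A real detector would combine many such cues (and learned features) rather than rely on a single threshold, but the sketch shows why desaturated boundary pixels are a usable signal.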

References