Deepfake Detectors

From SI410
Revision as of 12:38, 19 March 2021 by Tsoumya (Talk | contribs) (New Section: Ethical Implications of Deepfakes in Democratic Processes)


Introduction

To understand deepfake detectors, you first need to understand deepfakes (you can read more about them here). A deepfake uses artificial intelligence to swap one person's face onto another's in a video. To create a deepfake, an actor first performs the scene you want the target person to appear in; the software then maps the target's face, body, and voice onto the actor in the video. Early systems required many videos and photos of the target person for the technology to work. As the technology became more popular, it needed less and less data, and some tools now require only a single picture.

The Rise of Deepfakes

Deepfakes became something more and more people were able to make. While the technology was initially developed for legitimate film production, such as completing a deceased actor's scenes in a sequel, it was soon put to more sinister uses. People created fake pornography of celebrities and other notable figures, and politicians were made to appear to say politically damaging things, leaving many worried about their campaigns and credibility. This quickly became a problem, and companies such as Facebook and Twitter moved to ban deepfakes. Although deepfakes are still not very common, detection matters now because deepfakes will very likely become a big issue in the future.

Ethical Implications of Deepfakes in Democratic Processes

The mass propagation of deepfakes distorts democratic discourse and erodes trust in institutions, both of which are highly relevant to democratic elections. Deepfakes are a form of disinformation; however, the law regarding campaign speech is not very helpful in addressing the threat of deepfakes because election law is shaped by a compelling concern for the protection of First Amendment rights[1]. This is especially true of deepfake parodies, which are generally seen as a form of free expression. The law tolerates false campaign speech rather than risk violating free speech, for fear that regulating campaign speech would itself become political[2]. Despite the law, however, the harm that deepfakes cause in undermining trust in electoral outcomes remains. Deepfakes, along with other kinds of false speech, distort campaign results and threaten public trust in those results.

For instance, deepfakes have been used to deter voters from voting by threatening to release deepfaked pornographic images. Even if it is unclear how many people were actually deterred, once voters become aware of the tactic, trust in the integrity of election results may erode[3]. In another scenario, a deepfake instills mistrust by falsely suggesting that a candidate cheated in a public debate, calling into question the legitimacy of the political process. Overall, reputational harm and misattribution distort how voters perceive and understand candidates, and even a viewer who is aware of the manipulation may believe that others are not, further degrading trust in other people's ability to make well-informed voting decisions[4]. With these new uncertainties injected into the question of whether voters think their peers are well informed, trust in democratic processes is undermined.

Deepfake Detectors

As deepfakes became more prominent and easier to make, it became increasingly important to detect whether a video or image was a deepfake. Microsoft started using a video authenticator technology that looks for artifacts such as greyscale pixels at the boundary where the deepfaked face was pasted onto the original face; Microsoft trained it on large amounts of data, including a Facebook face-swap database. Other technologies use what researchers call a soft biometric signature, essentially the mannerisms and speech patterns of a particular person, to show that a video does not actually depict that person. Because it is very hard to digitally mimic a person's mannerisms and facial expressions, soft biometric detection is currently quite accurate, with a reported detection rate of about 92%. Deepfake detection is still fairly new, and as deepfakes rise in popularity, the need for accurate detectors will rise as well.
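The boundary-artifact idea can be illustrated with a toy sketch. This is not Microsoft's actual method, and the function name and threshold logic here are hypothetical: the sketch simply compares image-gradient strength in a thin ring around a suspected face region against the gradient level inside it, since a pasted face often leaves a sharper seam at its boundary than the surrounding skin texture would produce.

```python
import numpy as np

def boundary_artifact_score(image, box, ring=4):
    """Crude blending-seam score for a suspected pasted-face region.

    image: 2-D float array (grayscale). box: (top, left, bottom, right).
    Returns the ratio of mean gradient magnitude in a thin ring around
    the box to the mean gradient inside it; a large ratio suggests a
    sharp seam, one sign of a crude face paste. (Illustrative only.)
    """
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)

    t, l, b, r = box
    h, w = grad.shape
    ring_mask = np.zeros((h, w), dtype=bool)
    ring_mask[max(t - ring, 0):min(b + ring, h),
              max(l - ring, 0):min(r + ring, w)] = True
    ring_mask[t + ring:b - ring, l + ring:r - ring] = False  # drop interior

    inner = grad[t + ring:b - ring, l + ring:r - ring]
    return float(grad[ring_mask].mean() / (inner.mean() + 1e-8))

# Synthetic demo: a smooth ramp vs. the same ramp with a patch pasted in.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
fake = img.copy()
fake[20:44, 20:44] = 0.9  # hard seam where the "face" was pasted

score_real = boundary_artifact_score(img, (20, 20, 44, 44))
score_fake = boundary_artifact_score(fake, (20, 20, 44, 44))
```

On the untouched ramp the gradient is uniform, so the score is near 1; the pasted patch produces a much larger score because its interior is flat while its seam is sharp. A real detector would of course learn such cues from data rather than use a hand-set ratio.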



References

  1. Hasen RL (2019). Deep Fakes, Bots, and Siloed Justices: American Election Law in a Post-Truth World. St. Louis University Law Review.
  2. Marshall WP (2004). False Campaign Speech and the First Amendment. University of Pennsylvania Law Review 153.
  3. Daniels GR (2009). Voter Deception. Indiana Law Review 43.
  4. Chesney R and Citron DK (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review 107.