Deepfake Detectors

From SI410
Revision as of 13:04, 19 March 2021 by Tsoumya (Talk | contribs) (Minor grammatical edits & word choice for better readability)


Introduction

To understand deepfake detectors, you first need to understand deepfakes (you can read more about them here). Deepfakes are audio or video fabrications or manipulations synthetically created or generated using deep learning algorithms. Deepfake content is created using two competing AI algorithms: one called the generator and the other called the discriminator. The generator, which creates the phony multimedia content, asks the discriminator to determine whether the content is real or artificial. Together, the generator and discriminator form what is called a generative adversarial network. Each time the discriminator accurately identifies content as fabricated, it provides the generator with valuable information about how to improve the next deepfake. Early on, the technology required a large number of videos and photos of the person being deepfaked. As it grew more popular, the tools began to need less data, and some now require only a single picture.
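The adversarial feedback loop described above can be sketched in miniature. This is a toy illustration, not any real deepfake system: real generators and discriminators are deep neural networks operating on images, whereas here the "real" data are just numbers drawn from a normal distribution around 4, the generator is a single learnable shift b applied to noise, and the discriminator is a one-parameter logistic classifier. All names and parameters are invented for illustration; only the alternating generator/discriminator training pattern comes from the article.

```python
import numpy as np

# Toy GAN loop (assumed/simplified, not an actual deepfake architecture).
# "Real" data ~ N(4, 1); generator G(z) = z + b; discriminator
# D(x) = sigmoid(w*x + c), trained to output 1 on real and 0 on fake.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

b = 0.0          # generator parameter: shift applied to input noise
w, c = 0.1, 0.0  # discriminator parameters
lr = 0.05        # learning rate for both players

for step in range(2000):
    x_real = rng.normal(4.0, 1.0)   # sample from the real distribution
    z = rng.normal(0.0, 1.0)        # generator's input noise
    x_fake = z + b                  # generator output

    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_w = -(1 - d_real) * x_real + d_fake * x_fake
    grad_c = -(1 - d_real) + d_fake
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: the discriminator's verdict is exactly the
    # "valuable information" the generator uses to look more real.
    d_fake = sigmoid(w * (z + b) + c)
    grad_b = -(1 - d_fake) * w
    b -= lr * grad_b

print(f"learned generator shift b = {b:.2f}")
```

After training, b has drifted from 0 toward the real data's mean of 4, i.e. the generator's fakes have become statistically harder to tell from real samples, which is the dynamic that makes deepfakes improve over time.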

The Rise of Deepfakes

Deepfakes became easier for more people to create. While the technology was initially developed for purposes such as finishing a film's sequel when an actor died before shooting was complete, it was soon put to more sinister uses. People began making fake pornography of celebrities and other notable personalities, and politicians were depicted making offensive or damaging comments. Notable politicians began to worry about their campaigns and credibility. This quickly became a problem, and companies such as Facebook and Twitter began banning deepfakes. Deepfakes are still relatively uncommon, so detection matters most for the future, when deepfakes will very likely become a major issue.

Ethical Implications of Deepfakes in Democratic Processes

The mass propagation of deepfakes distorts democratic discourse and erodes trust in institutions, both of which are highly relevant to democratic elections. Deepfakes are a form of disinformation; however, the law regarding campaign speech is of little help in addressing their threat because election law is shaped by a compelling concern for protecting First Amendment rights[1]. This is especially true of deepfake parodies, which are generally seen as a form of free expression. The law tolerates false campaign speech rather than risk violating free speech, for fear that regulating campaign speech would become politicized[2]. Despite the law, however, the harm that deepfakes cause in undermining trust in electoral outcomes remains. Deepfakes, along with other kinds of false speech, distort campaign results and threaten public trust in those results.

Former President Trump retweeted this video of Nancy Pelosi, Speaker of the House of Representatives, in May 2019; it had been altered to supposedly provide evidence of health problems.

For instance, deepfakes can be used to deter voters from voting through threats to release deepfaked pornographic images. Even if it is unclear how many people were actually deterred, once voters become aware of the tactic, trust in the integrity of election results may erode[3]. In another scenario, a deepfake is used to sow mistrust by falsely suggesting that a candidate cheated in a public debate, calling into question the legitimacy of the political process. Overall, reputational harm and misattribution distort how voters perceive and understand candidates, and even a viewer who is aware of the manipulation may believe that others are not, which can further degrade trust in others' ability to make well-informed voting decisions[4]. With this new uncertainty injected into whether voters think their peers are well informed, trust in democratic processes is undermined.

Deepfake Detectors

As deepfakes became more prominent and easier for more people to make, it became increasingly important to find ways to detect a deepfaked video or image. Microsoft built a video authenticator technology that looks for artifacts such as greyscale pixels at the boundaries where the deepfaked face was pasted onto the original. Microsoft trained the video authenticator on large amounts of data, including some from a Facebook face-swap database. Other technologies use what researchers call a softbiometric signature, essentially the mannerisms and speech patterns of a particular person, to show that a video does not actually depict that person. Because it is very hard to digitally mimic a person's mannerisms and facial expressions, softbiometric signatures are currently quite accurate, with a detection rate of about 92%. Deepfake detection is still fairly new, and as deepfakes rise in popularity, the need for accurate deepfake detectors will rise as well.
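The greyscale-boundary cue can be made concrete with a toy heuristic. To be clear, Microsoft's Video Authenticator is proprietary and none of the code below is its actual algorithm; this is a hypothetical sketch of the general idea. It assumes a face bounding box is already known, measures per-pixel saturation (max channel minus min channel, which is near zero for grey pixels), and compares the box's border against its interior: a border that is much less saturated than the face inside it hints at a grey blending seam.

```python
import numpy as np

# Hypothetical boundary-artifact heuristic (illustrative only; not
# Microsoft's actual video authenticator). Greyscale pixels have
# saturation ~0, so a grey paste-on seam drags the border average down.

def boundary_saturation_score(img, box):
    """Ratio of mean saturation on the box border to mean saturation inside.

    img: HxWx3 float array with values in [0, 1].
    box: (top, left, bottom, right), half-open like numpy slices.
    A ratio well below 1 suggests a desaturated (grey) blending seam.
    """
    t, l, b, r = box
    sat = img.max(axis=2) - img.min(axis=2)  # per-pixel saturation
    border = np.concatenate([sat[t, l:r], sat[b - 1, l:r],
                             sat[t:b, l], sat[t:b, r - 1]])
    interior = sat[t + 1:b - 1, l + 1:r - 1]
    return border.mean() / (interior.mean() + 1e-8)

# Synthetic check: a saturated "face" patch whose border was greyed out,
# mimicking the seam left when a fake face is pasted onto a frame.
img = np.zeros((20, 20, 3))
img[5:15, 5:15] = [0.9, 0.2, 0.1]   # colorful face region
img[5, 5:15] = img[14, 5:15] = 0.5  # grey seam: top and bottom edges
img[5:15, 5] = img[5:15, 14] = 0.5  # grey seam: left and right edges
score = boundary_saturation_score(img, (5, 5, 15, 15))
print(score < 0.5)  # → True: grey seam yields a low border/interior ratio
```

A real detector would of course work on detected face regions across video frames and combine many such cues in a trained model; the ratio here just shows why a greyscale fringe around a pasted face is machine-detectable at all.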


References

  1. Hasen RL (2019) Deep Fakes, Bots, and Siloed Justices: American Election Law in a Post-Truth World. St. Louis University Law Review.
  2. Marshall WP (2004) False Campaign Speech and the First Amendment. University of Pennsylvania Law Review 153.
  3. Daniels GR (2009) Voter Deception. Indiana Law Review 43.
  4. Chesney R and Citron DK (2019) Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review 107.