Deepfake Ethics

From SI410

Revision as of 21:59, 9 February 2023

Deepfakes are a type of artificial production in which real images and videos are overlaid with other imagery to create a false but convincing visual [1]. The idea of creating fake images and videos is not a new phenomenon: internet users have been digitally altering images with software such as Photoshop for years. The significance of the deepfake, however, lies in the quality with which it is created. Advances in machine learning and artificial intelligence, including generative neural networks such as autoencoders and Generative Adversarial Networks (GANs), have enabled highly realistic productions, many of which are challenging to identify as artificial [1].

Deepfakes are created using two algorithms working in conjunction: a generator and a discriminator. The generator's role is to create variations of the fake content to be judged by the discriminator. The discriminator then tries to determine whether the content can be identified as artificial, and if it can, sends this feedback to the generator, which improves upon it in the next iteration of the deepfake [2]. This adversarial loop is the foundation of the Generative Adversarial Networks often used to create deepfakes, which have been heavily improved upon in recent years to produce highly realistic artificial productions.
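The generator–discriminator loop described above can be sketched in miniature. The toy example below is purely illustrative and not any real deepfake pipeline: it trains a two-parameter linear "generator" against a logistic "discriminator" on a one-dimensional stand-in for real data, with all model shapes, hyperparameters, and names invented for demonstration. The feedback step is the same in spirit: the discriminator's verdict on the fakes is the signal the generator uses to improve.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1.25) -- the distribution the generator must mimic.
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: g(z) = a*z + b, a linear map applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), an estimate of the probability x is real.
w, c = 0.1, 0.0

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = sample_real(batch)

    # Discriminator step: lower -log D(real) - log(1 - D(fake)) by gradient descent.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: the discriminator's verdict on the fakes is the feedback
    # signal; lower -log D(fake) so the fakes look more "real" to D.
    d_fake = sigmoid(w * fake + c)
    dfake = -(1 - d_fake) * w          # gradient of -log D(fake) w.r.t. each fake sample
    a -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

# After training, the generated samples have drifted toward the real mean of 4.0.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(fake_mean)
```

Real deepfake GANs follow this same alternating scheme, but with deep convolutional networks over images in place of the scalar models here.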

Backlash

Despite the potential for artistic and creative uses in this new technology, deepfakes have garnered significant backlash in recent years for the ability to produce, amongst other things, celebrity pornography, revenge pornography, fake news, bullying, defamation, and blackmail, all with the possibility for serious harm [1]. This has raised a variety of ethical concerns surrounding the usage and proliferation of deepfakes that are in some way harmful as well as questions surrounding ownership of a person's face and voice once they die.

Ethical Arguments

In discussing the ethical concerns of deepfakes in recent years, one of the first distributors of the technology stated, “I’ve given it a lot of thought, and ultimately I’ve decided I don’t think it’s right to condemn the technology itself – which can of course be used for many purposes, good and bad” [3]. This is a common argument in the technology ethics debate: technology is value-neutral; it is only the user that decides if it is good or bad. There are many people who sit in this camp defending the merits of deepfake technology, justifying it through the lens that technology in itself has no moral or ethical value, the user is the one who gives it value. However, this ethical argument falls short for many others - especially in today’s age of technological boom: there are far too many invasive technologies that carry their own ethical concerns to wave them off as value-neutral. We must moderate these technologies to allow them to fit into the ethical bounds that we desire for society [4].

Examples

Global Affairs

Deepfakes have numerous positive applications in media and the arts, such as in the new Star Wars series, which used deepfake technology to depict characters in their youth and even to replace characters whose actors had previously died. However, there are many other examples of serious significance in which deepfakes serve malicious purposes, which is of prominent concern [2]. One such example is that "on March 16, 2022, a one-minute long deepfake video depicting Ukraine’s president Volodymyr Zelenskyy seemingly telling his soldiers to lay down their arms and surrender during the 2022 Russian invasion of Ukraine was circulating on social media" [1]. After the video was revealed to be fake, Facebook and YouTube removed it, while Twitter labeled it as artificial on its platform. However, while it was circulating, Russian media boosted it and gave it more credence. This example is unlikely to remain an isolated one: it is easy to imagine deepfake images and videos playing a role in manipulating political and military conflicts, with reverberating effects on millions of citizens. There is a serious need to regulate and identify deepfake productions to mitigate malicious intervention in global affairs.

Politics

Another concern of deepfakes’ critics is the damage these artificial videos can do even after they have been debunked: "on January 6, 2020, Representative Paul Gosar of Arizona tweeted a deepfake photo of Barack Obama with Iranian president Hassan Rouhani with the caption, ‘The world is a better place without these guys in power,’ presumably as a justification of the killing of Iranian General Soleimani" [4]. When questioned about circulating a deepfake image to his large base of followers, Representative Gosar claimed that he never stated the photo was real. Yet despite being debunked as artificial, the image likely still had its intended effect of slandering Barack Obama. Research suggests that such gaslighting can be effective at swaying people's perceptions: the strength of political campaigns lies not so much in persuading voters to change their views as in reinforcing voters' existing political views and encouraging citizens to vote [4]. Critics of deepfake technology therefore worry that influential individuals and politicians could share artificial images with their follower base to sway opinion in one direction or another. Even if the image is eventually identified as artificial, it would likely have already had its intended effect of swaying voters or hurting an opponent's campaign. One can imagine further nefarious applications of deepfakes, particularly involving defamation in the same vein as Representative Gosar's tweet, and it is not unreasonable to expect new laws addressing defamation with deepfake images, both within political campaigns and otherwise.

Blackmail

Additionally, the American Congressional Research Service has warned of the possibility of blackmail with deepfake images. The research group discussed the possibility of individuals using deepfakes to blackmail politicians who have access to classified information and documents [5]. Blackmail would pose a serious threat to the safety of citizens globally, as well as to major corporations such as Google or Meta, which store data on hundreds of millions of users, and to governments and political entities. Both individual actors and foreign governments would be able to create these deepfakes and use them as leverage over another individual or group. Critics of deepfakes argue that it is imperative to quickly regulate and place limitations on the creation and distribution of deepfakes for fear of such a blackmail attack.


Identifying Deepfakes

There have been some attempts to identify deepfake images more easily and effectively. At present, quickly identifying deepfakes remains challenging, and social media platforms such as Twitter and Facebook would benefit greatly from being able to flag deepfake content on their sites promptly.
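One family of detection approaches reported in the research literature exploits the fact that GAN upsampling can leave unusual high-frequency artifacts in generated images. The sketch below is a toy heuristic in that spirit, not a production detector: the function, its cutoff parameter, and the synthetic test images are all assumptions chosen for demonstration, and real detectors are far more sophisticated.

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of the image's spectral energy outside a low-frequency disc.

    Toy heuristic: images with anomalous high-frequency content score higher.
    """
    # 2-D power spectrum with the zero frequency shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = radius <= cutoff * min(h, w)     # disc of "low" frequencies
    return float(spectrum[~low].sum() / spectrum.sum())

rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
# Smooth, low-frequency image (a crude stand-in for natural photo content).
smooth = np.sin(2 * np.pi * x / 32) + np.cos(2 * np.pi * y / 32)
# Broadband noise image (a crude stand-in for high-frequency artifacts).
noisy = rng.normal(size=(64, 64))

r_smooth = high_freq_energy_ratio(smooth)
r_noisy = high_freq_energy_ratio(noisy)
print(r_smooth, r_noisy)  # the noisy image scores much higher
```

A platform-scale detector would of course combine many such signals with learned models; this only illustrates the kind of measurable statistical fingerprint that automated identification efforts look for.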

Ownership

  1. Wikimedia Foundation. (2023, January 20). Deepfake. Wikipedia. Retrieved January 24, 2023, from https://en.wikipedia.org/wiki/Deepfake
  2. Great Learning Team. (2022, November 21). All you need to know about deepfake AI. Great Learning Blog. Retrieved January 24, 2023, from https://www.mygreatlearning.com/blog/all-you-need-to-know-about-deepfake-ai/
  3. Alter, A. (2018). Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin Books.
  4. Seattle University. (n.d.). Deepfakes and the value-neutrality thesis. Retrieved January 24, 2023, from https://www.seattleu.edu/ethics-and-technology/viewpoints/deepfakes-and-the-value-neutrality-thesis.html
  5. Adee, S. (2022, August 18). What are deepfakes and how are they created? IEEE Spectrum. Retrieved February 9, 2023, from https://spectrum.ieee.org/what-is-deepfake