Deepfake Ethics

Deepfakes are a type of artificial production in which real images and videos are overlaid with other images to create a false production or visual [1]. The idea of creating fake images and videos is not new: internet users have been digitally altering images with software such as Photoshop for years. The significance of the deepfake, however, lies in the quality with which it is created. Advances in machine learning and artificial intelligence, including generative neural networks such as autoencoders and Generative Adversarial Networks (GANs), have allowed for highly realistic productions, many of which are challenging to identify as artificial [1]. Many deepfakes are created using two algorithms working in conjunction: a generator and a discriminator. The generator's role is to create variations of the fake content to be judged by the discriminator. The discriminator then tries to determine whether the image can be identified as artificial, and if it can, sends this signal back to the generator to improve upon in the next iteration of the deepfake [2]. This adversarial loop is the foundation of the Generative Adversarial Networks often used to create deepfakes, which have been heavily improved upon in recent years to produce highly realistic artificial productions.
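
To make the generator-discriminator loop concrete, the following is a minimal sketch in PyTorch. The network sizes, placeholder data, and training details are illustrative assumptions for a toy example, not the architecture of any real deepfake system.

```python
# Minimal sketch of the generator/discriminator loop described above
# (illustrative sizes only; real deepfake models are far larger).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy dimensions

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: the discriminator's verdict is "sent back" as a
    # gradient signal, pushing the next iteration to look more real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Example: one step on a batch of placeholder "real" images in [-1, 1].
training_step(torch.rand(32, image_dim) * 2 - 1)
```

Each pass through this loop tightens the contest: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.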

Backlash

Despite the potential for artistic and creative uses of this new technology, deepfakes have garnered significant backlash in recent years for their ability to produce, among other things, celebrity pornography, revenge pornography, fake news, bullying, defamation, and blackmail, all with the possibility of serious harm [1]. This has raised a variety of ethical concerns surrounding the usage and proliferation of harmful deepfakes, as well as questions about who owns a person's face and voice once they die.

Ethical Arguments

In discussing the ethical concerns of deepfakes, one of the first distributors of the technology stated, “I’ve given it a lot of thought, and ultimately I’ve decided I don’t think it’s right to condemn the technology itself – which can of course be used for many purposes, good and bad” [3]. This is a common argument in the technology ethics debate: technology is value-neutral, and only the user decides whether it is put to good or bad ends. Many people sit in this camp defending the merits of deepfake technology, reasoning that technology in itself has no moral or ethical value and that the user is the one who gives it value. For many others, however, this argument falls short, especially amid today's technological boom: too many invasive technologies carry ethical concerns of their own to be waved off as value-neutral. These critics hold that we must moderate such technologies so that they fit within the ethical bounds we desire for society [4].

Many proponents of deepfakes, however, argue that the potential benefits the technology can bring to society far outweigh its potential downsides. They see deepfakes as a technology with the potential to revolutionize the arts, media, and education, arguing that deepfake technologies “hold the potential to integrate AI into our lives in a more humanizing and personal manner” [5]. It is not hard to imagine, for instance, advertisers using deepfake videos of celebrities endorsing their products rather than filming the advertisements with the celebrities themselves. Other proponents who are more aware of the potential downsides still voice support for the technology, arguing that deepfake detection will only improve over time and will be able to keep malicious usages in check [5].

However, a variety of malicious usages of deepfakes are already occurring today and can cause irreversible damage even when identified as artificial. One such example is deepfake revenge pornography: it is not difficult to imagine an ex-partner creating a humiliating video of their former partner and sharing it on the internet. According to the Prindle Institute, “this issue is incredibly pressing and might be more prevalent than the other potential harms of deepfakes. One study, for example, found that 96% of existing deepfakes take the form of pornography” [5]. Regardless of whether technology reaches a point where it can immediately flag deepfakes as fraudulent, such usages would be incredibly humiliating and damaging to a person's reputation and self-esteem. Without a way to moderate what can and cannot be created using deepfake technologies, critics argue, there is no way to use the technology safely.


Examples

Global Affairs

Deepfakes have numerous positive applications in media and the arts; recent Star Wars series, for example, used deepfake techniques to depict characters in their youth and even to recreate characters whose actors had died. However, many other examples of serious significance use deepfakes for malicious purposes, which is of prominent concern [2]. On March 16, 2022, a deepfake video of the President of Ukraine, Volodymyr Zelensky, appearing to tell his soldiers to stop fighting and surrender during the Russian invasion of Ukraine was spread on social media [1]. After the video was revealed to be fake, Facebook and YouTube removed it, while Twitter labeled it as artificial and fake on its platform. While it circulated, however, Russian media boosted it and gave it more credence. This example is unlikely to remain isolated: it is easy to imagine deepfake images and videos playing a role in manipulating political and military conflicts, with reverberating effects on millions of citizens. There is a serious need to regulate and identify deepfake productions to mitigate malicious intervention in global affairs.

Politics

Another concern of deepfakes’ critics is the damage these artificial videos can do even after they have been debunked: “on January 6, 2020, Representative Paul Gosar of Arizona tweeted a deepfake photo of Barack Obama with Iranian president Hassan Rouhani with the caption, ‘The world is a better place without these guys in power,’ presumably as a justification of the killing of Iranian General Soleimani” [4]. When asked about circulating a deepfake image to his large base of followers, Representative Gosar claimed that he never stated the photo was real. Yet despite being debunked as artificial, the image likely still had its intended effect of slandering Barack Obama. Research suggests that such tactics can be effective at swaying perceptions: the strength of political campaigns lies not so much in persuading voters to change their views as in reinforcing voters’ current political views and encouraging citizens to vote [4]. Critics of deepfake technology therefore worry that influential individuals and politicians could share artificial images with their follower base to sway opinion, and that even if an image is eventually identified as artificial, it will already have swayed voters or hurt an opponent’s campaign. Given the potential for defamation in the vein of Representative Gosar’s tweet, both within political campaigns and otherwise, it is not unreasonable to expect new laws to be passed regarding defamation with deepfake images.

Blackmail

Additionally, the American Congressional Research Service has warned of the possibility of blackmail with deepfake images, discussing the risk that individuals could blackmail politicians who have access to classified information and documents [1]. Such blackmail would pose a serious threat to the safety of citizens globally, to major corporations such as Google or Meta, which store data on hundreds of millions of users, and to governments and political entities. Both individual actors and foreign governments would be able to create these deepfakes and use them as leverage over another individual or group. Critics of deepfakes argue that it is imperative to quickly regulate and place limitations on the creation and distribution of deepfakes for fear of such a blackmail attack.


Identifying Deepfakes

There have been some attempts to identify deepfake images more easily and effectively. At present, quickly identifying deepfakes remains challenging, and it would serve social media sites such as Twitter and Facebook well to be able to do so on their platforms.

One approach to detecting deepfake images and videos has been to crowdsource solutions. The data science platform Kaggle hosted a competition to create a machine learning or deep learning model able to successfully detect deepfakes. The competition carried a $1 million prize and was organized by several big tech companies, including Amazon, Meta, and Microsoft, along with academic researchers. Among over 2,000 submissions, the winning model achieved an accuracy of 65% when discerning between real and fake media [6]. Many models examine heart rate, breathing, blinking patterns, facial expressions, and other subtle human movements, essentially attempting to identify irregularities in these patterns that mark the “unnatural” humans [6]. Clearly, it is very difficult to discern fake images, even with the most capable artificial intelligence methods and technologies that researchers, both academic and casual, have access to.
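
To illustrate the general shape of such detectors, the sketch below scores individual face crops with a small convolutional network. This is a toy example under assumed input sizes; the actual competition entries were far larger and also exploited temporal cues such as blinking and facial motion across frames.

```python
# Toy frame-level deepfake detector: a small CNN that scores a face crop
# as real (0) or fake (1). Illustrative only; not the winning DFDC model.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),  # assumes 64x64 RGB crops
)

def score_frames(frames: torch.Tensor) -> torch.Tensor:
    """Return per-frame fake probabilities for a batch of 64x64 RGB crops."""
    with torch.no_grad():
        return detector(frames).squeeze(1)

# Example: average per-frame scores into one video-level verdict
# (placeholder frames; an untrained network gives meaningless scores).
video_frames = torch.rand(8, 3, 64, 64)
print("fake probability:", score_frames(video_frames).mean().item())
```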

Other groups have attempted to identify deepfakes through a combination of human perception and machine learning models. Humans identifying deepfakes on their own and models running on their own each performed at a mediocre level, with humans and the leading computer vision deepfake detection systems achieving very similar accuracy rates. Humans and machine learning models working in conjunction, however, identified deepfake images and videos at the highest rate: when human participants had access to a model’s predictions and confidence levels (which, as previously noted, are not always right), accuracy was highest. To train these models and assess these results, researchers used the largest open-source dataset available, the Deepfake Detection Challenge (DFDC) dataset, “which consists of 23,654 original videos showing 960 consenting individuals and 104,500 corresponding deepfakes produced from the original videos” [6].
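
A minimal sketch of that human-plus-model setup is below: the model's probability and confidence are shown to a reviewer, and the two judgments are blended. The confidence-weighting scheme here is an illustrative assumption, not the procedure used in the PNAS study.

```python
# Blend a model's fake-probability with a human reviewer's judgment,
# weighting the model by its own confidence. The weighting rule is an
# assumed illustration, not the study's actual aggregation method.
def combined_verdict(model_prob: float, model_confidence: float,
                     human_prob: float) -> float:
    """All inputs in [0, 1]; returns a blended fake-probability."""
    w = model_confidence
    return w * model_prob + (1.0 - w) * human_prob

# Example: an uncertain model defers mostly to the human reviewer.
print(combined_verdict(model_prob=0.9, model_confidence=0.3, human_prob=0.2))
# -> 0.41, leaning toward the human's "probably real" judgment
```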

Given the moral ambiguity of deepfakes and their potential for malicious use, it is of the utmost importance for researchers to continue making strides in identifying deepfake images and videos. Deepfake technology prompts a variety of moral and ethical questions, which in turn pose research questions that, as described above, researchers have begun to answer in recent years.


Consent

Another major and unprecedented ethical dilemma that arises with deepfakes is consent when an individual’s voice or image is reproduced. In the past, a video, image, or audio recording of a person implied that the individual had consented to the actions or words it captured. With deepfakes, no such consent is implied. Given how difficult deepfakes are to identify, as discussed in the section above, consent to reproduction becomes a potent ethical topic.

Many groups argue that no deepfake should be created without the explicit consent of the individual or individuals being imitated. Due to the realism of these artificial productions, the producer of a deepfake is in essence stealing an individual’s likeness without their explicit consent [7], a serious infringement on an individual’s right to own their own actions and speech. Given the difficulty of obtaining consent in matters such as these, especially considering how often deepfakes target celebrities and politicians, it could be very difficult to create an ethical deepfake with full consent from all parties in the artificial recreation.

Ownership

The idea of consent leads to a similar dilemma: who owns the likeness and voice of an individual when creating a deepfake? An easy-to-imagine scenario arises when an individual passes away and is no longer able to consent to their voice or image being used in a deepfake or similar artificial reproduction. Is the family of the deceased then allowed to take ownership of their voice and image? Or should nobody be allowed to recreate a person’s voice or image after death, since they do not technically own it and the individual is no longer alive to consent?

In the film “Roadrunner: A Film About Anthony Bourdain,” created after the famous chef’s passing, director Morgan Neville “commissioned a software company to create an AI version of Bourdain’s voice” [7]. Only three lines were artificially created, but the decision drew the ire of many fans and viewers, so much so that it became a trending topic on Twitter. After all, these individuals argued, how could Neville take it upon himself to dictate what Anthony Bourdain would say if he were alive today? Neville does not own Bourdain’s voice and therefore has no right to recreate it through artificial deepfake methods [7]. According to these critics, it is imperative that governments establish clear and defined laws regarding the ownership of voice and image in today’s age of deepfakes, or this clear and present ethical dilemma will only snowball into more consequential and troubling issues.

These are all reasonable questions, and they arise from just one scenario; ownership of one’s image and likeness in general remains ambiguous and hard to define.

References

  1. Adee, S. (2022, August 18). What are deepfakes and how are they created? IEEE Spectrum. Retrieved February 9, 2023, from https://spectrum.ieee.org/what-is-deepfake
  2. Great Learning Team. (2022, November 21). All you need to know about deepfake AI. Great Learning Blog. Retrieved January 24, 2023, from https://www.mygreatlearning.com/blog/all-you-need-to-know-about-deepfake-ai/
  3. Alter, A. (2018). Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin Books.
  4. Seattle University. (n.d.). Deepfakes and the value-neutrality thesis. Retrieved January 24, 2023, from https://www.seattleu.edu/ethics-and-technology/viewpoints/deepfakes-and-the-value-neutrality-thesis.html
  5. Goodwine, K. (2022, December 14). Ethical considerations of deepfakes. Prindle Institute. Retrieved February 9, 2023, from https://www.prindleinstitute.org/2020/12/ethical-considerations-of-deepfakes/
  6. Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2022). Deepfake detection by human crowds, machines, and machine-informed crowds. PNAS. Retrieved February 9, 2023, from https://www.pnas.org/doi/10.1073/pnas.2110013119
  7. Future Media Hubs. (2022, June 30). The ethical use of voice AI: The gains and pitfalls of verbal communication through artificial intelligence. Retrieved February 9, 2023, from https://www.futuremediahubs.com/future-media-hubs/news/ethical-use-voice-ai-gains-and-pitfalls-verbal-communication-through