Deepfakes are a type of artificial media production in which real images and videos are overlaid with other images to create a false visual (Wikimedia, 2023). Creating fake images and videos is not a new phenomenon: internet users have digitally altered images with software such as Photoshop for years. The significance of the deepfake, however, lies in the quality with which it is created. Advances in machine learning and artificial intelligence, including generative neural networks such as autoencoders and Generative Adversarial Networks (GANs), have made possible highly realistic productions, many of which are challenging to identify as artificial (Wikimedia, 2023).

Deepfakes are created using two algorithms working in conjunction: a generator and a discriminator. The generator's role is to create variations of the fake content to be judged by the discriminator. The discriminator then tries to determine whether the content can be identified as artificial; if it can, that feedback is sent back to the generator to improve upon in the next iteration of the deepfake (Team, 2022). This adversarial loop is the foundation of the Generative Adversarial Networks often used to create deepfakes, which have been heavily refined in recent years to produce highly realistic artificial productions.
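As a rough illustration of this generator-discriminator loop, the sketch below trains a toy GAN in PyTorch. It is a minimal sketch only: the layer sizes, latent dimension, learning rates, and stand-in data are assumptions chosen for brevity, not the architecture of any particular deepfake system.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce fake
# samples while a discriminator learns to tell real from fake. Real deepfake
# systems use much larger image models; the dimensions here are placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 images (assumption)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Discriminator: learn to score real data high and generated data low.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Generator: the discriminator's judgment is the feedback signal; the
    #    generator is updated so its fakes score as "real" next time.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with random stand-in "real" data scaled to [-1, 1]:
# d_loss, g_loss = training_step(torch.rand(32, data_dim) * 2 - 1)
```

Each call to training_step plays one round of the adversarial game described above; over many rounds the generator's output becomes harder for the discriminator (and, by extension, a human viewer) to distinguish from real data.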

Despite the potential for artistic and creative uses of this new technology, deepfakes have garnered significant backlash in recent years for their ability to produce, among other things, celebrity pornography, revenge pornography, fake news, bullying, defamation, and blackmail, all with the possibility of serious harm (Wikimedia, 2023). This has raised a variety of ethical concerns surrounding the use and proliferation of harmful deepfakes, as well as questions about who owns a person's face and voice once they die.

In discussing the ethical concerns of deepfakes, one of the technology's first distributors stated, “I’ve given it a lot of thought, and ultimately I’ve decided I don’t think it’s right to condemn the technology itself – which can of course be used for many purposes, good and bad” (Alter, 2018). This is a common argument in the technology ethics debate: technology is value-neutral, and only the user decides whether it is put to good or bad use. Many people sit in this camp, defending the merits of deepfake technology on the grounds that technology in itself has no moral or ethical value and that the user is the one who gives it value. For many others, however, this argument falls short, especially amid today's technological boom: there are far too many invasive technologies carrying their own ethical concerns to wave them all off as value-neutral. On this view, such technologies must be moderated so that they fit within the ethical bounds we desire for society (Seattle University, n.d.).

Deepfakes have numerous positive applications in media and the arts. The recent Star Wars series, for example, used deepfake technology to depict characters in their youth and even to recreate characters whose actors had died. However, there are also examples of deepfakes being used for malicious purposes of serious consequence, which is a prominent concern (Team, 2022). In one such case, “on March 16, 2022, a one-minute long deepfake video depicting Ukraine’s president Volodymyr Zelenskyy seemingly telling his soldiers to lay down their arms and surrender during the 2022 Russian invasion of Ukraine was circulating on social media” (Wikimedia, 2023). After the video was revealed to be fake, Facebook and YouTube removed it, while Twitter labeled it as artificial on its platform. While it was circulating, however, Russian media boosted it and gave it more credence. This example is unlikely to remain an isolated one: it is easy to imagine deepfake images and videos playing a role in manipulating political and military conflicts, with reverberating effects on millions of citizens. There is a serious need to regulate and identify deepfake productions to mitigate malicious intervention in global affairs.

Another concern raised by critics of deepfakes is the damage these artificial images and videos can do even after they have been debunked: “on January 6, 2020, Representative Paul Gosar of Arizona tweeted a deepfake photo of Barack Obama with Iranian president Hassan Rouhani with the caption, ‘The world is a better place without these guys in power,’ presumably as a justification of the killing of Iranian General Soleimani” (Seattle University, n.d.). When questioned about circulating a deepfake image to his large base of followers, Representative Gosar claimed that he had never stated the photo was real. Yet even though the image was debunked as artificial, it likely still had its intended effect of defaming Barack Obama.

Research suggests that this kind of gaslighting can be effective at swaying people's perceptions. The strength of political campaigns lies not so much in persuading voters to change their views as in reinforcing voters' existing political views and encouraging citizens to vote (Seattle University, n.d.). Critics of deepfake technology therefore worry that influential individuals and politicians could share artificial images with their follower bases to sway opinion one way or the other. Even if an image is eventually identified as artificial, it would likely have already had its intended effect of swaying voters or hurting an opponent's campaign. One can imagine many nefarious applications of deepfakes, particularly defamation in the same vein as Representative Gosar's tweet, and it would not be unreasonable to expect new laws regarding defamation with deepfake images, both within political campaigns and beyond.

Additionally, the American Congressional Research Service has warned of the possibility of blackmail with deepfake images, discussing in particular the prospect of blackmailing politicians who have access to classified information and documents (Wikimedia, 2023).

There have been some attempts to identify deepfake images more easily and effectively. At present, quickly identifying deepfakes remains challenging, and social media platforms such as Twitter and Facebook would be well served by the ability to quickly flag deepfake images on their sites.
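Detection efforts often frame the problem as binary classification: train a model on known real and known fake images, then flag new uploads that score as fake. The sketch below is a hypothetical illustration in PyTorch; the folder layout ("faces/train" with "real/" and "fake/" subfolders), the choice of ResNet-18, and the decision threshold are assumptions made for the example, not any platform's actual detection pipeline.

```python
# Illustrative deepfake-detection sketch: fine-tune a standard image classifier
# to output real (class 0) vs. fake (class 1). Dataset layout and
# hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a folder with two subdirectories, "real/" and "fake/" (hypothetical path).
train_data = datasets.ImageFolder("faces/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# At inference time, an image could be flagged when its "fake" probability
# exceeds a chosen threshold (0.5 here, an arbitrary assumption):
# probs = torch.softmax(model(image_tensor.unsqueeze(0)), dim=1)
# is_fake = probs[0, 1].item() > 0.5
```

In practice, classifiers like this struggle to generalize to generation methods they were not trained on, which is part of why quick, reliable identification of deepfakes remains difficult.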

Concerns about using a person’s face or voice after they die…

Identifying deepfakes…

Deepfake technology, although powerful and with real possibilities for growth in the entertainment and art industries, carries serious ethical concerns about how it can be used, including fake news, defamation, celebrity pornography, and blackmail.

Sources

Alter, A. (2018). Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin Books.

Seattle University. (n.d.). Deepfakes and the value-neutrality thesis. Retrieved January 24, 2023, from https://www.seattleu.edu/ethics-and-technology/viewpoints/deepfakes-and-the-value-neutrality-thesis.html

Team, G. L. (2022, November 21). All you need to know about deepfake AI. Great Learning Blog. Retrieved January 24, 2023, from https://www.mygreatlearning.com/blog/all-you-need-to-know-about-deepfake-ai/

Wikimedia Foundation. (2023, January 20). Deepfake. Wikipedia. Retrieved January 24, 2023, from https://en.wikipedia.org/wiki/Deepfake