'''Digital Assistants''', often referred to as [https://en.wikipedia.org/wiki/Virtual_assistant virtual assistants], are artificially intelligent systems that understand and carry out tasks for humans. They are expected to handle basic, routine tasks so that humans have more time for complex ones. Digital assistants are extremely popular in today’s technological age, with the most popular being [https://www.amazon.com/b?ie=UTF8&node=21576558011 Amazon’s Alexa], [https://www.apple.com/siri/ Apple's Siri], [https://www.microsoft.com/en-us/cortana Microsoft's Cortana], [https://www.ibm.com/watson IBM’s Watson], and [https://store.google.com/us/product/google_home?hl=en-US Google Home]<ref> Gray, P. (2019, September 16). The Rise of Intelligent Virtual Assistants. Interactions. https://www.interactions.com/blog/intelligent-virtual-assistant/rise-intelligent-virtual-assistants/. </ref>. Digital assistants may offer different methods of interacting with the user, such as voice, text, and uploading images<ref> Computer Hope. (2021, December 30). What is the Digital Assistant? Digital Assistant. https://www.computerhope.com/jargon/d/digital-assistant.htm. </ref>.
Digital assistants can make humans' daily activities easier by automating tasks, but numerous ethical concerns are tied to them<ref name="Wilson"> Wilson, R., &amp; Iftimie, I. (2021). Virtual assistants and privacy: An anticipatory ethical analysis. 2021 IEEE International Symposium on Technology and Society (ISTAS). https://doi.org/10.1109/istas52410.2021.9629164 </ref>. One of the biggest concerns is privacy and who has access to the information collected by the device. These devices can also threaten the security and privacy of the owner, as they hear and record voices in the vicinity even when they are not being used<ref name="Wilson" />.
==History of Digital Assistants==
Humans started communicating with machines back in the 1960s. One of the very first natural language processing programs, ELIZA, was developed by MIT professor Joseph Weizenbaum. Soon after, IBM developed the Shoebox, which could recognize human speech and served as a voice-activated calculator. The next breakthrough in this space came in the 1970s with "Harpy," which understood around 1,000 spoken English words. In the following decade, IBM capitalized on the success of the Shoebox and introduced "Tangora," a voice-recognizing typewriter that could recognize around 20,000 English words.
==How a Digital Assistant Works==
Digital assistants use machine learning algorithms that improve continuously. Their workings can essentially be broken down into three steps. The first step converts human speech into text the computer program understands, using Natural Language Processing (NLP) and Natural Language Understanding (NLU). The second step converts the text into an intent, so that the computer understands what exactly the user means by the voice or text command; this typically requires the assistant to be connected to the internet. The third step converts the intent into an action: the assistant sends the user's request to its servers, where it is broken down into smaller commands and compared against similar requests. Once this process is complete, a response is sent back to the assistant, which communicates it to the user.
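The three steps above can be sketched in code. This is a minimal, illustrative toy, not any vendor's actual pipeline: the function names, the fixed transcript, and the keyword-matching "understanding" stage are all assumptions standing in for real speech-recognition and NLU models.

```python
# Toy sketch of the speech -> text -> intent -> action pipeline.
# All names and logic here are illustrative assumptions.

def speech_to_text(audio: bytes) -> str:
    # Step 1: a real assistant runs a speech-recognition model here;
    # we return a fixed transcript for illustration.
    return "set an alarm for 7 am"

def text_to_intent(text: str) -> dict:
    # Step 2: map the transcript to a structured intent.
    # Real systems use NLU models; we use keyword rules.
    if "alarm" in text:
        return {"intent": "set_alarm", "slots": {"time": "7:00"}}
    if "weather" in text:
        return {"intent": "get_weather", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def intent_to_action(intent: dict) -> str:
    # Step 3: dispatch the intent to a handler and produce a response,
    # standing in for the server-side processing described above.
    handlers = {
        "set_alarm": lambda s: f"Alarm set for {s['time']}",
        "get_weather": lambda s: "Fetching today's forecast",
    }
    handler = handlers.get(intent["intent"],
                           lambda s: "Sorry, I didn't catch that")
    return handler(intent["slots"])

response = intent_to_action(text_to_intent(speech_to_text(b"")))
print(response)  # Alarm set for 7:00
```

In a real assistant, each of these steps is a large model or service in its own right, and steps two and three usually run on the provider's servers rather than on the device.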
A common issue with digital assistants is inaccurate results. For example, if a user asks a digital assistant to play a certain genre of music and it plays a different one, the user generally commands it to stop immediately. This signals to the device that the response it received from its servers may have been wrong. The feedback is relayed to the servers, where the machine learning algorithms are refined to provide more accurate responses to future commands.
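The correction loop described above can be sketched as labeling each request/response pair with implicit feedback. The data structures and function below are illustrative assumptions, not a real vendor's telemetry format.

```python
# Toy sketch of the implicit-feedback loop: an immediate "stop" after a
# response is treated as a negative label for that response, which the
# server-side models can later be retrained on.

feedback_log = []

def record_feedback(request: str, response: str, user_said_stop: bool) -> None:
    # Label the (request, response) pair: a quick "stop" suggests the
    # assistant's response was wrong for this request.
    label = "negative" if user_said_stop else "positive"
    feedback_log.append({"request": request, "response": response, "label": label})

record_feedback("play jazz", "Playing rock radio", user_said_stop=True)
record_feedback("play jazz", "Playing jazz radio", user_said_stop=False)
print([entry["label"] for entry in feedback_log])  # ['negative', 'positive']
```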
IBM developed the first virtual assistant, Simon. “Simon was a digital speech recognition technology that became a feature of the personal computer with IBM and Phillips”. In the following years, the first chatbots appeared on messaging platforms of the time, such as AIM and MSN Messenger. Apple acquired Siri in 2010 and integrated it into the iPhone in 2011, a major breakthrough in how humans interact with digital assistants. Google, Amazon, IBM, Microsoft, and other technology companies followed Apple and introduced their own digital assistants to the general public.
==Functions of a Digital Assistant==
Digital assistants provide the user with a variety of functions. Although each digital assistant has its own set of functions and services, some functions common across all of them are listed below:
* Find information online such as weather details, latest news, sports scores, etc.
* Make reminders and set alarms
* Send messages and make calls
* Help with playing music through various streaming services such as Spotify, Apple Music, etc.
* Control smart devices such as smart lights, locks, etc.
* Provide translation of languages on the go
* Make restaurant reservations and book taxi services like Uber and Lyft
* Read notifications and allow you to respond to them if needed
* Act as a digital encyclopedia by looking up answers to questions
==Ethical Concerns==
===Privacy===
Most digital assistants require voice activation. For example, the Amazon Echo, which runs Amazon Alexa, always listens for the word “Alexa.” This means these devices are constantly listening to their surroundings, and they are often activated by words that merely sound similar to their wake words. “This situation raises privacy concerns for end users who may have private communication that address for e.g. their financial, their emotional, or their health issues recorded without their consent. Unintended voice recordings can also be the result of malicious voice commands”.
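The always-listening behavior can be illustrated with a toy wake-word monitor. The shared-prefix "similarity" check below is an assumption, a crude stand-in for a real on-device acoustic model; it exists only to show why similar-sounding words cause false activations.

```python
# Toy sketch of wake-word monitoring: every transcribed audio frame is
# checked against the wake word, so the microphone is effectively always
# on. The prefix match is an illustrative stand-in for acoustic matching.

WAKE_WORD = "alexa"

def sounds_like_wake_word(word: str) -> bool:
    # Crude "acoustic" match: accept words sharing the first three letters.
    # This is why near-homophones can trigger the device by mistake.
    return word[:3] == WAKE_WORD[:3]

def monitor(transcribed_frames: list) -> list:
    activations = []
    for word in transcribed_frames:   # in a real device this loop never ends
        if sounds_like_wake_word(word):
            activations.append(word)  # recording and cloud upload start here
    return activations

print(monitor(["turn", "alexa", "alexander", "lights"]))  # ['alexa', 'alexander']
```

Note how "alexander" activates the toy monitor just as "alexa" does, mirroring the false activations described above.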
Furthermore, digital assistants are essentially clients that send requests to cloud servers, which carry out the request for the user. The companies that make these digital assistants have access to users' recordings. Although users can delete these recordings from the cloud, doing so risks degrading their experience with the assistant. French sociologist Antonio A. Casilli states that it raises a significant privacy concern when digital assistant companies use customer recordings to further train their assistants and improve the accuracy of their results; more often than not, people are unaware this happens. Newspaper reports have also claimed that Google employees listen to the voice recordings picked up by Google Home devices. These recordings include conversations that should not have been recorded in the first place and might contain sensitive information. Google has since issued many statements assuring users that its employees do not listen to conversations, and that recordings are made only to improve existing assistants and provide a better user experience.
There is also a significant privacy concern for people who are merely in the vicinity of the digital assistant. For example, a digital assistant may record conversations taking place in the background of the legitimate user, which may include personal details such as business dealings and private conversations.
Former employees and current executives of some of the big technology companies developing digital assistants have raised concerns about the data these companies collect and whether collecting and using it is ethically permissible. For instance, a former Amazon employee told The Guardian, “Having worked at Amazon, and having seen how they used people’s data, I knew I couldn’t trust them”, after Alexa seemed to be repeating previous requests that had already been completed.
===Security===
Digital assistants can pose a great security risk to their users. Because they are usually activated by voice commands, a TV, music player, or radio can cause the device to become active. Once active, they record everything and constantly communicate with their servers. This can lead to the device eavesdropping and carrying out instructions based on what it hears, even when it is not directly spoken to. One example is when Amazon’s Alexa recorded part of a couple's private conversation and sent it to a random person on their contact list. Amazon released a statement following the incident saying that this was due to misinterpreted speech. “Researchers have found some speakers activating by mistake up to 19 times each day.”
Hackers and other malicious actors have exploited security flaws in digital assistants to gain valuable information; for example, they could place orders, send malicious messages, or leak credit card information. “Attackers often find ways into a home through the router or an unsecured wireless network. Voice assistants add another vector that allows them to bridge attacks, using an audio device—such as a TV or even a loud car radio on the street—to issue commands to the devices.” One technique hackers have discovered uses inaudible ultrasonic waves to hack into digital assistants; these are called “surfing attacks”. Some digital assistants have introduced an authentication method that allows only registered users to activate the device. In response, hackers developed a “method called ‘phoneme morphing’, where the device is tricked into thinking a modified recording of an attacker’s voice is the device’s registered user”.
Digital assistants can be connected to other smart devices in the home, such as smart bulbs, alarms, locks, and security cameras. This poses an additional security risk, as a hacker may gain access to these devices as well. For example, many smart locks can be locked and unlocked through an Alexa or a Google Home by simply providing a PIN, and that PIN can be compromised in numerous ways that endanger the physical security of the user. Additionally, researchers have found a way to hack digital assistants using lasers: light directed at the device's microphone sensor can be mistaken for sound. Researchers at the University of Michigan successfully controlled a Google Home device from 230 feet away, opening doors and switching off the lights of the house, among other activities. Although this attack requires many specific conditions to execute successfully, it poses a multitude of security concerns for the user.
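The weakness of a spoken PIN can be made concrete with a toy model. The lock class below is hypothetical, not any real smart-lock API; it simply shows that a four-digit PIN with no attempt limit falls to exhaustive guessing in at most 10,000 tries, which is why attempt limits and secondary authentication matter.

```python
# Toy illustration of the voice-PIN risk: a hypothetical lock that
# accepts unlimited guesses is brute-forced over the full PIN space.

class VoicePinLock:
    def __init__(self, pin: str):
        self._pin = pin
        self.attempts = 0  # no lockout: every guess is accepted

    def try_unlock(self, guess: str) -> bool:
        self.attempts += 1
        return guess == self._pin

lock = VoicePinLock("4831")
opened = False
for guess in (f"{i:04d}" for i in range(10000)):  # "0000" .. "9999"
    if lock.try_unlock(guess):
        opened = True
        break

print(opened, lock.attempts)  # True 4832
```

Spoken guesses are far slower than this loop, but the point stands: without rate limiting, the PIN's secrecy is the only barrier, and the earlier eavesdropping concerns show how easily a spoken PIN can leak.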
===Reinforcing Gender Biases===
“Women are expected to assume traditional “feminine” qualities to be liked, but must simultaneously take on—and be penalized for—prescriptively “masculine” qualities, like assertiveness, to be promoted. As a result, women are more likely to both offer and be asked to perform extra work, particularly administrative work—and these “non-promotable tasks” are expected of women but deemed optional for men.” Furthermore, women are perceived as caregivers, which is why there “seems to be a tendency to associate feminine voices with warm and tender figures.”
Digital assistants, especially voice assistants, typically default to a female-sounding voice, which reinforces gender biases in today’s society. Among the four big technology companies that make these assistants, Amazon's Alexa, Apple’s Siri, Microsoft’s Cortana, and Google’s Google Assistant all default to female-sounding voices. Researchers have concluded that users treat digital assistants according to the gender roles and stereotypes associated with their male- or female-sounding voices. This also reflects how male-dominated teams view and think about women, often forming a connection between women and subservience.
A researcher at Harvard University who studies unconscious bias says that every time we order a female-sounding digital assistant to carry out a task, be it ordering something online or booking a flight ticket, the stereotype associating women with assistants is strengthened.
Thus, by using feminine voices, digital assistants are reinforcing gender stereotypes.
===Misuse of Data===
The data collected by the companies that manufacture digital assistants can be used for various purposes without the user's knowledge. One of the main concerns is that the sensitive data available to digital assistants can be accessed by the manufacturers, or by other third parties, to provide certain functionalities. “The users are generally ignorant as to with whom the data are shared, this being referred to as the breach of the data sharing privacy.”
<blockquote>
“If the user data is shared with malicious third parties, in that case, it can be used for some criminal purposes giving rise to criminal-intent privacy. Whatever data is collected by the VAs, the users are the data-owners and, therefore, have the right to decide whether they want to keep them for further processing or delete them. In the absence of such a scenario, it gives rise to user-rights privacy. Many times, before using a service, a user must agree to certain terms and conditions, often not realizing the extent to which their personal information can be collected and utilized, thereby breaching the user-consent privacy.”
</blockquote>
Research has shown that audio data can be extremely valuable for inferring information. Various characteristics of the user, such as their body measurements, age, gender, personality traits, and physical health, can be inferred from their audio alone. “Additional sounds produced by the end users (e.g., coughing and laughing) and background noises (e.g., pets or vehicles) provide further information”. This places greater responsibility on manufacturers and service providers to respect customers' privacy and not share their data with third parties or use it in ways that negatively impact users.
==References==
<references />
