Virtual Assistants

From SI410

Revision as of 11:45, 16 April 2019

Voice assistants are artificial intelligence systems that perform human-like tasks for users, similar to a personal assistant.[1] Some of the most popular voice assistants include Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google Home. Many virtual assistants can carry out a number of tasks in response to a voice command, such as giving the weather, playing music, or controlling smart home devices.

Popular Voice Assistants

Amazon Alexa Voice System

Often known simply as Alexa, the Alexa Voice System is a virtual assistant created by Amazon. It works on many different Amazon devices, such as the Amazon Echo, the Echo Dot, and the Amazon Tap, among other Alexa smart home devices. Upon saying the wake word, which is "Alexa" by default, the user can give a command, which the assistant will then carry out.
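The wake-word flow described above can be sketched as a simple rule: an utterance is ignored unless it begins with the wake word, and everything after the wake word becomes the command. The transcripts and function below are hypothetical stand-ins for a real speech-recognition pipeline.

```python
WAKE_WORD = "alexa"  # default wake word; users can choose another

def extract_command(transcript, wake_word=WAKE_WORD):
    """Return the command following the wake word, or None if it is absent."""
    words = transcript.lower().split()
    if not words or words[0] != wake_word:
        return None  # no wake word: the utterance is ignored
    return " ".join(words[1:])

# An utterance without the wake word is ignored; with it, the rest is the command.
print(extract_command("what time is it"))        # → None
print(extract_command("Alexa play some music"))  # → "play some music"
```

In a real device this gating happens on low-power hardware before any audio is sent to the cloud, which is why the wake word matters for privacy as well as usability.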

Among the many tasks Amazon Alexa can perform are answering questions, reading headlines from your favorite news outlets, setting timers and alarms, controlling certain smart home devices, streaming music, and relaying weather or traffic information.[2]

Some of the tasks that the Alexa Voice System cannot perform are distinguishing between different voices, sending audio messages, and saying swear words.[3]

Alexa can also place orders on amazon.com via voice control. Ethical concerns arise here because there is no "validation" step of the kind that usually precedes a purchase. There are, however, ways to disable the ordering function altogether on your Alexa, or to require confirmation codes before placing an order.
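The safeguards described above amount to a gate in front of the purchase: ordering can be switched off entirely, or an order can be required to match a spoken confirmation code. A minimal sketch of that logic, with invented function and message names:

```python
def place_order(item, spoken_code=None, account_code=None, ordering_enabled=True):
    """Gate a voice purchase behind the account's settings and confirmation code.

    Names and messages are invented for illustration; they do not reflect
    Amazon's actual implementation.
    """
    if not ordering_enabled:
        return "Voice ordering is disabled on this account."
    if account_code is not None and spoken_code != account_code:
        return "That confirmation code does not match; the order was cancelled."
    return f"Order placed for {item}."

# With no code configured, any voice command places the order immediately --
# which is exactly the missing "validation" step the concern points at.
print(place_order("paper towels"))
```

The ethical point is visible in the default arguments: unless the user opts in to a code, nothing stands between a spoken sentence and a completed purchase.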

In addition to Alexa's built-in functionality, the Smart Home Skill API allows third-party developers to create apps called skills. By February 2017, 10,000 skills had been created for Alexa.[4] A majority of these skills relate to games, news, and education, but many have few, if any, ratings.[4]
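At its core, a skill is a handler that receives a JSON request describing what the user said and returns JSON describing what the assistant should say back. The sketch below assumes the general shape of Alexa's request/response JSON (launch requests, intent requests, plain-text output speech); treat the exact field names as illustrative rather than authoritative.

```python
def handle_skill_request(event):
    """Map an incoming skill request to a spoken response (illustrative shape)."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # User opened the skill without asking for anything specific.
        text = "Welcome to the example skill."
    elif request.get("type") == "IntentRequest":
        # User asked for a specific capability, identified by intent name.
        intent = request.get("intent", {}).get("name", "unknown")
        text = f"Handling the {intent} intent."
    else:
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

Because the platform only checks that the JSON is well-formed, the quality bar noted above (thousands of skills, many unrated) is easy to understand: publishing a skill requires little more than a handler like this one.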

Apple's Siri

Siri can be used on the iPhone, iPad, iPod touch, and some of the newer versions of the MacBook. To ask Siri a question, you can either hold the home button on your device or say "Hey Siri" followed by your question or command.[5] Apple's website boasts that "Siri does [things for you] even before you ask",[6] a claim that itself carries ethical implications, since anticipating requests requires continuously collecting and analyzing user data.

Google Home

Google Home is a device powered by Google Assistant that can answer questions, follow commands, and control smart devices. Google Home can also distinguish between voices, and can therefore support multiple users with more personalized responses.[7]

Google Home Features [7]

  • Calendar: get your schedule for the day or information about an event
  • Alarm & Timer
  • Calculator
  • Dictionary
  • Multi-room audio
  • Games
  • Music: play music from various music services such as Spotify and Pandora
  • Travel Information: Check flight status, find and track flights
  • Weather & Traffic
  • Translation
  • Unit Conversions
  • TV streaming
  • Thermostat control: can be used with supported smart thermostats
  • My Day: gives a brief overview of your day including weather, reminders, calendar, news, and commute
  • Light control
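A feature list like the one above is, in implementation terms, a mapping from recognized intents to handlers, with a fallback for anything unsupported. A minimal, entirely hypothetical dispatch table:

```python
# Hypothetical handlers standing in for the real feature implementations above.
HANDLERS = {
    "weather": lambda q: "Today will be sunny.",
    "timer": lambda q: f"Timer set for {q}.",
    "translate": lambda q: f"Translating: {q}.",
}

def dispatch(intent, query=""):
    """Route a recognized intent to its feature handler, with a fallback."""
    handler = HANDLERS.get(intent)
    if handler is None:
        # Unsupported features fall through to a polite refusal.
        return "Sorry, I can't help with that yet."
    return handler(query)

print(dispatch("timer", "10 minutes"))  # → "Timer set for 10 minutes."
```

The fallback branch is where a real assistant's personality and limits show: every feature missing from the table becomes a "sorry" response.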

Performance

In 2016, Brian X. Chen, a technology writer for The New York Times, put together a test of 16 tasks and compared Google, Siri, Cortana, and Alexa on how well they carried the tasks out.[8] Google scored the highest, followed by Siri, Cortana, and lastly, Alexa. Chen found that Alexa was best at tasks related to music and that Siri was best at calendar and email tasks.[8] Chen also found small differences among the four virtual assistants: Siri could read email messages aloud, compose emails, and create calendar events, while Google could not read email aloud, and Alexa was unable to create calendar events or compose emails.[8]
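Chen's comparison amounts to a per-assistant scoreboard: run each task on each assistant, mark pass or fail, and rank by total passes. The pass/fail data below is invented for illustration (arranged to match Chen's reported ordering), not his actual results.

```python
# Invented pass/fail outcomes in the spirit of Chen's 16-task test;
# arranged to reproduce his reported ranking, not his actual data.
results = {
    "Google":  [True, True, True, True],
    "Siri":    [True, True, True, False],
    "Cortana": [True, True, False, False],
    "Alexa":   [True, False, False, False],
}

def rank(results):
    """Order assistants by how many tasks they completed successfully."""
    return sorted(results, key=lambda name: sum(results[name]), reverse=True)

print(rank(results))  # → ['Google', 'Siri', 'Cortana', 'Alexa']
```

A tally like this hides the qualitative differences Chen noted (Alexa's strength in music, Siri's in calendars), which is why his write-up pairs the scores with task-by-task observations.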

Ethical Implications

Google Duplex Deceptive Misrepresentation

In the spring of 2018, Google introduced "Duplex", a technology feature aimed at helping Google Assistant sound more human when interacting with users.[9] While this technology displayed an innovative form of communication unlike anything the world had seen so far, it also raised a number of ethical questions. Immediately following the demo of Google Duplex, critics were quick to point out that there was no indication from Duplex's end that it was a bot; yet the conversational, realistic quality of the phone call was precisely what made the interaction so striking. Whether Google was responsible for letting the person on the other end know that they were talking to a bot was debated after the conference. Researcher Dr. Thomas King calls the ethics of Google Duplex into question by looking at the intent behind the technology. While the ultimate goal of Duplex is to create a better user experience, King asks whether one needs to be deceitful and appear human in order to deliver a "better" user experience. In Harry Frankfurt's article on the amount of bullshit prevalent in our lives today, he notes that the success of a lie depends on the actor's ability to deceive us.[10] The main issue with Duplex is its deceptive misrepresentation, which Frankfurt specifically points to as a contribution to creating bullshit. Regardless of whether Duplex's intention was to deceive, the technology is designed in a way that is inherently deceptive, as the application is meant to mimic humanlike conversation. In an age where information is easily manipulated and falsified, truth becomes increasingly important, especially on the new frontier of human-AI interaction.

Designing Voice Assistants

As technology advances, the computers we rely on become more independent and autonomous. This autonomy places them only steps away from becoming moral agents. However, for a computer to become autonomous, a designer must first design the behavior of the agent. As the creator dictating the behavior of artificial agents, the designer bears an enormous amount of responsibility for how those agents act morally.[11] The trouble with artificial autonomous agents capable of learning and adapting to their environment is that designers cannot predict every action and response of these agents to ensure that they are acting appropriately. Large companies like Amazon and Google, who hold the majority of the voice assistant market, publish design documentation online to help designers navigate the ambiguity of predicting the actions of autonomous agents like Alexa and Google Assistant. By creating default responses that steer conversations away from sensitive topics like politics, religion, and sexual content, Amazon and Google help designers take a step in the right direction toward ethical and morally responsible interactions between artificial agents and end users.[12]
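The default-response strategy described above can be sketched as a filter that runs before any answer is generated: if the utterance touches a sensitive topic, the assistant deflects instead of engaging. The topic list and deflection line below are invented for illustration; real design guidelines are far more nuanced than keyword matching.

```python
# Invented topic list and deflection line, illustrating the default-response
# strategy the design guidelines describe; real systems are more sophisticated.
SENSITIVE_TOPICS = {"politics", "religion", "election", "sexual"}
DEFLECTION = "I'd rather not weigh in on that. Is there anything else I can help with?"

def respond(utterance, answer_fn):
    """Deflect sensitive topics before delegating to the normal answerer."""
    if set(utterance.lower().split()) & SENSITIVE_TOPICS:
        return DEFLECTION
    return answer_fn(utterance)

# The filter sits in front of the answering logic, so the designer's default
# applies no matter what the learned model would otherwise have said.
print(respond("tell me about the election", lambda u: "some opinion"))
```

Placing the filter in front of the answering logic is the point: it gives designers a predictable layer of control over agents whose learned behavior they cannot fully anticipate.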

References

  1. Gray, Phil. "The Rise of Intelligent Virtual Assistants" (1 June 2016. Retrieved 24 April 2017).
  2. Crist, Ry. "Amazon Alexa: Device compatibility, how-tos and much more" CNET (8 August 2016. Retrieved 24 April 2017).
  3. Martin, Taylor. "9 things Alexa can't do yet" CNET (10 January 2017. Retrieved 24 April 2017).
  4. Crist, Ry. "Alexa just hit 10,000 skills, but does anyone care?" CNET (23 February 2017. Retrieved 24 April 2017).
  5. [1] (Retrieved 24 April 2017).
  6. [2]
  7. [3] (Retrieved 24 April 2017).
  8. Chen, Brian. "Siri, Alexa and Other Virtual Assistants Put to the Test" The New York Times (27 January 2016. Retrieved 24 April 2017).
  9. Lomas, Natasha. “Duplex Shows Google Failing at Ethical and Creative AI Design.” TechCrunch, TechCrunch, 10 May 2018, techcrunch.com/2018/05/10/duplex-shows-google-failing-at-ethical-and-creative-ai-design/.
  10. Kellman, Steven G. The Georgia Review, vol. 59, no. 2, 2005, pp. 431–433. JSTOR, www.jstor.org/stable/41402610.
  11. Grodzinsky, Frances S., et al. “The Ethics of Designing Artificial Agents.” Ethics and Information Technology, vol. 10, no. 2-3, 2008, pp. 115–121., doi:10.1007/s10676-008-9163-9.
  12. “Establish and Maintain Trust.” Amazon, Amazon, developer.amazon.com/docs/alexa-design/trustbusters.html.