Watson is a question-answering supercomputer developed through IBM's DeepQA project, designed to respond to questions posed in natural language. It was created by a team led by David Ferrucci and is named after IBM's first CEO, Thomas J. Watson. The researchers developing this technology were at the forefront of building a machine that could observe, interpret, evaluate, and decide in ways similar to human processes, at extremely fast speeds. Most of the data Watson encounters is unstructured data, which makes up almost 80% of the information sphere. Watson's development began in 2007, after an executive at IBM challenged Ferrucci to create a computer system that could defeat past winners of the TV quiz show "Jeopardy!" Currently, Watson is used at Memorial Sloan Kettering Cancer Center in New York City and at WellPoint, an insurance company, to support decisions about lung cancer treatment.
Software and Hardware
Watson combines several forms of high-level processing: natural language processing, information retrieval, knowledge representation, and automated reasoning. This is made possible through IBM's DeepQA software and the Apache UIMA framework. The system was written in multiple languages, including C++, Java, and Prolog.
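DeepQA's general approach can be summarized as generating many candidate answers and then ranking them by scoring supporting evidence. The following minimal sketch illustrates that idea only; the corpus, the word-overlap scoring heuristic, and all function names are illustrative assumptions, not IBM's actual code or data.

```python
# Minimal sketch of a DeepQA-style pipeline: generate candidate answers,
# score the evidence for each, and rank by confidence.
# The scoring heuristic (word overlap) and the corpus are illustrative only.

def generate_candidates(question, corpus):
    """Propose candidates: any entry whose passage shares words with the question."""
    q_words = set(question.lower().split())
    return [title for title, passage in corpus.items()
            if q_words & set(passage.lower().split())]

def score_evidence(question, candidate, corpus):
    """Score a candidate by word overlap between the question and its passage."""
    q_words = set(question.lower().split())
    p_words = set(corpus[candidate].lower().split())
    return len(q_words & p_words) / len(q_words)

def answer(question, corpus):
    """Rank all candidates by evidence score; return (confidence, best answer)."""
    candidates = generate_candidates(question, corpus)
    ranked = sorted(((score_evidence(question, c, corpus), c) for c in candidates),
                    reverse=True)
    return ranked[0] if ranked else (0.0, None)

# Hypothetical two-entry corpus for demonstration.
corpus = {
    "Thomas J. Watson": "first ceo of ibm and namesake of the watson computer",
    "David Ferrucci": "led the ibm team that built the watson question answering system",
}
confidence, best = answer("who was the first ceo of ibm", corpus)
print(best, round(confidence, 2))
```

The real system ran hundreds of such evidence-scoring algorithms in parallel and merged their scores into a final confidence estimate, which also governed whether Watson would risk buzzing in.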
Watson’s hardware base is a massively parallel cluster of ninety IBM Power 750 servers, each built on a 3.5 GHz POWER7 eight-core processor. In total, the supercomputer has 16 terabytes of RAM. At this speed, Watson can process 500 gigabytes per second, the equivalent of roughly one million books in that time span.
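The "500 gigabytes per second, or one million books" equivalence implies an average book size of about half a megabyte of text. A quick back-of-the-envelope check (the 500 KB/book figure is an assumption for illustration, not a number from IBM):

```python
# Sanity-check of the claimed throughput: 500 GB/s vs. "one million books".
# Assumes an average book is roughly 500 KB of plain text (an assumption).

throughput_bytes_per_s = 500 * 10**9   # 500 GB per second
bytes_per_book = 500 * 10**3           # ~500 KB of text per book (assumed)

books_per_second = throughput_bytes_per_s / bytes_per_book
print(f"{books_per_second:,.0f} books per second")  # prints "1,000,000 books per second"
```

Under that assumption, the two figures are consistent.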
IBM first contacted Harry Friedman, the executive producer of Jeopardy!, in 2008 regarding the possibility of Watson appearing on the show. Before the challenge could take place, it was necessary to determine a method of structuring the competition so that the machine and the humans could compete on a relatively even field. There was initially some worry that the humans would be unable to compete with Watson because of the show’s structure. A contestant who wishes to answer must be the first to activate a buzzer, which becomes available only after the host has finished reading the question aloud, typically six or seven seconds after the question is first presented. A light notifies the contestants when they are able to answer. On average, it takes a human roughly a tenth of a second to perceive the light and activate the buzzer, whereas Watson is able to signal the buzzer in approximately eight milliseconds. However, this was ultimately found to be an advantage for the human contestants, as it often took Watson longer than seven seconds to arrive at an answer. Furthermore, humans will occasionally hit the buzzer before they have a response prepared, buying themselves a few extra seconds to reach the answer, something Watson is not programmed to do.
For the match, Watson was represented by a globe avatar. The match took place over two days, and both human contestants maintained secrecy about the outcome of the January 14, 2011 taping until the episodes aired the following month. After the first day, Watson held a substantial lead over the other two contestants, ending with $35,734 to Rutter’s $10,400 and Jennings’s $4,800. After the second day, the scores from both days were combined: Watson finished with $77,147, Jennings with $24,000, and Rutter with $21,600. The prize money was $1 million for first place, $300,000 for second place, and $200,000 for third place. All three contestants donated winnings to charity, with IBM donating all of Watson’s prize and Jennings and Rutter each donating half of theirs.
More recently, Watson has been in the news for helping clients file taxes at H&R Block. The company states that with Watson, clients will be able to maximize every opportunity to receive credits and deductions, get tips, and make informed decisions for the following year. Clients at H&R Block pay no additional fee, and a tax professional works alongside Watson to assist them when they come in. According to IBM developers, Watson has now learned the language of tax and will be able to cross-reference the numerous topics it learned during the filing process. Watson will be able to assess clients' statements and draw conclusions from them. H&R Block claims that this will reduce liabilities and maximize efficiency.
Watson has already taken on higher-level roles beyond its initial use on Jeopardy! Its current purpose revolves around supporting decisions about treatment options for lung cancer patients. The IBM Watson Group has also explored the possibility of new roles in legal research, government, and financial services. With these expanded roles comes a greater risk of ethical conundrums.
Healthcare Role and Decision Making
At Memorial Sloan Kettering Cancer Center, Watson’s guidance is reportedly followed in 90% of the decisions made by nurses who use it. The remaining 10% is a source of fear among medical professionals about its decision making: statistical practice in the medical field commonly presets a 95% confidence level, and by that standard Watson falls 5% short. If Watson determines treatment options, the resulting choice may be costly and inadvertently incorrect. Even though patient autonomy is conserved while using Watson, the fear is that it may be relied upon so heavily that patients make decisions based solely on Watson’s data. Watson’s role as a decision maker revisits an idea presented by Luciano Floridi in information ethics: Floridi claims that our reliance on artificial intelligence like Watson is based on “delegating or outsourcing to artificial agents our memories, decisions, routine tasks, and other activities”.
Though IBM claims the security surrounding Watson is quite fortified, it has not brushed off the threat of potential hackers. Recently, Watson has been used in crime fighting; one of IBM’s major tenets is “a peaceful global community”. Watson currently has access to numerous databases from law enforcement and police departments, and retrieval of this information, along with patient records, could breach privacy rights from all angles. Although such an attack is unlikely, Watson can be targeted from any country, and hacking into its databases could be detrimental.
The Use of a Masculine/Man-Identified Voice in Artificially Intelligent Systems
Of the four major technology corporations, Apple uses the feminine voice of Siri, Microsoft the feminine voice of Cortana, Amazon the feminine voice of Alexa, and Google a feminine voice on its Google Home assistants. While Apple and Google offer users the option to change the voice of their assistants, not all platforms do, and in every case the feminine voice is the default setting. According to user research from the Nielsen Norman Group, default options are important choices made by interface designers, as defaults are seldom, if ever, changed by the majority of users.
Women are often expected to fill administrative roles, as secretaries, assistants, and other service positions in the workplace. Additionally, women are often conditioned to display mothering behaviors, which can affect the perception of women across the culture. Some say that this prevalence, itself the product of many cultural and social factors, predisposes society to prefer listening to women's voices. It is worth noting, though, that feminine or women's voices are more heavily and acutely critiqued than men's voices are: as grating, for "uptalk," for having vocal fry, and so on.
One human-computer interaction ethicist posited that the status of feminine voices as assistants in artificially intelligent software reflects how primarily masculine development teams view and think about women, even stating that she feels it shows these men view women not only as subservient but also as less than human. Regardless of this perception, the prevalence of so many feminine voices as artificially intelligent assistants "hard-codes a connection between a woman's voice and subservience." Unconscious bias comes from somewhere, and that somewhere is the cultural knowledge we "absorb from the world," which is then reflected back into the world unconsciously through individuals' choices, behaviors, and habits. Therefore, ascribing women's and feminine voices, personas, and names to subservient tools can reinforce these linkages between subservience and women, entrenching them even further than they already are in heteropatriarchal society.
Conversely, by using a masculine or man-identified voice in the Watson system, IBM sets a different precedent. It is worth asking whether IBM would have chosen a feminine or woman-identified voice for this artificially intelligent system, given that Watson does not perform the same kinds of administrative or secretarial tasks as Apple's Siri, Amazon's Alexa, Google Home, or Microsoft's Cortana.
- Watson homepage
- About Watson on Jeopardy.com
- Smartest Machine on Earth (PBS NOVA documentary about the making of Watson)
- Power Systems
- The Real Reason Voice Assistants are Female (And Why It Matters)
- The Power of Defaults
- Why Do So Many Digital Assistants Have Feminine Names?
- This American Life, If You Don't Have Anything Nice to Say, Say it in All Caps.
- Rise of the Fembots: Why Artificial Intelligence Is Often Female
- Stop Giving Digital Assistants Female Voices