Artificial Intelligence and Technology
- 1 History
- 2 Problems
- 3 Examples
- 4 Ethics of Artificial Intelligence
- 5 Tools
- 6 External Links
- 7 References
The idea of artificial intelligence dates back to the invention of the modern computer, but the field did not take shape until the middle of the 20th century. In 1956, John McCarthy coined the term “artificial intelligence” at the Dartmouth Conference, the first formal discussion of the topic. Arthur Samuel, who at the time was working for IBM, developed one of the first programs able to play checkers; in 1962, Samuel's program defeated checkers player Robert Nealey in a publicized match. One year later, in 1963, Thomas Evans demonstrated that computers are able to solve the same types of analogy questions that humans typically face on IQ exams. The year after that, this type of problem solving was expanded further when Daniel Bobrow demonstrated that computers can understand natural language well enough to solve algebra word problems.
The 1970s saw the development of natural language processing, the ability of computers to interpret human language. The 1980s brought further developments in the field, but it was not until the 1990s that major advances were made. Progress in machine learning, intelligent tutoring, case-based reasoning, uncertain reasoning, natural language processing, and language translation all contributed to the further development of AI. In the 2000s, advances in robotics were put to use in scientific investigation, car racing, and children's toys. Today, natural language processing and dictation are beginning to play a role in the advancement of mobile technology.
The central question with regard to artificial intelligence is: can computers act and think like humans? Related questions include: Are they able to reason deductively? Can they solve complex problems? Can they answer questions? Can they learn new material? Can they process human speech?
While humans reason deductively with ease, computers can do so only by following step-by-step algorithms. Such simple algorithmic approaches work well until problems become too complex. Research is currently devoted to making machines reason faster by using more sophisticated algorithms.
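As a minimal sketch of what such a step-by-step deductive algorithm looks like, the following forward-chaining loop repeatedly applies if-then rules until no new facts can be derived. The facts and rules are hypothetical illustrations, not from any real system.

```python
# Forward chaining: derive new facts from rules of the form
# (set_of_premises, conclusion) until nothing new can be added.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical example rules.
rules = [
    ({"rainy"}, "wet_ground"),
    ({"wet_ground", "cold"}, "icy_ground"),
]
derived = forward_chain({"rainy", "cold"}, rules)
print(sorted(derived))  # ['cold', 'icy_ground', 'rainy', 'wet_ground']
```

Each pass over the rules is one deductive "step"; as the number of rules and facts grows, the number of steps grows with it, which is why this approach slows down on complex problems.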
Learning is intuitive to humans, but not to computers. Machine learning dates back to the first artificial intelligence conference, in 1956. It involves the ability to recognize patterns, adapt to them, and use that knowledge to help solve future problems. This can be very helpful in processing natural language.
Natural Language Processing
Natural language processing is the ability for computers to listen to, interpret, and understand human language, both written and spoken. This has applications in many fields, including virtual personal assistants, information retrieval, and data mining.
Intelligent agents must be able to understand the emotions and motives of others in order to predict their actions.  They must also be able to display emotions themselves. At the most basic level, intelligent agents must appear polite and sensitive to human interaction.
Humans can visualize the future. They maintain a representation of the current state of their surroundings and can predict how their actions will affect those surroundings. Intelligent agents must possess similar abilities in planning and prediction in order to make decisions that maximize the value of the available choices. It is most important that humans prepare for the rise of artificial intelligence and ensure that the intelligence is beneficial to society. Whatever form A.I. takes, it will no doubt be incredibly good at achieving its goals or tasks, and this could prove dangerous for us. For example, suppose you are the president of the University of Michigan and you need to build a new research lab. You are probably not an advocate of deforestation, but you clear what trees you must for the new lab. The same could hold between A.I. and humans: if the A.I. is the president of U of M and we are the trees, then our future is not bright, and we should ensure we never place ourselves in that kind of situation.
Creating emotion in artificial intelligence will be the most difficult task of all. Emotions exist, and people have learned to use them as gauges on their consciences. Just as the pleasure-pain mechanism of the body is an automatic indicator of its welfare or injury, a barometer of its basic alternative, life or death, so the emotional mechanism of consciousness performs the same function, registering the same alternative by means of two basic emotions: joy or suffering. Emotions are the automatic results of a person's value judgments integrated by the subconscious; they are estimates of what furthers one's values or threatens them, of what is for or against oneself, lightning calculators giving the sum of one's profit or loss. The question is: what value judgments will artificial intelligence be able to make? If it can make only those in its basic programming, they will be only those the programmer gives it; it all depends on the values the programmer decides to build in. Teaching artificial intelligence to learn values from its emotions will be a challenge.
Recently, IBM developed a computer named “Watson,” a question-answering machine that, over the course of three days (February 14–16, 2011), defeated the quiz show Jeopardy!'s all-time money leader, Brad Rutter, and the show's longest-running champion, Ken Jennings. The computer was consistently faster at buzzing in than its human competitors, though it struggled on short clues. Watson receives a question in text format and then formulates and verbalizes an answer. Not only can it solve questions with direct clues, it can also decipher aspects of language unique to humans, such as sarcasm, puns, and metaphors.
Siri began as a company that Apple acquired in April 2010. At the time, Siri was a mobile application that acted as a virtual personal assistant, able to answer questions, schedule meetings, and get directions, among a variety of other tasks. Alongside its announcement of the iPhone 4S, Apple announced that Siri would act as a personal assistant built into the new phone. The core technology underlying Siri is speech recognition: it enables the user to send text messages, schedule meetings, make phone calls, and set reminders by voice. Not only is Siri able to understand what you say, it is also able to interpret the meaning behind what you are saying. For example, Siri can understand the phrases “Tell my brother that I am running 20 minutes late” and “Am I going to need a raincoat today?” Siri is now one of a number of virtual assistants, available on macOS, iOS, and Apple's HomePod.
Recently, controversy has come to light over Siri's programming with regard to abortion. Pro-choice advocates alleged that Apple had programmed Siri to be pro-life: when asked to search for local abortion clinics, Siri would not display Planned Parenthood locations known to be in the area. These groups also alleged that even when the names of specific abortion clinics were entered, Siri was unable to find them. Apple responded with a statement saying that this was a technical glitch that would be fixed in the final version of Siri. The glitch seemed to stem from where Siri retrieves information when searching for specific places. Apple reported that Siri gets local business information from Yelp, where Planned Parenthood is not categorized as an abortion clinic. Siri also uses Wolfram Alpha as its main query site, which presents answers differently than search engines such as Google and Yahoo; this was another possible reason given for the glitch.
AI in BioWare Games
Main Article: BioWare
BioWare is a video game developer that incorporates many sci-fi and fantasy themes into its game environments, including the use of A.I. as a controllable character and a benevolent entity.
Furby is a children's toy developed by Richard Levy in 1998. It is an owl/mouse-like toy robot with a degree of artificial intelligence built in. When purchased, the toy can speak only "Furbish," a made-up language that all Furbies speak at first. As the owner plays with it, the Furby is programmed to slowly "learn" English over time. The toy is also claimed to behave differently based on how the owner treats it; for example, after sitting in a quiet environment long enough, it announces that it is bored and falls asleep. Furbies drew both popularity and controversy: the toy did not come with an "off" button, so many parents became annoyed at its constant chatting. New versions of the Furby were released in 2005 and 2012, and the latest version includes a mobile app that allows owners to "feed" the Furby, among many other features.
In 2016, the Microsoft Research and Bing teams partnered to unveil Tay.AI (commonly referred to as Tay), an artificial intelligence chatbot built to experiment with and conduct research on conversational understanding. Tay was targeted at individuals aged 18–24 in the United States and was designed to engage and entertain users through conversation. While Tay's initial tweets were positive and benevolent, as users engaged Tay more, it began broadcasting racist tweets, including one in support of Hitler's hatred of Jews, causing Microsoft to decommission the bot. Brandon Wirtz, the creator of the cognitive computing and artificial intelligence platform Recognant, argues that Tay failed because it had no embedded values. In an interview with the technology journalism site ReadWrite, Wirtz remarked, "(Tay) didn’t know that she should just ignore the people who act like Nazis, and so she became one herself."
Ethics of Artificial Intelligence
One question that could be asked is, “If computers are able to think like humans, understand humans, and react to humans, do they deserve the same rights as humans?” Computers are now able to make ethical decisions, whether intended or not, and these decisions affect whatever interacts with them; for that reason, artificial intelligence agents are being considered candidates for accountability for their actions. While some might think this a silly question, it is a pertinent one that must be taken into account when designing artificial beings. This concept is part of the main idea of Luciano Floridi's ethical model of the Infosphere, in which all information entities have the power to do good and evil in the world.
Another important question to ask is: if we are soon able to build computers that surpass our own intelligence, what is to stop them from making themselves ever more intelligent, until their intelligence exceeds ours by as much as ours currently exceeds a chimpanzee's? This is a key determinant of whether artificial intelligence will benefit society or enslave us, as popularly depicted in movies such as I, Robot and The Terminator.
If computers are able to understand human speech and thought, then with the right program a computer could interpret phone conversations, text messages, emails, and other documents. It could identify which people are having a conversation, as well as the details of that conversation. This could enable the user to do a wide variety of things with the information, ranging from identity theft to insider trading. Furthermore, as computers begin to process more information than humans can, artificial intelligence combined with increased surveillance in the public sphere could lead to cyber-stalking by machines able to follow the many digital traces humans leave, revealing information about an individual that no human could discover alone.
Replacement of Human Interaction
Another factor that must be taken into account is that artificial intelligence has the potential to replace humans in multiple respects. Artificial intelligence is already implemented in some phone operating systems, acting as customer service representatives to help customers troubleshoot. AI could replace other positions that humans currently occupy, such as bank tellers, therapists, or even drivers, through self-driving vehicles. Some of the positions that artificial intelligence could fill require human empathy. If such positions are filled by computers, this sense of empathy will be lost, which could have negative repercussions for the people who use them. For example, machines using artificial intelligence could be asked to interpret human lives, such as matters of privacy (as described above). Without a distinctly human connection, it might be hard for AI to accurately interpret human interactions, which in this scenario could lead to the AI misjudging people's need to maintain a sense of privacy in their everyday lives. There are also human emotions, such as love, that cannot be adequately put into words. How can a programmer design a computer to feel, understand, or relate to an emotion that humans do not yet fully understand themselves? Some also feel that human reliance on artificial technology may decrease critical thinking skills. Artificial intelligence also risks damaging the economy, as well as the efficiency of tasks and innovation in those tasks.
Artificial Intelligence's Consciousness
Another ethical concern about AI is consciousness. An AI that is sufficiently smart would be able to joke with humans and be sarcastic with them, and it would claim to feel the same emotions we do, but would it actually be feeling those things? In other words, there is ongoing debate as to whether an AI is really conscious or only appears to be conscious. This question has very deep ethical implications. If AI were to have consciousness and an intellect that matched or exceeded humans', would AIs need their own amendments to ensure their rights? In the same vein, if humans generated billions of brain emulations that seemed and acted like humans but were artificial, is shutting them all off morally the same as shutting down a laptop, or is it genocide on a mass scale? Once the consciousness of AI reaches and surpasses that of humans, new legal issues may arise regarding shutting down or "killing" an AI. Questions like these are constantly raised on the topic of AI consciousness. Norbert Wiener characterized AI agents as lacking the capacity to be "moved by reasons" presented to them. An agent that is "moved by reasons," that is, one that possesses consciousness, is given the name Artificial General Intelligence (AGI). Wiener argues that constructing such an AGI is possible but not desirable, while the more constrained AI that is practically possible today is not necessarily evil, precisely because of its lack of consciousness. The example of IBM Watson being turned into a multidimensional agent is similar in principle to turning a hand calculator into Watson, Dennett argues, and on the road to constructing an AI consciousness it would represent at best the construction of a core faculty like a cerebellum or an amygdala.
The ability to frame purposes and plans, and to build insightfully on conversational experience, Dennett argues, is a characteristic that no artificial agent, including Watson, can exhibit. Alan Turing's operational test (the "Turing test") illustrates the goal intended for AGI: to bridge the "uncanny valley," so that a human subject cannot identify whether the party behind a screen is human. Weizenbaum's ELIZA was an early chatbot that first exhibited this ability to persuade people that they were having a serious conversation with a human being. The worry is a scenario in which citizens make significant decisions on the advice of AI systems whose methods are unfathomable and unknowable in practice. This would result in a loss of trust and in conflict over moral and legal accountability.
Bias in Artificial Intelligence
The case of Tay.AI is one example of bias in artificial intelligence. Bias is defined as "any belief, attitude, behavior or practice that reflects an assumed superiority of one group over another." Bias permeates an artificial system in how it may discriminate against the humans it interacts with. An AI-judged beauty contest prompted controversy when it rated light skin as more beautiful than dark skin, and in 2015 Google's photo-tagging AI mistakenly tagged photos of Black people as gorillas. One reason an artificial system may be biased is that it lacks enough data to form unbiased judgments. It is believed that one way to remove bias from an artificial system is to use a dataset that is both diverse and free of perpetuated stereotypes.
AI has been an active field of research for decades, and over that time researchers have developed many tools to solve some of the most difficult problems in computer science.
Search and optimization
Simple search algorithms are not sufficient for most real-world problems. As time goes on, more and more information becomes available, and the search space quickly grows without bound, which means the search will either be too slow or never finish. "Heuristics" are often the solution to this problem: a heuristic eliminates choices that are unlikely to lead to the goal, supplying the program with a "best guess" for the path on which the solution lies.
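The idea can be sketched as a greedy best-first search that always expands the node its heuristic rates closest to the goal. This is a minimal illustration, assuming a hypothetical 5×5 grid with a few walls and the Manhattan distance as the "best guess" of remaining cost.

```python
import heapq

def best_first(start, goal, walls, size=5):
    """Greedy best-first search on a small grid: expand the frontier node
    whose heuristic (Manhattan distance to the goal) is smallest."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}   # also serves as the visited set
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:         # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in came_from):
                came_from[nxt] = cur
                heapq.heappush(frontier, (h(nxt), nxt))
    return None  # goal unreachable

path = best_first((0, 0), (4, 4), walls={(2, 1), (2, 2), (2, 3)})
print(path[0], path[-1])  # (0, 0) (4, 4)
```

Because the heuristic steers expansion toward the goal, most of the grid is never examined; without it, the search would fan out uniformly in all directions.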
In the 1990s, a new kind of search surfaced, based on the mathematical theory of optimization. It begins with some form of guess and then refines that guess incrementally until no further refinements can be made. These algorithms start at a random point in the search space and move from neighbor to neighbor until they can improve no further.
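A minimal sketch of this guess-and-refine idea is hill climbing: start from a random point and step to a better neighbor until no neighbor improves. The objective function and step size below are illustrative assumptions.

```python
import random

def hill_climb(f, low, high, step=0.01, seed=0):
    """Maximize f over [low, high] by local search: start at a random
    guess and refine it until neither neighbor scores better."""
    rng = random.Random(seed)
    x = rng.uniform(low, high)              # random initial guess
    while True:
        best = max([x, x - step, x + step], key=f)
        if best == x:                       # no refinement improves: stop
            return x
        x = best

# Maximize f(x) = -(x - 3)^2, whose peak is at x = 3.
x_best = hill_climb(lambda x: -(x - 3) ** 2, 0.0, 10.0)
print(round(x_best, 1))  # 3.0
```

Note that this finds only a local optimum; on functions with many peaks, the random starting point determines which peak the search settles on.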
Logic is used for knowledge representation and problem solving, but it can be applied in other areas as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning.
Several forms of logic are used in AI research. Propositional logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. 
Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. 
Subjective logic models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1, and corresponds to a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
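A minimal sketch of a binomial opinion makes the distinction concrete. The class name and the numbers are illustrative assumptions; the projected probability P = belief + base_rate × uncertainty follows the standard subjective-logic formulation.

```python
class Opinion:
    """A binomial opinion: belief + disbelief + uncertainty must equal 1."""
    def __init__(self, belief, disbelief, uncertainty, base_rate=0.5):
        assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9
        self.b, self.d, self.u, self.a = belief, disbelief, uncertainty, base_rate

    def projected_probability(self):
        # Uncertainty mass is apportioned according to the base rate.
        return self.b + self.a * self.u

# A confident 50/50 opinion versus near-total ignorance: both project
# to P = 0.5, but the explicit uncertainty keeps them distinguishable.
confident = Opinion(belief=0.5, disbelief=0.5, uncertainty=0.0)
ignorant  = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0)
print(confident.projected_probability(), confident.u)  # 0.5 0.0
print(ignorant.projected_probability(), ignorant.u)    # 0.5 1.0
```

A plain probability of 0.5 cannot express this difference, which is exactly the extra expressiveness the paragraph above describes.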
Probabilistic methods for uncertain reasoning
Many problems in AI require the agent to operate with incomplete information. Researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics. 
Bayesian networks are a tool that can be used for a large number of problems, including reasoning, learning, and perception.
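The flavor of such reasoning can be sketched with a two-node network, Rain → WetGrass: given conditional probabilities along the arrow, Bayes' rule lets us reason "against" the arrow from an observed effect back to its cause. All probabilities here are illustrative assumptions.

```python
# Hypothetical parameters of a tiny Bayesian network: Rain -> WetGrass.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# Marginal probability of wet grass, by total probability.
p_wet = p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_dry

# Diagnostic inference by Bayes' rule: probability it rained,
# given that the grass is observed to be wet.
p_rain_given_wet = p_rain * p_wet_given_rain / p_wet

print(round(p_wet, 2))             # 0.26
print(round(p_rain_given_wet, 2))  # 0.69
```

Observing wet grass raises the probability of rain from the prior 0.2 to about 0.69; larger networks chain the same calculation across many variables.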
Classifiers and statistical learning methods
AI applications can be divided into two types: classifiers and controllers. A classifier uses pattern matching to determine the closest match; classifiers can be tuned to examples or previous patterns, which makes them especially useful in AI. A controller determines the action to be taken once a situation has been classified.
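A minimal sketch of a classifier "tuned to examples" is the 1-nearest-neighbour rule: a new point gets the label of the closest stored example. The example points and labels are illustrative assumptions.

```python
def classify(point, examples):
    """1-nearest-neighbour: return the label of the stored example
    closest (in squared Euclidean distance) to the given point."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(examples, key=lambda e: dist2(point, e[0]))
    return label

# Hypothetical labelled examples: ((x, y), label).
examples = [((0, 0), "cat"), ((0, 1), "cat"), ((5, 5), "dog"), ((6, 5), "dog")]
print(classify((1, 1), examples))  # cat
print(classify((5, 4), examples))  # dog
```

A controller would then sit on top of this: once the situation is classified as "cat" or "dog", it selects the corresponding action.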
Research on neural networks began in the decade before the field of AI research was founded, and many researchers contributed to their development.
Neural networks are composed of interconnected artificial neurons. They may be used either to gain an understanding of biological neural networks or to solve AI problems.
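A single artificial neuron can be sketched in a few lines: it computes a weighted sum of its inputs plus a bias, then applies a threshold activation. The weights below are hand-picked to implement logical AND; they are an illustrative assumption, not a trained network.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a step (threshold) activation."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With these weights and bias, the neuron fires only when
# both binary inputs are 1, i.e. it computes AND.
w, b = [1.0, 1.0], -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, neuron(x, w, b))  # only (1, 1) prints 1
```

Networks connect many such neurons in layers, and learning algorithms adjust the weights automatically rather than setting them by hand as done here.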