Artificial Intelligence and Technology

Artificial intelligence (AI) is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Artificial intelligence can have a huge impact on the way humans interact with each other and with technology. It was once thought that the human mind could not be replicated, but recent technological advances have allowed computers to simulate aspects of human intelligence, including contextual cues and other features of human thought that machines previously could not handle. Artificial intelligence has the potential to affect our everyday lives.

History

Although the idea of artificial intelligence dates back to the invention of the modern computer, the field did not really take shape until the middle of the 20th century. In 1956, John McCarthy coined the term “artificial intelligence” at the Dartmouth Conference, the first formal discussion of the topic. In 1962, Arthur Samuel’s checkers program at IBM defeated a strong human player, an early milestone for game-playing machines. One year later, in 1963, Thomas Evans demonstrated that computers could solve the same kinds of analogy questions that appear on human IQ exams. The year after that, Danny Bobrow extended this line of work by showing that computers could understand natural language well enough to solve algebra word problems.

The 1970s brought the development of natural language processing, the ability of computers to interpret human language. The 1980s saw further progress in the field, but it was not until the 1990s that major advances were made. Advances in machine learning, intelligent tutoring, case-based reasoning, uncertain reasoning, natural language processing, and language translation all contributed to the further development of AI. In the 2000s, advances in robotics were put to use in scientific investigation, car racing, and children’s toys. Today, natural language processing and dictation are beginning to play a role in the advancement of mobile technology.

Problems

The central question is, “Can computers act and think like humans?” Can they reason deductively, solve complex problems, answer questions, learn new material, or process human speech?

Deductive Reasoning

While humans can reason deductively almost subconsciously, drawing on intuition, computers have traditionally done so only by following step-by-step algorithms. This works well until problems grow too complex and the number of possibilities to check explodes. A great deal of current research is devoted to making machines reason faster through better algorithms and heuristics.
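As a rough illustration of the step-by-step approach described above, the short Python sketch below performs simple forward chaining over if-then rules until no new facts can be derived; the facts and rules are invented examples, not drawn from any particular system.

# Minimal forward-chaining deduction: repeatedly apply if-then rules
# to a set of known facts until nothing new can be derived.
def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)   # the conclusion follows deductively
                changed = True
    return derived

# Hypothetical facts and rules for illustration.
facts = {"socrates is a man"}
rules = [(["socrates is a man"], "socrates is mortal")]
print(forward_chain(facts, rules))
# -> {'socrates is a man', 'socrates is mortal'}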

Learning

Learning is intuitive to humans, but not to computers. Machine learning dates back to the first artificial intelligence conference, in 1956. It involves the ability to recognize patterns, adapt to them, and use that knowledge to help solve future problems. This can be very helpful in processing natural language.
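As a toy illustration of recognizing patterns and reusing them on future problems, the Python sketch below memorizes a few labeled examples and labels a new point by copying the label of its nearest stored neighbor; the data and labels are made up for illustration, not a description of any production system.

# Toy 1-nearest-neighbour "learning": store labelled examples, then label
# a new point by copying the label of the closest stored example.
import math

def predict(train, query):
    """train: list of ((x, y), label) pairs; query: an (x, y) point."""
    nearest = min(train, key=lambda example: math.dist(example[0], query))
    return nearest[1]

# Invented training data: two loose clusters with different labels.
train = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
         ((5.0, 5.0), "not spam"), ((4.8, 5.3), "not spam")]
print(predict(train, (1.1, 0.9)))   # -> spam
print(predict(train, (5.1, 4.9)))   # -> not spam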

Natural Language Processing

Natural language processing is the ability of computers to interpret and understand human language, whether spoken or written. This has applications in many fields, including virtual personal assistants, information retrieval, and data mining.
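To make the information-retrieval application concrete, the sketch below ranks a handful of invented documents by how many query words they contain; real systems use weighting schemes such as TF-IDF and far larger indexes, so this is only a sketch of the idea.

# Minimal bag-of-words retrieval: rank documents by the number of query
# words they contain. Documents and query are invented for illustration.
DOCS = {
    "doc1": "artificial intelligence systems that understand human language",
    "doc2": "the weather forecast for the weekend shows heavy rain",
    "doc3": "schedule a meeting with the project team on monday",
}

def rank(query):
    query_words = set(query.lower().split())
    scores = {name: len(query_words & set(text.split())) for name, text in DOCS.items()}
    return sorted(DOCS, key=scores.get, reverse=True)

print(rank("human language understanding"))   # doc1 ranks first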

Examples

IBM’s Watson

In 2011, IBM unveiled a computer named “Watson,” a question-answering system that defeated two of the most successful Jeopardy! champions. The computer received the clues in text format and was able to formulate and verbalize an answer. Not only could it answer questions with direct clues, it could also handle aspects of language usually thought of as uniquely human, such as puns, metaphors, and other indirect phrasing.
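As a loose sketch of the clue-in, answer-out shape described above (and in no way a description of Watson’s actual DeepQA pipeline), the toy Python snippet below scores a few invented candidate answers by word overlap with the clue and returns the best match.

# Toy question answering: score candidate answers by how many clue words
# overlap with a short description of each candidate. Candidates and
# descriptions are invented; real systems weigh evidence far more deeply.
CANDIDATES = {
    "Isaac Newton": "english physicist laws of motion gravity calculus",
    "Marie Curie": "physicist chemist radioactivity polonium radium nobel prizes",
    "Alan Turing": "mathematician computing machine intelligence enigma codebreaking",
}

def answer(clue):
    clue_words = set(clue.lower().split())
    return max(CANDIDATES, key=lambda name: len(clue_words & set(CANDIDATES[name].split())))

print(answer("this physicist studied radioactivity and won two nobel prizes"))
# -> Marie Curie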

Siri

Siri began as an independent company that Apple acquired in April 2010; at the time, its product was a mobile application that acted as a virtual personal assistant, able to answer questions, schedule meetings, and get directions, among a variety of other tasks. Alongside the announcement of the iPhone 4S, Apple announced that Siri would be built into the new phone as a personal assistant. It enables users to use their voice to send text messages, schedule meetings, make phone calls, and set reminders. Not only does it understand what you say, it can also interpret the meaning behind it. For example, you could say, “Tell my brother that I am running 20 minutes late,” and Siri will understand this and send a text message to your brother stating that you will be 20 minutes late. You could also ask, “Am I going to need a raincoat today?” and it will respond by pulling up the latest weather forecast. It can also take dictation, so you can compose a text message simply by speaking it. This could change the way humans interact with their mobile devices and has the potential to create a whole new field of computing: virtual personal assistants.
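For illustration only, the sketch below shows one crude way a transcribed command like the “running 20 minutes late” example could be mapped to an action; the contact list, pattern, and send_text stub are all hypothetical, and Siri’s real language understanding is far more sophisticated.

# Crude command handling: match a transcribed utterance against a pattern,
# look up the contact, and hand the message to a (stubbed) messaging call.
import re

CONTACTS = {"brother": "+1-555-0100"}   # made-up contact book

def send_text(number, message):
    # Stand-in for a real messaging API.
    print(f"[text to {number}] {message}")

def handle(utterance):
    match = re.match(r"tell (my )?(\w+) (that )?(.+)", utterance.lower())
    if match and match.group(2) in CONTACTS:
        send_text(CONTACTS[match.group(2)], match.group(4))
    else:
        print("Sorry, I didn't understand that.")

handle("Tell my brother that I am running 20 minutes late")
# -> [text to +1-555-0100] i am running 20 minutes late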

Ethics of Artificial Intelligence

One question that could be asked is, “If computers are able to think like humans, understand humans, and react to humans, do they deserve the same rights as humans?” While some might think that this is a silly question, it is actually a pertinent one that must be taken into account when designing artificial beings.


Another issue has to do with privacy. If computers can understand human speech and thought, then with the right program a computer could interpret phone conversations, text messages, emails, and other documents. It could identify which people are having a conversation as well as the details of that conversation. Whoever controlled such a system could do a wide variety of things with this information, ranging from identity theft to insider trading.

Another factor that must be taken into account is that artificial intelligence has the potential to replace humans in multiple roles. AI is already used in some automated telephone support systems, acting as a customer service representative that helps callers troubleshoot. AI could also replace other positions that humans currently occupy, such as bank tellers or therapists. Some of these positions require human empathy; if they are filled by computers, that empathy is lost, which could have negative repercussions for the people who rely on those services.