Artificial SuperIntelligence
Artificial Super Intelligence (ASI) is one of the three subsections of the overarching term artificial intelligence (see artificial narrow intelligence (ANI) and artificial general intelligence (AGI)). ASI is the most advanced of the three categories of AI and has not, at this point in time, been successfully created. Nick Bostrom, a leading AI figure, defines ASI as “an intellect that is much smarter than the best human brains in practically every field,” including scientific creativity, general wisdom, and social skill. He then describes the forms it could take, ranging from a digital computer to an ensemble of networked computers or even cultured cortical tissue. The definition also leaves open whether the superintelligence is conscious and has subjective experiences. While ANI and AGI cannot pass the Turing Test, a famous test that examines a machine’s ability to produce intelligent behavior indistinguishable from that of a human, ASI would be able to pass it. ASI represents a form of intelligence that would be smarter than humans across a wide range of, or even all, applications of human intelligence.

Timeline

ASI would not only perform better than humans but would also process information significantly faster. Experts are deeply divided regarding the timeline for the development of ASI. Some believe that the growth will occur at an exponential rate: while progress right now might seem slow, as deep learning continues to advance, machines will keep improving at an increasing rate. The median prediction among experts for the arrival of ASI is 2060. There are milestones that technology must hit before ASI, the first being the transition from ANI to AGI; the median expert prediction for the arrival of AGI is 2045. From there, AGI must develop into ASI.
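As an illustration of this intuition only, the toy model below (a minimal sketch, not drawn from the cited sources) compounds a hypothetical “capability” level by a fixed percentage each year and counts how long it takes to cross successively higher benchmarks; the 25% annual growth rate and the benchmark values are arbitrary assumptions chosen to show why exponential progress can look slow early on and abrupt later.

# Toy model of exponential capability growth (illustrative assumptions only:
# the 25% annual growth rate and the benchmark values are arbitrary).

def years_to_reach(benchmark, level=1.0, annual_growth=0.25):
    """Count the years until a capability level, compounding at a fixed
    annual rate, first exceeds the given benchmark."""
    years = 0
    while level < benchmark:
        level *= 1 + annual_growth  # capability compounds each year
        years += 1
    return years

if __name__ == "__main__":
    for benchmark in (10, 100, 1000):
        # Each tenfold increase in the benchmark adds roughly the same number
        # of years (about 10 at 25% growth), so a curve that looked flat early
        # on overtakes any fixed target abruptly.
        print(f"benchmark {benchmark}: reached after {years_to_reach(benchmark)} years")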

Applications

Nick Bostrom claims that ASI could manifest itself in three categories of use. The first is as a question-answering service: a machine that could be asked questions far more complicated than a search engine like Google can handle, questions humans would likely not be able to answer, such as “How can we create a cure for HIV?” The second is as a function that creates tangible solutions to problems; an example would be asking the ASI device to “use stem cells to create a solution for curing cancer.” The third is acting on its own to solve problems: it could, on its own, recognize that many humans die of heart attacks and come up with a unique solution that humans would not have thought of. Eliezer Yudkowsky, an expert in AI, states, “There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards in intelligence, and some problems will suddenly move from ‘impossible’ to ‘obvious.’ Move a substantial degree upwards, and all of them will become obvious.”

Additionally, many experts believe that ASI could help humans become immortal. The machines would find cures for the deadliest diseases, solve environmental destruction, help cure hunger, and combine with biotechnology to create anti-aging solutions that prevent us from dying. Furthermore, ASI could lead to lives that are experientially better than our current ones. Bostrom states that the implementation of a superintelligence would help generate opportunities to increase our utility through intellectual and emotional avenues, producing a world much more appealing than the present one, in which we would devote our lives to more enjoyable things such as game-playing, developing human relationships, and living closer to our ideals.

Ethical implications

Experts have concluded that they don’t know what an ASI world would look like. Bostrom claims that as machines get smarter, they don’t just score well on intelligence exams; rather, they gain superpowers. Such a machine would be able to make itself even smarter than it previously was, be persuasive through social manipulation, and prioritize tasks, strategizing long-term goals and working out step by step how to accomplish them in the short term. It must be noted that ASI would be significantly more developed than humans in these areas. There will be a race among different groups to achieve ASI superiority, likely including governments, tech firms, and black-market groups. Depending on who solves the problem first, the outcome could prove consequential. This would be revolutionary technology, and Bostrom believes that the first group to develop ASI would have a strategic advantage over any successors, as the first mover would be far enough ahead to suppress other ASIs as they come about.

Additionally, if ASI can help humans become immortal, is that ethical? In a sense, ASI and humans would be playing the hand of God, which leaves people divided about the idea. Furthermore, there are serious implications if people become immortal: in a world with no death rate but a continuing birth rate, there could be a serious impact on our living conditions and on the other species around us. Could this lead to overpopulation, or would ASI provide a solution to that as well? What if ASI concludes that certain humans or species pose a threat to society as a whole, and its solution is to eliminate them? What if we don’t like the answers that ASI has for humanity?

References

  1. Bostrom, Nick. https://nickbostrom.com/superintelligence.html
  2. Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology.
  3. Yudkowsky, Eliezer. http://yudkowsky.net/