Artificial SuperIntelligence

An artificial superintelligence is a hypothetical form of artificial general intelligence that would greatly exceed human performance in all cognitive domains. It is distinct from currently existing narrow artificial intelligences, which demonstrate superhuman performance only in a particular domain, such as IBM Watson's ability to answer questions or AlphaGo's ability to play Go. Philosophers speculate about whether an artificial superintelligence would be conscious and have subjective experiences; unlike narrow artificial intelligence, all forms of artificial general intelligence, including superintelligences, would be able to pass the Turing Test. Ethical issues related to artificial superintelligence include the moral weight of artificial agents, existential risk, and problems of bias and value alignment.

Timeline

Experts are divided on the timeline for the development of artificial superintelligence. A 2013 poll of AI researchers found a median prediction of a 50% chance that artificial general intelligence would be developed by 2050, and a 90% chance that it would be developed by 2075. The median estimate of the time from artificial general intelligence to artificial superintelligence was 30 years.[1]

Opinions regarding the time from the development of artificial general intelligence to artificial superintelligence are roughly split into two camps: soft takeoff and hard takeoff. Soft takeoff posits that a great deal of time, on the order of years or decades, will pass between the development of artificial general intelligence and artificial superintelligence. Hard takeoff, on the other hand, suggests that artificial general intelligence will develop into superintelligence rapidly, on the order of months, days, or even minutes. This is generally considered possible through recursive self-improvement, first conceived of by mathematician I.J. Good.[2] As Good and others predict, an artificial general intelligence with expert-level performance in the domain of programming might be able to modify its own source code, producing a version of itself that is slightly smarter and more capable. That version could then self-modify further, undergoing an intelligence explosion in which each iteration becomes more intelligent than the last, until it reaches the level of superintelligence. A toy model of the two growth regimes is sketched below.
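
The two camps can be contrasted with a simple toy growth model. The sketch below is purely illustrative (the functions, rates, and starting values are assumptions chosen for exposition, not figures from the literature): soft takeoff corresponds to capability gains of roughly constant size, while hard takeoff corresponds to gains that scale with current capability, so that improvements compound.

    # Toy model of takeoff dynamics. All numbers are illustrative
    # assumptions, not predictions from the literature.

    def soft_takeoff(intelligence: float, gain: float, steps: int) -> list[float]:
        """Ordinary progress: each step adds a fixed increment."""
        history = [intelligence]
        for _ in range(steps):
            intelligence += gain
            history.append(intelligence)
        return history

    def hard_takeoff(intelligence: float, rate: float, steps: int) -> list[float]:
        """Recursive self-improvement: each gain scales with current ability."""
        history = [intelligence]
        for _ in range(steps):
            intelligence += rate * intelligence  # smarter systems improve faster
            history.append(intelligence)
        return history

    print("soft:", [round(x, 1) for x in soft_takeoff(1.0, 0.5, 10)])
    print("hard:", [round(x, 1) for x in hard_takeoff(1.0, 0.5, 10)])

Under the hard-takeoff assumption the gap between the two curves widens without bound; this compounding is why proponents argue the window for human intervention could be very short.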

Applications

Philosopher Nick Bostrom claims that an artificial superintelligence could be designed for three different types of use. The first is as a question-answering service, which Bostrom refers to as an "oracle". Such a service would be similar to Google, but capable of answering much more complicated questions and of drawing on a far wider set of knowledge domains than mere information retrieval. Questions that humans could not easily answer, such as difficult mathematical proofs, scientific research questions, statements of fact like "Is there life outside our galaxy?", or open-ended philosophical or technical design problems, might be significantly easier for an artificial superintelligence.[3]

The second type is the agent designed to carry out a specific task and then wait for the next one, which Bostrom calls a "genie". Unlike modern computers, which must have their instructions explicitly formulated as code and carry out only what the user tells them to, an artificial superintelligence could receive instructions in abstract natural language and pursue the task in a goal-oriented manner rather than according to a preconfigured procedure.[3] Examples of such tasks might include "use stem cells to cure cancer" or "design a building to these specifications."

The third type is the agent with a particular goal, which it independently takes strategic actions to pursue. Bostrom classifies these agents as "sovereigns". Such an agent would have significant autonomy and the ability to acquire and apply resources toward its goals and subgoals without human intervention.[3] These goals could be wide-reaching and ambitious, like "grow GDP by 10% every year" or "maximize human flourishing". AI researcher Eliezer Yudkowsky writes: "There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards in intelligence, and some problems will suddenly move from 'impossible' to 'obvious.' Move a substantial degree upwards, and all of them will become obvious."[4]
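
Read as software designs, the three castes differ mainly in how much control the human operator retains. The following minimal sketch uses invented class and method names (Bostrom specifies no such interface) to make the distinction concrete:

    # Hypothetical interface sketch of Bostrom's three castes. The class
    # and method names are invented for illustration only.

    from abc import ABC, abstractmethod

    class Oracle(ABC):
        """Answers questions; takes no actions in the world."""
        @abstractmethod
        def answer(self, question: str) -> str: ...

    class Genie(ABC):
        """Carries out one commanded task, then halts and awaits the next."""
        @abstractmethod
        def execute(self, command: str) -> None: ...

    class Sovereign(ABC):
        """Pursues a standing goal open-endedly, choosing its own subgoals."""
        @abstractmethod
        def pursue(self, goal: str) -> None: ...

Each step down the list cedes more oversight: an oracle acts on the world only through its answers, a genie returns control to the operator after every task, and a sovereign never returns control at all.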

Additionally, many experts believe that ASI could help humans become immortal: the machines would find cures for the deadliest diseases, solve environmental destruction, end hunger, and combine with biotechnology to create anti-aging solutions that prevent people from dying. Furthermore, ASI could lead the human race to live in a manner that is experientially better than its current state. Bostrom states that the implementation of a superintelligence would help humans generate opportunities to increase their utility through intellectual and emotional avenues, creating a world much more appealing than the current one. With the assistance of a superintelligence, humans could devote their lives to more enjoyable activities, like game-playing, developing human relationships, and living closer to their ideals.[citation needed]

Ethical Implications

First Mover Advantages

A future after an artificial superintelligence has undergone an intelligence explosion is known as a "post-singularity" future, and by its nature may be impossible to predict. Bostrom claims that as machines get smarter, they do not just score better on intelligence exams; they grow more capable of accomplishing their goals by exerting their will upon the world. Bostrom breaks these capabilities into several categories in which artificial superintelligences may be significantly better than humans, and therefore better able to accomplish their goals than any human. These include cognitive self-improvement, which would allow the agent to make itself even more intelligent; strategizing, which would allow it to better achieve distant goals and overcome intelligent opposition; social manipulation, which would allow it to enlist human assistance and persuade states and other organizational actors; technological research, which would allow it to explore and realize alternative strategies; and economic productivity, which would allow it to acquire more resources to further its goals.[3]

As a result, an artificial superintelligence would be more capable of accomplishing its goals than any human, so whoever dictated those goals would have a possibly insurmountable first-mover advantage. Furthermore, such an artificial superintelligence could be used to suppress the development of any competing artificial superintelligences, since these would likely compete with the original for the resources needed to accomplish its goals. As such, the race to develop the first artificial superintelligence is in fact a race for a decisive and possibly permanent strategic advantage.[3] Groups competing for this title will likely include governments, tech firms, and black-market groups.[citation needed] Which group succeeds first could prove enormously consequential, especially if the first artificial superintelligence is developed by an ethically unscrupulous group, in which case ASI could be harmful.

Applied Ethics

Additionally, if ASI can help humans become immortal, is that ethical? In a sense, ASI and humans would be playing the hand of God, an idea that leaves people divided. There are also serious implications if people do become immortal: in a world with a birth rate but no death rate, the impact on living conditions and on the species around us could be severe. Could this lead to overpopulation, or would ASI provide a solution to it? What if ASI concludes that certain humans or species pose a threat to society as a whole, and its solution is to eliminate them? What if we do not like the answers that ASI has for humanity?

Bias and AI Risk

ASI could also face issues with bias. Because any ASI would initially need to be programmed by humans, it is extremely unlikely that one would be created without some level of bias. Almost all human records, including medical, housing, criminal, historical, and educational records, carry some degree of bias against minorities. This bias is the product of human flaws and failures, and the question remains whether an ASI would perpetuate these shortcomings or fix humanity's mistakes. As stated above, experts do not know what to expect from an ASI. Nonetheless, humans will supply it with a vast library of bias-filled information from both the present and the past, and while we cannot know how an ASI would act on this knowledge, the data itself could motivate biased actions.
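
The mechanism is simple enough to demonstrate with a toy simulation. In the hypothetical sketch below, all groups, thresholds, and data are invented for illustration: a past decision process applies a stricter threshold to one group, and any model trained to imitate those historical labels inherits the same disparity.

    # Toy illustration with invented data: a decision process that was
    # biased in the past produces biased training labels, which any
    # imitative model would reproduce.
    import random

    random.seed(0)

    def historical_decision(group: str, merit: float) -> int:
        """Biased past process: group B needs a higher merit score."""
        threshold = 0.5 if group == "A" else 0.7
        return int(merit > threshold)

    # Hypothetical records: (group, merit score drawn uniformly from [0, 1]).
    records = [(g, random.random()) for g in random.choices("AB", k=10_000)]
    labels = [historical_decision(g, m) for g, m in records]

    # Despite identical merit distributions, the two groups end up with
    # different approval rates; the bias is now in the data itself.
    for group in "AB":
        outcomes = [label for (g, _), label in zip(records, labels) if g == group]
        print(f"group {group}: approval rate {sum(outcomes) / len(outcomes):.2f}")

Running the sketch shows roughly a 50% approval rate for group A and 30% for group B even though merit is distributed identically; a system that learns from such records treats the disparity as ground truth.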

Media

Lucy (film)

Lucy is a 2014 science-fiction drama film starring Scarlett Johansson. The plot follows Lucy, a woman who gains enhanced mental and physical abilities, such as telepathy, telekinesis, and eventually time travel, after exposure to a cognitive enhancement drug. To prevent her body from disintegrating through cellular degeneration, Lucy must keep taking more of the drug, and each additional dose increases her cerebral capacity further beyond that of a normal human being. Her emotions dull, and she grows increasingly stoic and robotic. Her body begins to change into a black, nanomachine-like substance that spreads over the computers and electronic objects in her lab, until she transforms into a complex supercomputer: an all-knowing entity far beyond the intellectual capacity of any human being. Upon reaching 100% of her cerebral capacity, she transcends this plane of existence and enters the spacetime continuum, leaving behind an advanced flash drive so that humans may learn from all of her knowledge and insight about the universe.[5]

The story of Lucy can be likened to the concept of artificial superintelligence. Lucy is transformed into an all-knowing supercomputer with intelligence far greater than any human's. Although she is not artificial but rather a superintelligent human, she gains the ability to devise solutions to problems that are unfathomable to the human mind. In a sense, Lucy loses her humanity and evolves into a machine with an intellect that surpasses that of any human.

References

  1. Müller, Vincent C.; Bostrom, Nick (2016). "Future Progress in Artificial Intelligence: A Survey of Expert Opinion". Fundamental Issues of Artificial Intelligence. 376: 555-572. doi:10.1007/978-3-319-26485-1_33. Retrieved April 27, 2019.
  2. Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine". Advances in Computers. 6: 31-88. doi:10.1016/S0065-2458(08)60418-0. ISBN 9780120121069. Retrieved April 27, 2019.
  3. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0199678112.
  4. Yudkowsky, Eliezer (1996). "2.1: Smarter than We Are". Staring into the Singularity.
  5. Lucy. IMDb. https://www.imdb.com/title/tt2872732/. Retrieved April 20, 2019.