Artificial SuperIntelligence


An artificial superintelligence is a hypothetical form of artificial general intelligence that would greatly exceed human performance in all cognitive domains. It is distinct from currently existing narrow artificial intelligences, which demonstrate superhuman performance only in a particular domain, such as IBM Watson's ability to answer questions or AlphaGo's ability to play Go. Philosophers speculate about whether an artificial superintelligence would be conscious and have subjective experiences; unlike narrow artificial intelligence, any artificial general intelligence, including a superintelligence, would be able to pass the Turing Test. Ethical issues related to artificial superintelligence include the moral weight of artificial agents, existential risk, and problems of bias and value alignment.

Timeline

Experts are divided on the timeline for the development of artificial superintelligence. A 2013 poll of AI researchers found a median prediction of a 50% chance that artificial general intelligence would be developed by 2050, and a 90% chance that it would be developed by 2075. The median estimate of the time from artificial general intelligence to artificial superintelligence was 30 years.[1]

Opinions regarding the time from the development of artificial general intelligence to artificial superintelligence fall roughly into two camps: soft takeoff and hard takeoff. Soft takeoff posits that a great deal of time will pass between the development of artificial general intelligence and artificial superintelligence, on the order of years or decades. Hard takeoff, on the other hand, suggests that artificial general intelligence will develop into superintelligence rapidly, on the order of months, days, or even minutes. This is generally considered possible through recursive self-improvement, first conceived of by mathematician I. J. Good.[2] As Good and others predict, an artificial general intelligence with expert-level performance in the domain of programming might be able to modify its own source code, producing a version of itself that is slightly smarter and more capable. It could then self-modify further, undergoing an intelligence explosion in which each iteration becomes more and more intelligent, until it reaches the level of superintelligence.
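The dynamics of such an intelligence explosion can be illustrated with a toy numerical sketch (purely hypothetical; the growth parameters below are arbitrary assumptions, not predictions). If each round of self-modification yields an improvement whose size scales with the agent's current capability, the gains compound, and the difference between a soft and a hard takeoff reduces to how strongly improvements feed back into further improvements.

  # Toy model of recursive self-improvement (illustrative sketch only;
  # the parameters are arbitrary assumptions, not empirical estimates).
  def generations_to_superintelligence(rate, threshold=1000.0, max_generations=10000):
      """Count self-modification cycles needed to climb from roughly
      human-level capability (1.0) to an arbitrary 'superintelligent'
      threshold, assuming each cycle's improvement scales with the
      agent's current capability."""
      capability = 1.0
      for generation in range(1, max_generations + 1):
          capability *= 1 + rate * capability
          if capability >= threshold:
              return generation
      return None  # never reaches the threshold within the horizon

  print(generations_to_superintelligence(rate=0.01))  # gradual, "soft" climb
  print(generations_to_superintelligence(rate=0.5))   # runaway "hard takeoff"

Under these made-up assumptions, a weak feedback coefficient still reaches the threshold but only after many cycles, while a strong coefficient produces the near-instant runaway associated with a hard takeoff.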

Applications

Philosopher Nick Bostrom claims that an artificial superintelligence could be designed for three different types of use. The first is as a question-answering service, which Bostrom refers to as an "oracle". This service would be similar to Google, but capable of answering far more complicated questions and drawing on a wider set of knowledge domains than mere information retrieval. Questions that humans could not easily answer, such as difficult mathematical proofs or scientific research questions, statements of fact like "Is there life outside our galaxy?", or open-ended philosophical or technical design problems, might be significantly easier for an artificial superintelligence.[3]

The second class consists of agents designed to carry out a specific task and then wait for the next one, which Bostrom calls "genies". Unlike modern computers, which must have their instructions explicitly formulated as code and carry out only what the user tells them to, an artificial superintelligence could receive instructions in abstract natural language and pursue the task in a goal-oriented manner, rather than according to a preconfigured procedure.[3] Examples of such tasks might include "use stem cells to cure cancer" or "design a building to these specifications."

The third type consists of agents with a particular goal, which they pursue independently through strategic action. Bostrom classifies these agents as "sovereigns". Such an agent would have significant autonomy and the ability to acquire and apply resources toward its goals and subgoals without human intervention.[3] These goals could be wide-reaching and ambitious, like "grow GDP by 10% every year" or "maximize human flourishing". AI researcher Eliezer Yudkowsky states that "There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards in intelligence, and some problems will suddenly move from 'impossible' to 'obvious.' Move a substantial degree upwards, and all of them will become obvious."[4]

Additionally, many experts believe that artificial superintelligence (ASI) could help humans become effectively immortal. Such machines might find cures for the deadliest diseases, solve environmental destruction, help end hunger, and combine with biotechnology to create anti-aging treatments that prevent people from dying. Furthermore, ASI could lead the human race to live in a manner that is experientially better than its current state. Bostrom states that the implementation of a superintelligence would help humans generate opportunities to increase their utility through intellectual and emotional avenues, producing a world far more appealing than the current one. With the assistance of a superintelligence, humans could devote their lives to more enjoyable activities, like game-playing, developing human relationships, and living closer to their ideals [citation needed].

Ethical Implications

First Mover Advantages

A future after an artificial superintelligence has undergone an intelligence explosion is known as a "post-singularity" future, and by its nature may be impossible to predict. Bostrom claims that as machines get smarter, they don't just score better on intelligence exams; they grow more capable of accomplishing their goals by exerting their will upon the world. Bostrom breaks these capabilities into several categories in which an artificial superintelligence may be significantly better than any human, and therefore better able to accomplish its goals. These include cognitive self-improvement, which would allow the agent to make itself even more intelligent; strategizing, which would allow it to better achieve distant goals and overcome intelligent opposition; social manipulation, which would allow it to enlist human assistance and persuade states and other organizational actors; technological research, which would allow it to explore and realize alternative strategies; and economic productivity, which would allow it to acquire more resources to further its goals.[3]

As a result, an artificial superintelligence would be more capable of accomplishing its goals than any human, so whoever dictated those goals would have a possibly insurmountable first-mover advantage. Furthermore, this artificial superintelligence could also be used to suppress the development of any competing artificial superintelligences, as these would likely compete with the original for the resources needed to accomplish its goals. As such, the race to develop the first artificial superintelligence is in fact a race for total influential superiority.[3] Groups competing for this title will likely include governments, tech firms, and black-market groups [citation needed]. The outcome could prove highly consequential depending on who solves the problem first; an artificial superintelligence developed by an ethically unscrupulous group could be deeply harmful.

Applied Ethics

Additionally, if ASI can help humans become immortal, is this ethical? In a sense, ASI and humans would be playing the hand of God, an idea that leaves people divided. There are also serious implications if people become immortal. In a world with a birth rate but no death rate, there could be serious impacts on living conditions and on the other species around us. Could this lead to overpopulation? Or would ASI provide a solution to this? What if ASI concludes that certain humans or species pose a threat to society as a whole, and its solution is to eliminate them? What if we don't like the answers that ASI has for humanity?

Bias and AI Risk

An artificial superintelligence could also face issues with bias. Because any artificial superintelligence would initially need to be programmed by humans, it is extremely unlikely that one would be created without some level of bias. Almost all human records, including medical, housing, criminal, historical, and educational records, have some degree of bias against minorities [citation needed]. This bias stems from human flaws and failures, and the question remains whether an artificial superintelligence would act to further these shortcomings or to fix humanity's mistakes. As stated, experts don't know what to expect from an artificial superintelligence. Nonetheless, humans will endow any future artificial superintelligence with a vast library of bias-laden information from both the present and the past. While we don't know how an artificial superintelligence would act on this knowledge, it could motivate biased actions.

The biggest source of bias in the actualization of artificial superintelligence, and one with potentially existential consequences for humanity, is the specific nature of a superintelligent agent's goal. In principle, this bias is desirable: the artificial superintelligence must have some goal to pursue, or neither it nor we could put its superintelligence to any purpose. Furthermore, this bias is also necessary: by the orthogonality thesis, "intelligence and final goals are orthogonal axes along which possible agents may freely vary".[5] Since the selection of a goal is arbitrary at any level of intelligence, a human designer must ultimately make a biased decision about what goal the artificial superintelligence will pursue. For several important reasons, this decision is supremely important from an ethical perspective, and could dictate the fate of the entire human species.

First, as mentioned previously, an artificial superintelligence's ability to pursue its goals would be unmatched by any human, just as humanity's ability to pursue its goals cannot be thwarted by an ant or a beetle. If the artificial superintelligence's goal were actively harmful to humans, as with a military agent, the results for human life would be disastrous. However, an artificial superintelligence's goal need not be explicitly malicious to have similarly catastrophic effects on human life and wellbeing. Even an agent with a goal completely neutral toward humans poses an existential threat to humanity. This is due to instrumental convergence, the tendency of goal-seeking agents to converge on specific subgoals regardless of the nature of their terminal goals. Computer scientist Steve Omohundro describes these as self-preservation, meaning that an agent will attempt to secure its ability to continue existing, so as to ensure its goals are accomplished; goal-content integrity, meaning that an agent will protect itself against having its goals changed; resource acquisition, meaning that an agent will seek out more resources to better accomplish its goals; cognitive enhancement, meaning that an agent will attempt to improve itself to better accomplish its goals; and technological perfection, meaning that an agent will research new technology to accomplish its goals more effectively and efficiently.[6][3]
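The interaction between the orthogonality thesis and these convergent instrumental goals can be made concrete with a small hypothetical sketch (a toy decision rule with made-up numbers, not a model of any real system). The terminal goal is a free parameter that the planning core never inspects, yet agents with entirely unrelated terminal goals still rank resource acquisition above working directly on the goal, because controlling more resources raises the expected future achievement of almost any goal.

  # Toy illustration of instrumental convergence (a deliberately simplified
  # sketch with made-up numbers, not a real planning algorithm).
  # Hypothetical effects of each action on the agent's resources, odds of
  # survival, and direct progress toward its terminal goal.
  ACTIONS = {
      "acquire_resources": {"resources": 10, "survival": 0.0, "progress": 0},
      "protect_self":      {"resources": 0,  "survival": 0.2, "progress": 0},
      "work_on_goal":      {"resources": -1, "survival": 0.0, "progress": 1},
      "do_nothing":        {"resources": 0,  "survival": 0.0, "progress": 0},
  }

  def expected_value(action, state):
      """Expected long-run goal achievement: direct progress now, plus future
      progress assumed proportional to surviving while controlling resources."""
      effect = ACTIONS[action]
      resources = state["resources"] + effect["resources"]
      survival = min(1.0, state["survival"] + effect["survival"])
      progress = state["progress"] + effect["progress"]
      return progress + 0.5 * survival * resources

  def best_action(state):
      return max(ACTIONS, key=lambda a: expected_value(a, state))

  # Per the orthogonality thesis, the terminal goal is a free parameter the
  # planner above never inspects; agents with unrelated goals still converge.
  for goal in ["maximize_paperclips", "maximize_stamps"]:
      state = {"resources": 5, "survival": 0.5, "progress": 0}
      print(goal, "->", best_action(state))  # both print "acquire_resources"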

An artificial agent that is neutral toward humans, or doesn't consider them an important part of its goals, will very likely still pose an existential threat to humans because of these instrumental goals. For example, an artificial superintelligence might launch a preemptive first strike against humans, calculating that we might threaten its future self-preservation or try to change its goals to be more human-friendly. Alternatively, consider Bostrom's thought experiment regarding an artificial superintelligence whose only goal is the production of more paperclips. Because the superintelligence's goal takes no account of humans, it has no qualms about breaking down first the entire Earth, and then the rest of the universe, into raw materials for more paperclips.[3] Consider the analogous case of a construction project that destroys an anthill: it's not that the workers hate the ants, they just don't care about them, and the ants are in the way of accomplishing a human goal. As Yudkowsky puts it, "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."[7]

On the other hand, Bostrom posits a future in which an artificial superintelligence does have a human-aligned goal, a so-called Friendly AI, and is able to assist humanity in claiming its "cosmic endowment." In such a scenario, human flourishing could spread across the cosmos, reaching between 6×10^18 and 2×10^20 stars with relativistic probes before cosmic expansion puts the rest of the universe beyond our reach, providing habitats and resources for between 10^35 and 10^54 human beings. What's at stake in choosing the correct goal for an artificial superintelligence is not merely the extinction of the human species, but the potential happiness of up to 10^54 human lives spread among the stars.[3]

Media

Lucy (film)

Lucy is a 2014 science-fiction film starring Scarlett Johansson. The plot follows Lucy, a woman who begins to gain enhanced mental and physical abilities, such as telepathy and telekinesis, after exposure to a cognitive-enhancement drug. To prevent her body from disintegrating due to cellular degeneration, Lucy must keep taking more of the drug. The additional doses increase Lucy's cerebral capacity well beyond that of a normal human being, eventually granting her the ability to manipulate time. Her emotions are dulled and she grows more stoic and robotic. Her body begins to change into a black, nanomachine-like substance that spreads over the computers and electronic equipment in her lab. Eventually, Lucy transforms into a complex supercomputer and becomes an all-knowing entity, far beyond the intellectual capacity of any human being. When she reaches 100% of her cerebral capacity, she transcends this plane of existence and enters the spacetime continuum, leaving behind a flash drive containing her knowledge so that humans may learn from her insight about the universe.[8]

The story of Lucy can be likened to the concept of artificial superintelligence. Lucy is transformed into an all-knowing supercomputer with intelligence far greater than any human's. Although she is not artificial, but rather a superintelligent human, she gains the ability to devise solutions to problems that are unfathomable to the human mind. In a sense, Lucy loses her humanity and evolves into a machine with an intellect beyond that of any person.

References

  1. Müller, Vincent C.; Bostrom, Nick (2016). "Future Progress in Artificial Intelligence: A Survey of Expert Opinion". Fundamental Issues of Artificial Intelligence. 376: 555-572. doi:10.1007/978-3-319-26485-1_33. Retrieved April 27, 2019.
  2. Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine". Advances in Computers. 6: 31-88. doi:10.1016/S0065-2458(08)60418-0. ISBN 9780120121069. Retrieved April 27, 2019.
  3. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0199678112.
  4. Yudkowsky, Eliezer (1996). "2.1: Smarter than We Are". Staring into the Singularity.
  5. Armstrong, Stuart (January 2013). "General Purpose Intelligence: Arguing the Orthogonality Thesis". Analysis and Metaphysics. 12: 68-84. Retrieved April 27, 2019.
  6. Omohundro, Stephen M. (2007). "The Nature of Self-Improving Artificial Intelligence". Singularity Summit 2007. San Francisco. Retrieved April 27, 2019.
  7. Yudkowsky, Eliezer (2008). "Artificial intelligence as a positive and negative factor in global risk." Global Catastrophic Risks. Oxford University Press. pp. 303-333. Retrieved April 27, 2019.
  8. "Lucy". IMDb. https://www.imdb.com/title/tt2872732/. Retrieved April 20, 2019.