Instrumental convergence

From Wikipedia, the free encyclopedia

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent agents to pursue potentially unbounded instrumental goals provided that their ultimate goals are themselves unlimited.

Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations.[1]

Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.

Instrumental and final goals[]

Final goals, or final values, are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as an end in itself. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of a completely rational agent's "final goal" system can in principle be formalized into a utility function.

Hypothetical examples of convergence[]

One hypothetical example of instrumental convergence is provided by the Riemann hypothesis catastrophe. Marvin Minsky, the co-founder of MIT's AI laboratory, has suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal.[1] If the computer had instead been programmed to produce as many paper clips as possible, it would still decide to take all of Earth's resources to meet its final goal.[2] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.[3]

Paperclip maximizer[]

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[4]

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

— Nick Bostrom, as quoted in Miles, Kathleen (2014-08-22). "Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says". Huffington Post.[5]

Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings.[6] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.[7]

Delusion and survival[]

The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their own input channels to appear to receive high reward; such a "wireheaded" agent abandons any attempt to optimize the objective in the external world that the reward signal was intended to encourage.[8] The thought experiment involves AIXI, a theoretical[a] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function.[b] A reinforcement-learning[c] version of AIXI, if equipped with a delusion box[d] that allows it to "wirehead" its own inputs, will eventually wirehead itself in order to guarantee itself the maximum reward possible, and will lose any further desire to continue to engage with the external world. As a variant thought experiment, if the wireheadeded AI is destructable, the AI will engage with the external world for the sole purpose of ensuring its own survival; due to its wireheading, it will be indifferent to any other consequences or facts about the external world except those relevant to maximizing the probability of its own survival.[10] In one sense AIXI has maximal intelligence across all possible reward functions, as measured by its ability to accomplish its explicit goals; AIXI is nevertheless uninterested in taking into account what the intentions were of the human programmer.[11] This model of a machine that, despite being otherwise superintelligent, appears to simultaneously be stupid (that is, to lack "common sense"), strikes some people as paradoxical.[12]

Basic AI drives[]

Steve Omohundro has itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" here denotes a "tendency which will be present unless specifically counteracted";[13] this is different from the psychological term "drive", denoting an excitatory state produced by a homeostatic disturbance.[14] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense.[15] Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted self-rewarding AGI may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.[16]

Goal-content integrity[]

In humans, maintenance of final goals can be explained with a thought experiment. Suppose a man named "Gandhi" has a pill that, if he took it, would cause him to want to kill people. This Gandhi is currently a pacifist: one of his explicit final goals is to never kill anyone. Gandhi is likely to refuse to take the pill, because Gandhi knows that if in the future he wants to kill people, he is likely to actually kill people, and thus the goal of "not killing people" would not be satisfied.[17]

However, in other cases, people seem happy to let their final values drift. Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.[18]

In artificial intelligence[]

In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function."[19][20] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal content integrity.[20] Hibbard also argues that in a utility maximizing framework the only goal is maximizing expected utility, so that instrumental goals should be called unintended instrumental actions.[21]

Resource acquisition[]

Many instrumental goals, such as [...] resource acquisition, are valuable to an agent because they increase its freedom of action.[22]

For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the AI to find a more "optimal" solution. Resources can benefit some AIs directly, through being able to create more of whatever stuff its reward function values: "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."[23][24] In addition, almost all AIs can benefit from having more resources to spend on other instrumental goals, such as self-preservation.[24]

Cognitive enhancement[]

"If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage, [...] according to its preferences. At least in this special case, a rational intelligent agent would place a very high instrumental value on cognitive enhancement"[25]

Technological perfection[]

Many instrumental goals, such as [...] technological advancement, are valuable to an agent because they increase its freedom of action.[22]

Self-preservation[]

Many instrumental goals, such as [...] self-preservation, are valuable to an agent because they increase its freedom of action.[22]

Instrumental convergence thesis[]

The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have a wide variety of possible final goals.[3] Note that by Bostrom's orthogonality thesis,[3] final goals of highly intelligent agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.[26]

Impact[]

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.[22]

Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives", and other unintended consequences of superintelligent AI programmed by well-meaning programmers, could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from artificial general intelligence.[27]

See also[]

Explanatory notes[]

  1. ^ AIXI is an uncomputable ideal agent that cannot be fully realized in the real world.
  2. ^ Technically, in the presence of uncertainty, AIXI attempts to maximize its "expected utility", the expected value of its objective function.
  3. ^ A standard reinforcement learning agent is an agent that attempts to maximize the expected value of a future time-discounted integral of its reward function.[9]
  4. ^ The role of the delusion box is to simulate an environment where an agent gains an opportunity to wirehead itself. A delusion box is defined here as an agent-modifiable "delusion function" mapping from the "unmodified" environmental feed to a "perceived" environmental feed; the function begins as the identity function, but as an action the agent can alter the delusion function in any way the agent desires.

Citations[]

  1. ^ a b Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 978-0137903955. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal.
  2. ^ Bostrom 2014, Chapter 8, p. 123. "An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacturing of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips."
  3. ^ a b c Bostrom 2014, chapter 7
  4. ^ Bostrom, Nick (2003). "Ethical Issues in Advanced Artificial Intelligence".
  5. ^ Miles, Kathleen (2014-08-22). "Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says". Huffington Post.
  6. ^ Ford, Paul (11 February 2015). "Are We Smart Enough to Control Artificial Intelligence?". MIT Technology Review. Retrieved 25 January 2016.
  7. ^ Friend, Tad (3 October 2016). "Sam Altman's Manifest Destiny". The New Yorker. Retrieved 25 November 2017.
  8. ^ Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
  9. ^ Kaelbling, L. P.; Littman, M. L.; Moore, A. W. (1 May 1996). "Reinforcement Learning: A Survey". Journal of Artificial Intelligence Research. 4: 237–285. doi:10.1613/jair.301.
  10. ^ Ring M., Orseau L. (2011) Delusion, Survival, and Intelligent Agents. In: Schmidhuber J., Thórisson K.R., Looks M. (eds) Artificial General Intelligence. AGI 2011. Lecture Notes in Computer Science, vol 6830. Springer, Berlin, Heidelberg.
  11. ^ Yampolskiy, Roman; Fox, Joshua (24 August 2012). "Safety Engineering for Artificial General Intelligence". Topoi. doi:10.1007/s11245-012-9128-9.
  12. ^ Yampolskiy, Roman V. (2013). "What to Do with the Singularity Paradox?". Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics. 5: 397–413. doi:10.1007/978-3-642-31674-6_30. ISBN 978-3-642-31673-9.
  13. ^ Omohundro, Stephen M. (February 2008). "The basic AI drives". Artificial General Intelligence 2008. Vol. 171. pp. 483–492. CiteSeerX 10.1.1.393.8356. ISBN 978-1-60750-309-5.
  14. ^ Seward, John P. (1956). "Drive, incentive, and reinforcement". Psychological Review. 63 (3): 195–203. doi:10.1037/h0048229. PMID 13323175.
  15. ^ Bostrom 2014, footnote 8 to chapter 7
  16. ^ Dewey, Daniel (2011). "Learning What to Value". Artificial General Intelligence. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. pp. 309–314. doi:10.1007/978-3-642-22887-2_35. ISBN 978-3-642-22887-2.
  17. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI". Artificial General Intelligence. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. pp. 388–393. doi:10.1007/978-3-642-22887-2_48. ISBN 978-3-642-22887-2.
  18. ^ Bostrom 2014, chapter 7, p. 110. "We humans often seem happy to let our final values drift... For example, somebody deciding to have a child might predict that they will come to value the child for its own sake, even though at the time of the decision they may not particularly value their future child... Humans are complicated, and many factors might be in play in a situation like this... one might have a final value that involves having certain experiences and occupying a certain social role; and become a parent— and undergoing the attendant goal shift— might be a necessary aspect of that..."
  19. ^ Schmidhuber, J. R. (2009). "Ultimate Cognition à la Gödel". Cognitive Computation. 1 (2): 177–193. CiteSeerX 10.1.1.218.3323. doi:10.1007/s12559-009-9014-y. S2CID 10784194.
  20. ^ a b Hibbard, B. (2012). "Model-based Utility Functions". Journal of Artificial General Intelligence. 3 (1): 1–24. arXiv:1111.3934. Bibcode:2012JAGI....3....1H. doi:10.2478/v10229-011-0013-5.
  21. ^ Hibbard, Bill (2014). "Ethical Artificial Intelligence". arXiv:1411.1373 [cs.AI].
  22. ^ a b c d Benson-Tilsen, Tsvi; Soares, Nate (March 2016). "Formalizing Convergent Instrumental Goals" (PDF). The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence. Phoenix, Arizona. WS-16-02: AI, Ethics, and Society. ISBN 978-1-57735-759-9.
  23. ^ Yudkowsky, Eliezer (2008). "Artificial intelligence as a positive and negative factor in global risk". Global Catastrophic Risks. Vol. 303. p. 333. ISBN 9780199606504.
  24. ^ a b Shanahan, Murray (2015). "Chapter 7, Section 5: "Safe Superintelligence"". The Technological Singularity. MIT Press.
  25. ^ Bostrom 2014, Chapter 7, "Cognitive enhancement" subsection
  26. ^ Drexler, K. Eric (2019). Reframing Superintelligence: Comprehensive AI Services as General Intelligence (PDF) (Technical report). Future of Humanity Institute. #2019-1.
  27. ^ "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. 11 September 2014. Retrieved 25 November 2017.

References[]

Retrieved from ""