Catching Treacherous Turn: A Model of the Multilevel AI Boxing

Abstract

With the fast pace of AI development, the problem of preventing its global catastrophic risks becomes pressing, yet no satisfactory solution has been found. Among several possibilities, the confinement of an AI in a box is generally considered a weak solution to AI safety. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional safety measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field of AI boxing. We model the confinement on the principles used in the safety engineering of nuclear power plants. We show that AI confinement should be implemented as several levels of defense: 1) designing the AI in a fail-safe manner; 2) limiting its capabilities, preventing self-improvement, and triggering circuit breakers at the first sign of a treacherous turn; 3) isolating it from the outside world; and, as a last resort, 4) outside measures aimed at stopping an AI in the wild. We demonstrate that a substantial number of mutually independent measures (more than 50 ideas are listed in the article) could provide a relatively high probability of containing a human-level AI, but may not be sufficient to prevent the runaway of a superintelligent AI. Thus, these measures will work only if they are used to prevent the creation of a superintelligent AI, not to contain a superintelligence. We suggest that there could be a safe operation threshold at which an AI is useful but unable to hack the containment system from the inside, in the same way that a safe level of chain reaction is maintained inside a nuclear power plant. However, a failure of the confinement is ultimately inevitable, so we should use the full AGI only a limited number of times ("AI-ticks").
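
The multilevel scheme sketched in the abstract lends itself to a toy illustration. As rough intuition for the claim about mutually independent measures: if each of n independent measures fails with probability p, all of them fail together with probability p^n, so 50 measures that each fail half the time would all be breached with probability 0.5^50 ≈ 10^-15; the hard part, of course, is achieving genuine independence against a capable adversary. The following Python sketch is purely illustrative and not from the paper (ContainedAI, ContainmentBreach, and the monitor functions are hypothetical names): it wraps an AI's query interface in a circuit breaker that trips on a suspected treacherous turn, inside a hard budget of uses, the "AI-ticks" of the abstract.

```python
# Illustrative sketch only: a layered "AI box" with a circuit breaker
# and a hard budget of uses ("AI-ticks"). All names are hypothetical
# and do not come from the paper.

class ContainmentBreach(Exception):
    """Raised when a defense level refuses to let a query proceed."""


class ContainedAI:
    def __init__(self, model, monitors, max_ticks):
        self.model = model          # level 1: the (hopefully fail-safe) AI itself
        self.monitors = monitors    # level 2: independent treachery detectors
        self.max_ticks = max_ticks  # last line: limited number of full-AGI uses
        self.ticks_used = 0
        self.tripped = False

    def query(self, prompt):
        # Once the circuit breaker has tripped, the box never reopens.
        if self.tripped:
            raise ContainmentBreach("circuit breaker already tripped")
        # Each use of the full AGI consumes one AI-tick.
        if self.ticks_used >= self.max_ticks:
            raise ContainmentBreach("AI-tick budget exhausted")
        self.ticks_used += 1

        # Level 3 (isolation) is physical/network-level and not representable
        # here; assume self.model runs on an air-gapped host.
        answer = self.model(prompt)

        # Every independent monitor must stay silent; any alarm trips the
        # breaker, which is treated as unrecoverable (fail-safe default).
        for monitor in self.monitors:
            if monitor(prompt, answer):
                self.tripped = True
                raise ContainmentBreach(f"monitor {monitor.__name__} alarmed")
        return answer


# Toy usage: a dummy "AI" and a trivial keyword monitor.
def dummy_model(prompt):
    return "42"

def mentions_escape(prompt, answer):
    return "escape" in answer.lower()

box = ContainedAI(dummy_model, [mentions_escape], max_ticks=3)
print(box.query("What is 6 * 7?"))  # -> "42"; two AI-ticks remain
```

Note the fail-safe default in the sketch: an alarm does not merely reject one answer but permanently disables the box, mirroring the abstract's point that a suspected treacherous turn should halt operation rather than be retried.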

Links

PhilArchive


Similar books and articles

Liberty and Valuing Sentient Life. John Hadley - 2013 - Ethics and the Environment 18 (1):87-103.
Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
Animal Confinement and Use. Robert Streiffer & David Killoren - 2019 - Canadian Journal of Philosophy 49 (1):1-21.
Liberty, Beneficence, and Involuntary Confinement. Joan C. Callahan - 1984 - Journal of Medicine and Philosophy 9 (3):261-294.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.

Analytics

Added to PP
2021-06-21

Downloads
502 (#36,901)

6 months
136 (#27,169)

Citations of this work

No citations found.