On Controllability of Artificial Intelligence

Abstract

The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such a powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference for the topic of uncontrollability.

Links

PhilArchive

Similar books and articles

Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
Intelligence, Artificial and Otherwise. Paul Dumouchel - 2019 - Forum Philosophicum: International Journal for Philosophy 24 (2):241-258.
Risks of artificial intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.

Analytics

Added to PP
2020-07-21

Downloads
2,830 (#2,565)

6 months
123 (#28,301)

Author's Profile

Roman Yampolskiy
University of Louisville
