Catastrophically Dangerous AI is Possible Before 2030

Abstract

In AI safety research, the median timing of AGI arrival is often taken as a reference point; various polls predict it to occur in the middle of the 21st century. For maximum safety, however, we should determine the earliest plausible time of Dangerous AI arrival. Such Dangerous AI could be an AGI capable of acting completely independently in the real world and of winning most real-world conflicts with humans, an AI helping humans to build weapons of mass destruction, or a nation state coupled with an AI-based government system. In this article, I demonstrate that the earliest timing of Dangerous AI, corresponding to a 10 per cent arrival probability, is before 2030. Several partly independent sources of information point the same way: 1. The growth of the hardware available for AI research will make human-brain-equivalents of compute available to AI researchers in the 2020s, fuelled by specialized AI chips, the combination of many chips into one processing unit, and larger research budgets, among other things. 2. Neural network performance and other characteristics, such as the number of parameters, are increasing quickly every year, and extrapolating this tendency suggests roughly human-level performance in a few years, around 2025. 3. Expert polls assign around a 10 per cent probability to the appearance of artificial general intelligence (AGI) within the next decade, that is, before 2030. 4. Hyperbolic growth in different big-history models converges on 2025-2030 (the technological singularity). 5. Anthropic arguments (similar to the Doomsday argument) suggest that qualified observers are most likely to appear near the end of the AI-research epoch, because the number of such observers has grown exponentially. This number doubles every 5-10 years, so we are likely to find ourselves roughly a decade before the end of AI research, which would consequently happen around 2030.
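
The abstract's first and fifth arguments are, at bottom, short exponential-growth calculations, so a minimal numerical sketch may make them concrete. The Python script below is a back-of-the-envelope illustration only, not the paper's method: the starting compute, the brain-equivalent threshold, and the doubling times are placeholder assumptions chosen to show the shape of the extrapolation, not figures taken from the article.

```python
import math

# --- Argument 1: hardware extrapolation --------------------------------
# Assume (for illustration only) that frontier training runs use ~1e23 FLOP
# in 2020, that this doubles every year, and that ~1e25 FLOP counts as a
# "human-brain-equivalent" of compute. Ask when the threshold is crossed.
START_YEAR = 2020
START_FLOP = 1e23        # assumed frontier training compute in 2020
BRAIN_FLOP = 1e25        # assumed brain-equivalent threshold
DOUBLING_YEARS = 1.0     # assumed doubling time of frontier compute

years_needed = DOUBLING_YEARS * math.log2(BRAIN_FLOP / START_FLOP)
print(f"Brain-equivalent compute around {START_YEAR + years_needed:.0f}")
# -> Brain-equivalent compute around 2027 (under these assumptions)

# --- Argument 5: anthropic (Doomsday-style) argument --------------------
# If the number of qualified observers grows as e^(k*t) with doubling time
# T (so k = ln 2 / T), the share of all observers up to the end of the
# epoch who live in its final `window` years is 1 - e^(-k * window).
def fraction_in_final_years(doubling_time: float, window: float) -> float:
    k = math.log(2) / doubling_time
    return 1 - math.exp(-k * window)

for T in (5, 10):
    share = fraction_in_final_years(T, 10)
    print(f"Doubling every {T} y: {share:.0%} of observers in the last decade")
# -> Doubling every 5 y: 75% of observers in the last decade
# -> Doubling every 10 y: 50% of observers in the last decade
```

Under these illustrative numbers, half to three quarters of all observers in the AI-research epoch would live within its final decade, which is the quantitative core of the abstract's anthropic argument; the hardware extrapolation likewise lands in the late 2020s, consistent with the pre-2030 thesis.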

Links

PhilArchive


Similar books and articles

Risks of artificial intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
How AI can be surprisingly dangerous for the philosophy of mathematics— and of science. Walter Carnielli - 2021 - Circumscribere: International Journal for the History of Science 27:1-12.

