First human upload as AI Nanny

Abstract

As there is no visible way to create safe self-improving superintelligence, yet its arrival is looming, we probably need temporary means of preventing its creation. The only way to prevent it is to create a special AI able to control and monitor all places in the world. The idea was suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here ways to create the safest and simplest form of AI that could work as an AI Nanny. Such an AI system would be enough to solve most of the problems we expect AI to solve, including control of robotics and acceleration of medical research, but would present less risk, as it would be less different from humans. As AI police, it would work as an operating system for most computers, producing a world surveillance system able to foresee and stop potential terrorists and bad actors in advance. As uploading technology is lagging, and neuromorphic AI is intrinsically dangerous, the most plausible path to a human-based AI Nanny is either a functional model of the human mind or a group of people empowered by Narrow AI.

Links

PhilArchive

Similar books and articles

Themes and variations in development: Can nanny-bots act like human caregivers? Jean Mercer - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):233-237.
Can you kill a robot nanny?: Ethological approach to the effect of robot caregivers on child development and human evolution. Enikő Kubinyi, P. Pongrácz & Ádám Miklósi - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):214-219.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
Risks of artificial intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).

Analytics

Added to PP
2019-02-19

Downloads
495 (#36,467)

6 months
106 (#36,560)


Citations of this work

No citations found.

References found in this work

No references found.