Two arguments against human-friendly AI

AI and Ethics 1 (1):435-444 (2021)

Abstract

The past few decades have seen a substantial increase in attention to the myriad ethical implications of artificial intelligence. Among the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI), an as-yet hypothetical form of AI able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at least as intelligent as the best of us. This ‘control problem’ has given rise to research into the development of ‘friendly AI’: a highly competent AGI that will benefit, or at the very least not be hostile toward, humans. Though my question concerns AI, ethics, and issues surrounding the value of friendliness, I want to question the pursuit of human-friendly AI (hereafter FAI). In other words, we might ask whether worries regarding harm to humans are sufficient reason to develop FAI rather than an impartially ethical AGI, that is, an AGI designed to take the interests of all moral patients, both human and non-human, into consideration. I argue that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than those prioritizing friendliness to humans above all else.

Links

PhilArchive

Author's Profile

Ken Daley
University of Colorado, Boulder (PhD)
