Computers in control: Rational transfer of authority or irresponsible abdication of autonomy? [Book Review]

Ethics and Information Technology 1 (3):173-184 (1999)

Abstract

To what extent should humans transfer, or abdicate, responsibility to computers? In this paper, I distinguish six different senses of responsible and then consider in which of these senses computers can, and in which they cannot, be said to be responsible for deciding various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater control of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve increased reliability through complex devices of our own design; (ii) even when computers are more reliable than humans, certain tasks (e.g., selecting an appropriate gift for a friend, solving the daily crossword puzzle) are inappropriately performed by anyone (or anything) other than oneself. In critically evaluating these claims, I arrive at three main conclusions: (1) While we ought to correct for many of our shortcomings by availing ourselves of the computer's larger memory, faster processing speed and greater stamina, we are limited by our own finiteness and fallibility (rather than by whatever limitations may be inherent in silicon and metal) in the ability to transcend our own unreliability. Moreover, if we rely on programmed computers to such an extent that we lose touch with the human experience and insight that formed the basis for their design, our fallibility is magnified rather than mitigated. (2) Autonomous moral agents can reasonably defer to greater expertise, whether human or cybernetic. But they cannot reasonably relinquish background-oversight responsibility. They must be prepared, at least periodically, to review whether the expertise to which they defer is indeed functioning as it was authorized to do, and to take steps to revoke that authority, if necessary. (3) Though outcomes matter, it can also matter how they are brought about, and by whom.
Thus, reflecting on how much of our lives should be directed and implemented by computer may be another way of testing any thoroughly end-state or consequentialist conception of the good and decent life. To live with meaning and purpose, we need to actively engage our own faculties and empathetically connect with, and resonate to, others. Hence there is some limit to how much of life can be appropriately lived by anyone (or anything) other than ourselves.

Analytics

Added to PP: 2009-01-28

Citations of this work

Killer robots.Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77.
Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
Computing and moral responsibility.Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.


References found in this work

No references found.
