Interdependence as the key for an ethical artificial autonomy

AI and Society: 1-15 (forthcoming)

Abstract

The autonomy of artificial systems, robotic systems in particular, is currently one of the most debated issues, both from the perspective of technological development and from that of its social impact and ethical repercussions. While theoretical discussions often focus on scenarios far beyond what can plausibly be extrapolated from the current state of the art, the term autonomy itself is still used in a vague or overly general way. This limits the possibility of a precise analysis of such an important issue and often leads to polarized positions. The aim of this paper is to clarify what is meant by artificial autonomy and what prerequisites must be met before this characteristic can be attributed to a robotic system. Starting from some concrete examples, we try to indicate a path towards artificial autonomy that reconciles the advantages of developing adaptive and versatile systems with the management of the inevitable problems that this technology poses in terms of both safety and ethics. Our proposal is that genuine artificial autonomy, especially when expressed in a social context, can only be achieved through interdependence with other social actors: through continuous exchanges and interactions which, while allowing robots to explore the environment, guarantee the emergence of shared practices, behaviors, and ethical principles that could not otherwise be imposed top-down except at the price of giving up that very autonomy.

Links

PhilArchive


Similar books and articles

Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. London: Routledge. pp. 122-137.
Establishing the rules for building trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1: 261-262.
AI Systems and Respect for Human Autonomy. Arto Laitinen & Otto Sahlgren - 2021 - Frontiers in Artificial Intelligence.
Rethinking autonomy. Richard Alterman - 2000 - Minds and Machines 10 (1): 15-30.
Changes of representational AI concepts induced by embodied autonomy. Erich Prem - 2000 - Communication and Cognition-Artificial Intelligence 17 (3-4): 189-208.
Ethical Machines? Ariela Tubert - 2018 - Seattle University Law Review 41 (4).

