Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner?

Philosophy and Technology 36 (3):1-32 (2023)

Abstract

In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are designed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by humans. Consequently, their performances show a touch of action and even autonomy. HMI is therefore often described as a sort of “cooperation” rather than as a mere application of a tool. Some authors even go as far as subsuming cooperation with smart machines under the label of partnership, akin to cooperation between human agents sharing a common goal. In this paper, we explore how far the notions of shared agency and partnership can take us in our understanding of human interaction with smart machines. Discussing different topoi related to partnerships in general, we suggest that different kinds of “partnership,” depending on the form of interaction between agents, need to be kept apart. Building upon these discussions, we propose a tentative taxonomy of different kinds of HMI, distinguishing coordination, collaboration, cooperation, and social partnership.

Links

PhilArchive


Similar books and articles

Machines and the Moral Community. Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
WALL·E and EVE. Timothy Brown - 2019 - In Richard B. Davis (ed.), Disney and Philosophy. Wiley. pp. 129–136.
Nano-enabled AI. J. Storrs Hall - 2006 - International Journal of Applied Philosophy 20 (2):247-261.
A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.

Analytics

Added to PP
2023-08-26


Author Profiles

Sophie Loidolt
Technical University of Darmstadt
Philipp Schmidt
Julius-Maximilians-Universität, Würzburg
