Artificial Intelligence Systems, Responsibility and Agential Self-Awareness

In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25 (2022)

Abstract

This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack the capacity for agential self-awareness, i.e., they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research on the nature of the self. Focusing on a specific account of the self from the phenomenological tradition, this paper suggests that a minimal necessary condition artificial intelligence systems must satisfy in order to be capable of self-awareness is having a minimal self, defined as ‘a sense of ownership’. As this sense of ownership is usually associated with having a living body, one suggestion is that artificial intelligence systems must have similar living bodies in order to have a sense of self. Discussing cases of robotic animals as examples of the possibility of artificial intelligence systems having a sense of self, the paper concludes that having a ‘sense of ownership’, or a sense of self, may be a necessary condition for having responsibility.

Links

PhilArchive


Similar books and articles

Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
Explaining (away) the epistemic condition on moral responsibility. Gunnar Björnsson - 2017 - In Philip Robichaud & Jan Willem Wieland (eds.), Responsibility - The Epistemic Condition. Oxford University Press. pp. 146–162.
Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - forthcoming - In Silja Vöneky, Philipp Kellmeyer, Oliver Müller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
Artificial intelligence—A personal view. David Marr - 1977 - Artificial Intelligence 9 (September):37-48.
Corporations and Non‐Agential Moral Responsibility. James Dempsey - 2013 - Journal of Applied Philosophy 30 (4):334-350.
Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.

Analytics

Added to PP
2022-11-20


Author's Profile

Lydia Farina
Nottingham University
