Artificial Intelligence Systems, Responsibility and Agential Self-Awareness
In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25 (2022)
Abstract
This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack a capacity for agential self-awareness, i.e., they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, then the possibility of responsible artificial intelligence systems cannot be evaluated independently of research on the nature of the self. Focusing on a specific account of the self from the phenomenological tradition, the paper suggests that a minimal necessary condition artificial intelligence systems must satisfy in order to be capable of self-awareness is having a minimal self, defined as a 'sense of ownership'. Since this sense of ownership is usually associated with having a living body, one suggestion is that artificial intelligence systems must have similar living bodies in order to have a sense of self. Discussing cases of robotic animals as examples of how artificial intelligence systems might have a sense of self, the paper concludes that having a 'sense of ownership', or a sense of self, may be a necessary condition for artificial intelligence systems to bear responsibility.
Similar books and articles
Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
Agential Obligation as Non-Agential Personal Obligation plus Agency. Paul McNamara - 2004 - Journal of Applied Logic 2 (1):117-152.
Towards a Middle-Ground Theory of Agency for Artificial Intelligence. Louis Longin - 2020 - In M. Nørskov, J. Seibt & O. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam, Netherlands: pp. 17-26.
From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
Explaining (away) the epistemic condition on moral responsibility. Gunnar Björnsson - 2017 - In Philip Robichaud & Jan Willem Wieland (eds.), Responsibility - The Epistemic Condition. Oxford University Press. pp. 146-162.
Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - forthcoming - In Silja Vöneky, Philipp Kellmeyer, Oliver Müller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
Artificial intelligence: A personal view. David Marr - 1977 - Artificial Intelligence 9 (September):37-48.
Corporations and Non-Agential Moral Responsibility. James Dempsey - 2013 - Journal of Applied Philosophy 30 (4):334-350.
Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.