Group Agency and Artificial Intelligence
Philosophy and Technology (4):1-30 (2021)

Authors
Christian List
Ludwig Maximilians Universität, München
Abstract
The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.
Keywords Group agency, Artificial intelligence, Robots, Autonomous systems, Intentionality, Responsibility gaps, Responsibility, Rights, Moral and legal status, Corporate and AI legal personhood
DOI 10.1007/s13347-021-00454-7


References found in this work

What is It Like to Be a Bat? Thomas Nagel - 1974 - Philosophical Review 83 (October):435-50.
Intention, Plans, and Practical Reason. Michael Bratman - 1987 - Cambridge, MA: Harvard University Press.

View all 71 references

Citations of this work

Group Responsibility. Christian List - forthcoming - In Dana Nelkin & Derk Pereboom (eds.), Oxford Handbook of Moral Responsibility. Oxford: Oxford University Press.
Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.


Similar books and articles

On the Legal Responsibility of Autonomous Machines. Bartosz Brożek & Marek Jakubiec - 2017 - Artificial Intelligence and Law 25 (3):293-304.
Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots. Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
Robots: Ethical by Design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
Artificial Agents Among Us: Should We Recognize Them as Agents Proper? Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.

Analytics

Added to PP index: 2021-08-04

Total views: 229 (#48,034 of 2,498,138)

Recent downloads (6 months): 126 (#5,220 of 2,498,138)
