Abstract
This study concerns the sociotechnical bases of human autonomy. Drawing on the recent AI ethics literature, the philosophical literature on dimensions of autonomy, and independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder it. What emerges is a philosophically motivated picture of autonomy and of the normative requirements that personal autonomy poses in the context of algorithmic systems. Various aspects of sociotechnical systems, ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, must be accounted for in order to get the full picture of the potential effects of AI systems on human autonomy. It is clear how human agents can hinder each other’s autonomy, for example via coercion or manipulation, and how they can respect each other’s autonomy. AI systems can likewise promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view according to which AI systems, while not moral agents or bearers of duties and thus unable literally to respect or disrespect anyone, are nevertheless governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems: the responsible people (designers, users, etc.) have duties and ought-to-do norms that correspond to these ought-to-be norms.
Similar books and articles
Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
Introduction: The Limits of Respect for Autonomy. David G. Kirchhoffer - 2019 - In David G. Kirchhoffer & Bernadette J. Richards (eds.), Beyond Autonomy: Limits and Alternatives to Informed Consent in Research Ethics and Law. Cambridge: Cambridge University Press, pp. 1-14.
Dignity, Being and Becoming in Research Ethics. David G. Kirchhoffer - 2019 - In David G. Kirchhoffer & Bernadette J. Richards (eds.), Beyond Autonomy: Limits and Alternatives to Informed Consent in Research Ethics and Law. Cambridge: Cambridge University Press.
An Intercultural Nursing Perspective on Autonomy. Ingrid Hanssen - 2004 - Nursing Ethics 11 (1):28-41.
Whose life is it anyway? A study in respect for autonomy. M. Norden - 1995 - Journal of Medical Ethics 21 (3):179-183.
Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy. Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
Respect for Autonomy: Its Demands and Limits in Biobanking. [REVIEW] Iain Law - 2011 - Health Care Analysis 19 (3):259-268.
Authenticity and autonomy in deep-brain stimulation. Alistair Wardrope - 2014 - Journal of Medical Ethics 40 (8):563-566.
Establishing the rules for building trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1:261-262.
Developing creativity: Artificial barriers in artificial intelligence. [REVIEW] Kyle E. Jennings - 2010 - Minds and Machines 20 (4):489-501.
Beyond Autonomy: Limits and Alternatives to Informed Consent in Research Ethics and Law. David G. Kirchhoffer & Bernadette J. Richards (eds.) - 2019 - Cambridge: Cambridge University Press.
Robots, Dennett and the autonomous: A terminological investigation. [REVIEW] C. T. A. Schmidt & Felicitas Kraemer - 2006 - Minds and Machines 16 (1):73-80.