Why Digital Assistants Need Your Information to Support Your Autonomy

Philosophy and Technology 34 (4):1687-1705 (2021)

Abstract

This article investigates how human life is conceptualized in the design and use of digital assistants and how this conceptualization feeds back into the life actually lived. It suggests that a specific way of conceptualizing human life, namely as a set of tasks to be optimized, is responsible for the much-criticized information hunger of these digital assistants. The data collection of digital assistants not only raises several privacy issues but also holds the potential to improve people's degree of self-determination, because the optimization model of daily activity is genuinely suited to a certain mode of self-determination, namely the explicit and reflective setting, pursuit, and monitoring of goals. Furthermore, optimization systems' need for the generation and analysis of data overcomes one of the core weaknesses in human capacities for self-determination, namely problems with objective and quantitative self-assessment. It will be argued that critiques according to which digital assistants threaten to reduce their users' autonomy tend to ignore that the risks to autonomy are derivative of potential gains in autonomy. These critiques rest on an overemphasis on a success conception of autonomy. Counter to this conception, being autonomous does not require a choice environment that exclusively supports a person's "true" preferences, but rather the opportunity to engage with external influences, supportive as well as adverse. In conclusion, it will be argued that ethical evaluations of digital assistants should consider the potential gains as well as the potential risks for autonomy caused by their use.

Links

PhilArchive
Similar books and articles

Alexa Does Not Care. Should You? Olga Kudina - 2019 - Glimpse 20:107-115.
The history of digital ethics. Vincent C. Müller - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
Black Boxes and Bias in AI Challenge Autonomy. Craig M. Klugman - 2021 - American Journal of Bioethics 21 (7):33-35.
Digitalization and global ethics. Zonghao Bao & Kun Xiang - 2006 - Ethics and Information Technology 8 (1):41-47.
Autonomy, consent and the law. Sheila McLean - 2010 - New York, N.Y.: Routledge-Cavendish.
Post-mortem privacy and informational self-determination. J. C. Buitelaar - 2017 - Ethics and Information Technology 19 (2):129-142.
Qualities of sharing and their transformations in the digital age. Andreas Wittel - 2011 - International Review of Information Ethics 15 (9):2011.
Taming the Digital Behemoth. Oskar Gruenwald - 2020 - Journal of Interdisciplinary Studies 32 (1-2):1-16.

Author's Profile

Jan-Hendrik Heinrichs
Forschungszentrum Jülich

Citations of this work

No citations found.
