How Values Shape the Machine Learning Opacity Problem

In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation. Routledge. pp. 306-322 (2022)

Abstract

One of the main worries about machine learning model opacity is that we cannot know enough about how a model works to fully understand the decisions it makes. But how much of a problem is model opacity, really? This chapter argues that the problem of machine learning model opacity is entangled with non-epistemic values. The chapter considers three stages of the machine learning modeling process that correspond to understanding phenomena: (i) model acceptance and linking the model to the phenomenon, (ii) explanation, and (iii) attributions of understanding. At each of these stages, non-epistemic values can, in part, determine how much of a problem machine learning model opacity poses.

Links

PhilArchive

External links

  • This entry has no external links.

Similar books and articles

Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Transparent AI: reliabilist and proud. Abhishek Mishra - forthcoming - Journal of Medical Ethics.
AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.

Analytics

Added to PP
2022-11-29

Downloads
59 (#202,933)

Last 6 months
59 (#21,335)


Author's Profile

Emily Sullivan
Eindhoven University of Technology

Citations of this work

No citations found.
