Belief, Values, Bias, and Agency: Development of and Entanglement with "Artificial Intelligence"

Dissertation, Virginia Tech (2022)

Abstract

Contemporary research into the values, biases, and prejudices within “Artificial Intelligence” tends to operate at the intersection of scholarship in computer science and engineering, sociology, philosophy, and science and technology studies (STS). Even so, getting the STEM fields to recognize and accept the importance of certain kinds of knowledge (the social, experiential kinds of knowledge) remains an ongoing struggle. Similarly, religious scholarship is still very often missing from these conversations, because many in the STEM fields and the general public feel that religion and technoscientific investigation are, and should remain, separate fields of inquiry. Here I demonstrate that experiential knowledge and religious, even occult, beliefs are always already embedded within, and crucial to understanding, the sociotechnical imaginaries animating many technologies, particularly in the area of “AI.” In fact, it is precisely the unwillingness of many to confront these facts that allows both for the problems of prejudice embedded in algorithmic systems and for the hype-laden marketing of the corporations and agencies developing them. This same hype then intentionally obfuscates the actions of both the systems and the people who create them, while confounding and oppressing those most often made subject to them. Further, I highlight a crucial continuity between bigotry and systemic social projects (eugenics, transhumanism, and “supercrip” narratives), revealing their foundation in white supremacist, colonialist myths about whose lives, and which kinds of lives, count as “truly human.” I examine how these myths become embedded in the religious practices, technologies, and social frameworks in and out of which “AI” and algorithms are developed, employing a composite theoretical lens built from tools such as intersectionality, ritual theory, intersubjectivity, daemonology, postphenomenology, standpoint epistemology, and more.
This theoretical apparatus recontextualizes our understanding of how mythologies and rituals of professionalization, disciplinarity, and dominant epistemological hierarchies animate concepts such as knowledge formation, expertise, and even what counts as knowledge. I then deploy this recontextualization to suggest remedies for research, for public policy, and for general paths forward in “AI.” By engaging both the magico-religious valences of these systems and the lived experiential expertise of marginalized people, we can better understand these systems and anticipate and curtail their harms.

Links

PhilArchive

Similar books and articles

Knowledge-based systems. Klaus Mainzer - 1990 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 21 (1): 47-74.
Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. London: Routledge. pp. 122-137.
Science, Technology, and Society: A Sociological Approach. Wenda K. Bauchspies - 2006 - Malden, MA: Blackwell. Edited by Jennifer Croissant & Sal P. Restivo.

Author's Profile

Damien P. Williams
University of North Carolina, Charlotte
