Algorithms are not neutral: Bias in collaborative filtering

AI and Ethics 2 (4):763-770 (2022)

Abstract

When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers and scholars who study the social impact of technology. There has been a tendency to focus on examples where the data set used to train the AI is biased, and a denial on the part of some researchers that algorithms can also be biased. Here we illustrate the point that algorithms themselves can be the source of bias with the example of collaborative filtering algorithms for recommendation and search. These algorithms are known to suffer from cold-start, popularity, and homogenizing biases, among others. While these are typically described as statistical biases rather than biases of moral import, in this paper we show that these statistical biases can lead directly to discriminatory outcomes. The intuitive idea is that data points on the margins of distributions of human data tend to correspond to marginalized people. The statistical biases described here have the effect of further marginalizing the already marginal. Biased algorithms for applications such as media recommendations can have significant impact on individuals’ and communities’ access to information and culturally relevant resources. This source of bias warrants serious attention given the ubiquity of algorithmic decision-making.
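The popularity and cold-start biases named in the abstract can be illustrated with a minimal, hypothetical sketch of a neighborhood-based collaborative filter (the data, users, and function names below are invented for illustration, not taken from the paper): items are recommended to a user in proportion to how strongly similar users liked them, so an item liked only by one atypical user is never surfaced, and that user in turn receives no recommendations at all.

```python
from collections import defaultdict

# Hypothetical "liked" data: three mainstream users who overlap on
# popular items A, B, C, and one marginal user whose only taste (N)
# lies at the edge of the distribution.
likes = {
    "u1": {"A", "B"},
    "u2": {"B", "C"},
    "u3": {"A", "B", "C"},
    "u4": {"N"},  # marginal user: likes only the niche item
}

def jaccard(a, b):
    """Overlap between two users' liked-item sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, likes):
    """Rank unseen items by similarity-weighted votes from other users."""
    scores = defaultdict(float)
    for other, items in likes.items():
        if other == target:
            continue
        sim = jaccard(likes[target], items)
        if sim == 0:
            continue  # users with no overlap contribute nothing
        for item in items - likes[target]:
            scores[item] += sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u1", likes))  # ['C'] -- the popular item wins
print(recommend("u4", likes))  # []    -- the marginal user gets nothing
```

Note that N is never recommended to anyone: no mainstream user overlaps with u4, so the niche item's score is always zero. This low co-occurrence at the margins is the kind of statistical mechanism the paper argues can translate into discriminatory outcomes when marginal data points correspond to marginalized people.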

Links

PhilArchive




Similar books and articles

Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
Bias in algorithmic filtering and personalization. Engin Bozdag - 2013 - Ethics and Information Technology 15 (3):209-227.
Bias Dilemma. Oisín Deery & Katherine Bailey - 2022 - Feminist Philosophy Quarterly 8 (3/4).
Dating through the filters. Karim Nader - 2020 - Social Philosophy and Policy 37 (2):237-248.
Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
(Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
Recommending Ourselves to Death: Values in the Age of Algorithms. Scott Robbins - 2023 - In Sergio Genovesi, Katharina Kaesling & Scott Robbins (eds.), Recommender Systems: Legal and Ethical Issues. Springer Verlag. pp. 147-161.

Analytics

Added to PP
2024-04-17


Author's Profile

Catherine Stinson
Queen's University
