  • Acting Together to Address Structural Injustice: A Deliberative Mini-Public Proposal. Ting-an Lin - forthcoming - In Kevin Walton, Wojciech Sadurski & Coel Kirkby (eds.), Responding to Injustice. Routledge.
    Structural injustice exists when the influence of social structure exposes some groups of people to undeserved burdens while conferring unearned power to others. It has been argued that the responsibility for addressing structural injustices should be shared among those participating in the social structure and can only be discharged through collective action; however, the proper form of collective action does not happen easily. To address structural injustice effectively, we need to gain clarity on the practical challenges that are involved and (...)
  • On Hedden's proof that machine learning fairness metrics are flawed. Anders Søgaard, Klemens Kappel & Thor Grünbaum - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation...
  • On the Site of Predictive Justice. Seth Lazar & Jake Stone - forthcoming - Noûs.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce (...)
  • Dirty data labeled dirt cheap: epistemic injustice in machine learning systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
  • (Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as (...)