Results for 'Fairness in Machine Learning'

976 found
  1. Fairness in Machine Learning: Against False Positive Rate Equality as a Measure of Fairness. Robert Long - 2021 - Journal of Moral Philosophy 19 (1):49-78.
    As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular “fairness measures” are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both measures. For this reason, a large literature in machine learning speaks of a “fairness tradeoff” between these two measures. This framing assumes that both measures are, in fact, capturing something important. (...)
    7 citations
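
    The tradeoff described in entry 1 can be made concrete with a small worked example. The sketch below is illustrative only (synthetic data, invented group labels and threshold, not drawn from the paper): it constructs scores that are calibrated by construction for two groups with different base rates and shows that their false positive rates then come apart, which is the tension the paper interrogates.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, score_mean):
    """Scores calibrated by construction: P(y = 1 | score) = score."""
    scores = np.clip(rng.normal(score_mean, 0.15, n), 0.01, 0.99)
    outcomes = rng.binomial(1, scores)
    return scores, outcomes

def false_positive_rate(scores, outcomes, threshold=0.5):
    """Share of true negatives flagged as positive at the threshold."""
    negatives = outcomes == 0
    return np.mean(scores[negatives] >= threshold)

def calibration_gap(scores, outcomes, threshold=0.5):
    """Difference between mean score and observed positive rate among flagged cases."""
    flagged = scores >= threshold
    return abs(outcomes[flagged].mean() - scores[flagged].mean())

# Two hypothetical groups whose base rates differ (score means 0.30 vs 0.50).
for name, mean in [("group A", 0.30), ("group B", 0.50)]:
    s, y = make_group(20_000, mean)
    print(f"{name}: base rate={y.mean():.2f}  "
          f"FPR={false_positive_rate(s, y):.2f}  "
          f"calibration gap={calibration_gap(s, y):.3f}")

# Both groups remain (approximately) calibrated, yet their false positive
# rates differ because their base rates differ -- the "fairness tradeoff"
# framing that the paper pushes back against.
```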
  2. Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems. Cem Kozcuer, Anne Mollen & Felix Bießmann - 2024 - Minds and Machines 34 (2):1-26.
    Research on fairness in machine learning (ML) has been largely focusing on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance (...)
  3. Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms. Lily Morse, Mike Horia M. Teodorescu, Yazeed Awwad & Gerald C. Kane - 2021 - Journal of Business Ethics 181 (4):1083-1095.
    Recent advances in machine learning methods have created opportunities to eliminate unfairness from algorithmic decision making. Multiple computational techniques (i.e., algorithmic fairness criteria) have arisen out of this work. Yet, urgent questions remain about the perceived fairness of these criteria and in which situations organizations should use them. In this paper, we seek to gain insight into these questions by exploring fairness perceptions of five algorithmic criteria. We focus on two key dimensions of fairness (...)
    4 citations
  4. Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the (...)
    2 citations
  5. Melting contestation: insurance fairness and machine learning. Laurence Barry & Arthur Charpentier - 2023 - Ethics and Information Technology 25 (4):1-13.
    With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that it was organized along three types of bias: pure stereotypes, non-causal correlations, or causal effects that a society chooses to protect against, are thus (...)
  6. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2):205395171774353.
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and (...)
    18 citations
  7. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect (...)
    1 citation
  8. SAF: Stakeholders’ Agreement on Fairness in the Practice of Machine Learning Development. Georgina Curto & Flavio Comim - 2023 - Science and Engineering Ethics 29 (4):1-19.
    This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in the fairness decision making within ML design and support ML development teams to identify, mitigate and monitor bias at each step of ML (...)
  9. Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
    1 citation
  10. On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of (...)
    18 citations
  11. Machine learning based privacy-preserving fair data trading in big data market. Y. Zhao, Y. Yu, Y. Li, G. Han & X. Du - 2019 - Information Sciences 478.
  12. Dirty data labeled dirt cheap: epistemic injustice in machine learning systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice (...)
    2 citations
  13. Citizens’ data afterlives: Practices of dataset inclusion in machine learning for public welfare. Helene Friis Ratner & Nanna Bonde Thylstrup - forthcoming - AI and Society:1-11.
    Public sector adoption of AI techniques in welfare systems recasts historic national data as resource for machine learning. In this paper, we examine how the use of register data for development of predictive models produces new ‘afterlives’ for citizen data. First, we document a Danish research project’s practical efforts to develop an algorithmic decision-support model for social workers to classify children’s risk of maltreatment. Second, we outline the tensions emerging from project members’ negotiations about which datasets to include. (...)
  14. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value (...)
    4 citations
  15. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for (...)
    2 citations
  16. Non-empirical problems in fair machine learning. Teresa Scantamburlo - 2021 - Ethics and Information Technology 23 (4):703-712.
    The problem of fair machine learning has drawn much attention over the last few years and the bulk of offered solutions are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would fail to be addressed if one relies entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent (...)
    3 citations
  17. Machine learning’s limitations in avoiding automation of bias. Daniel Varona, Yadira Lizama-Mue & Juan Luis Suárez - 2021 - AI and Society 36 (1):197-203.
    The use of predictive systems has become wider with the development of related computational methods, and the evolution of the sciences in which these methods are applied (Solon and Selbst; Pedreschi et al.). The referred methods include machine learning techniques, face and/or voice recognition, temperature mapping, and others, within the artificial intelligence domain. These techniques are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion (...)
    1 citation
  18. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  19. Diversity in sociotechnical machine learning systems. Maria De-Arteaga & Sina Fazelpour - 2022 - Big Data and Society 9 (1).
    There has been a surge of recent interest in sociocultural diversity in machine learning research. Currently, however, there is a gap between discussions of measures and benefits of diversity in machine learning, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct (...)
    1 citation
  20. On Hedden's proof that machine learning fairness metrics are flawed. Anders Søgaard, Klemens Kappel & Thor Grünbaum - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation...
  21. Practicing trustworthy machine learning: consistent, transparent, and fair AI pipelines. Yada Pruksachatkun - 2022 - Boston: O'Reilly. Edited by Matthew McAteer & Subhabrata Majumdar.
    With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable. Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating datasets (...)
  22. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning (...)
    3 citations
  23. Bridging the AI Chasm: Can EBM Address Representation and Fairness in Clinical Machine Learning? Nicole Martinez-Martin & Mildred K. Cho - 2022 - American Journal of Bioethics 22 (5):30-32.
    McCradden et al. propose to close the “AI chasm” between algorithms and clinically meaningful application using the norms of evidence-based medicine and clinical research, with the rat...
    2 citations
  24. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery. Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (IJCIM) 3 (1):1-9.
    Healthcare professionals decide wisely about personalized medicine, treatment plans, and resource allocation by utilizing big data analytics and machine learning. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to prejudice and data privacy must be taken into account. Big data analytics and machine learning have a great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order to (...)
  25. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to (...)
  26. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...) in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.
    2 citations
  27. Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Kristof Meding & Thilo Hagendorff - 2024 - Philosophy and Technology 37 (1):1-22.
    Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as (...)
  28. Principle-based recommendations for big data and machine learning in food safety: the P-SAFETY model. Salvatore Sapienza & Anton Vedder - 2023 - AI and Society 38 (1):5-20.
    Big data and Machine learning Techniques are reshaping the way in which food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices requires a discussion on the advantages and disadvantages of these advances. In particular, the low level of trust in EU food safety risk assessment framework highlighted in 2019 by an EU-funded survey could be exacerbated by novel methods of analysis. The variety (...)
  29. Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decisionmaking systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed (...)
    5 citations
  30. Detecting racial bias in algorithms and machine learning. Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
    Purpose The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech industries and (...)
    14 citations
  31. Ethical Redress of Racial Inequities in AI: Lessons from Decoupling Machine Learning from Optimization in Medical Appointment Scheduling. Robert Shanklin, Michele Samorani, Shannon Harris & Michael A. Santoro - 2022 - Philosophy and Technology 35 (4):1-19.
    An Artificial Intelligence algorithm trained on data that reflect racial biases may yield racially biased outputs, even if the algorithm on its own is unbiased. For example, algorithms used to schedule medical appointments in the USA predict that Black patients are at a higher risk of no-show than non-Black patients; though technically accurate given existing data, that prediction results in Black patients being overwhelmingly scheduled in appointment slots that cause longer wait times than non-Black patients. This perpetuates racial inequity, in (...)
  32. Algorithmic bias and the Value Sensitive Design approach. Judith Simon, Pak-Hang Wong & Gernot Rieder - 2020 - Internet Policy Review 9 (4).
    Recently, amid growing awareness that computer algorithms are not neutral tools but can cause harm by reproducing and amplifying bias, attempts to detect and prevent such biases have intensified. An approach that has received considerable attention in this regard is the Value Sensitive Design (VSD) methodology, which aims to contribute to both the critical analysis of (dis)values in existing technologies and the construction of novel technologies that account for specific desired values. This article provides a brief overview of the key (...)
    3 citations
  33. Research on Chinese Consumers’ Attitudes Analysis of Big-Data Driven Price Discrimination Based on Machine Learning. Jun Wang, Tao Shu, Wenjin Zhao & Jixian Zhou - 2022 - Frontiers in Psychology 12:803212.
    From the end of 2018 in China, the Big-data Driven Price Discrimination (BDPD) of online consumption raised public debate on social media. To study the consumers’ attitude about the BDPD, this study constructed a semantic recognition frame to deconstruct the Affection-Behavior-Cognition (ABC) consumer attitude theory using machine learning models inclusive of the Labeled Latent Dirichlet Allocation (LDA), Long Short-Term Memory (LSTM), and Snow Natural Language Processing (NLP), based on social media comments text dataset. Similar to the questionnaires published (...)
    1 citation
  34. Writing assistant scoring system for English second language learners based on machine learning. Jianlan Lyu - 2022 - Journal of Intelligent Systems 31 (1):271-288.
    To reduce the workload of paper evaluation and improve the fairness and accuracy of the evaluation process, a writing assistant scoring system for English as a Foreign Language (EFL) learners is designed based on the principle of machine learning. According to the characteristics of the data processing process and the advantages and disadvantages of the Browser/server (B/s) structure, the equipment structure design of the project online evaluation teaching auxiliary system is further optimized. The panda method is used (...)
  35. Design of English hierarchical online test system based on machine learning. Chaman Verma, Shaweta Khanna, Sudeep Asthana, Abhinav Asthana, Dan Zhang & Xiahui Wang - 2021 - Journal of Intelligent Systems 30 (1):793-807.
    Large amount of data are exchanged and the internet is turning into twenty-first century Silk Road for data. Machine learning (ML) is the new area for the applications. The artificial intelligence (AI) is the field providing machines with intelligence. In the last decades, more developments have been made in the field of ML and deep learning. The technology and other advanced algorithms are implemented into more computational constrained devices. The online English test system based on ML breaks (...)
    1 citation
  36. An Enhanced Machine Learning Framework for Type 2 Diabetes Classification Using Imbalanced Data with Missing Values. Kumarmangal Roy, Muneer Ahmad, Kinza Waqar, Kirthanaah Priyaah, Jamel Nebhen, Sultan S. Alshamrani, Muhammad Ahsan Raza & Ihsan Ali - 2021 - Complexity 2021:1-21.
    Diabetes is one of the most common metabolic diseases that cause high blood sugar. Early diagnosis of such a condition is challenging due to its complex interdependence on various factors. There is a need to develop critical decision support systems to assist medical practitioners in the diagnosis process. This research proposes developing a predictive model that can achieve a high classification accuracy of type 2 diabetes. The study consisted of two fundamental parts. Firstly, the study investigated handling missing data adopting (...)
  37. Ethical implications of fairness interventions: what might be hidden behind engineering choices? Julian Alfredo Mendez, Rüya Gökhan Koçer, Flavia Barsotti & Andrea Aler Tubella - 2022 - Ethics and Information Technology 24 (1).
    The importance of fairness in machine learning models is widely acknowledged, and ongoing academic debate revolves around how to determine the appropriate fairness definition, and how to tackle the trade-off between fairness and model performance. In this paper we argue that besides these concerns, there can be ethical implications behind seemingly purely technical choices in fairness interventions in a typical model development pipeline. As an example we show that the technical choice between in-processing and (...)
    3 citations
  38. What's Wrong with Machine Bias. Clinton Castro - 2019 - Ergo: An Open Access Journal of Philosophy 6.
    Data-driven, decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer.
    11 citations
  39. On the Advantages of Distinguishing Between Predictive and Allocative Fairness in Algorithmic Decision-Making. Fabian Beigang - 2022 - Minds and Machines 32 (4):655-682.
    The problem of algorithmic fairness is typically framed as the problem of finding a unique formal criterion that guarantees that a given algorithmic decision-making procedure is morally permissible. In this paper, I argue that this is conceptually misguided and that we should replace the problem with two sub-problems. If we examine how most state-of-the-art machine learning systems work, we notice that there are two distinct stages in the decision-making process. First, a prediction of a relevant property is (...)
    2 citations
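
    The two-stage picture in entry 39 (a predictive step followed by an allocative step) can be sketched as two separate functions, so that fairness questions can be posed about each stage independently. This is only a toy illustration of the distinction; the weights, threshold, and group labels are invented for the example and are not Beigang's.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class Applicant:
    features: Sequence[float]
    group: str  # protected attribute, e.g. "A" or "B"

def predict_risk(applicant: Applicant) -> float:
    """Stage 1 (predictive): estimate a relevant property with a toy linear score."""
    weights = [0.4, 0.6]  # hypothetical model weights
    raw = sum(w * x for w, x in zip(weights, applicant.features))
    return min(max(raw, 0.0), 1.0)

def allocate(risk: float, threshold: float = 0.5) -> bool:
    """Stage 2 (allocative): turn the prediction into a decision about a good."""
    return risk < threshold  # e.g. grant the loan when predicted risk is low

# Predictive fairness criteria (calibration, equal error rates, ...) evaluate
# predict_risk; allocative fairness criteria evaluate the distribution of
# decisions produced by allocate across groups.
applicants = [Applicant([0.2, 0.3], "A"), Applicant([0.7, 0.8], "B")]
for a in applicants:
    r = predict_risk(a)
    print(a.group, round(r, 2), "approved" if allocate(r) else "denied")
```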
  40. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in the (big) datasets, and predict outcomes based on those identified patterns and correlations with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
    26 citations
  41. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do (...)
    1 citation
  42. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed to the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is not a generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of (...)
    1 citation
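
    A rough sketch of the kind of embedding-association probe entry 42 relies on is given below. It is not the authors' pipeline: the word lists, the choice of pretrained model, and the simple similarity contrast are illustrative assumptions, and it requires the gensim package (the vectors are downloaded on first use).

```python
import numpy as np
import gensim.downloader as api

# Small pretrained GloVe vectors from gensim's model hub (downloads on first use).
model = api.load("glove-wiki-gigaword-50")

poor_terms = ["poor", "poverty", "homeless"]   # illustrative word lists, not the paper's
rich_terms = ["rich", "wealthy", "affluent"]
pleasant = ["good", "honest", "kind"]
unpleasant = ["bad", "dishonest", "lazy"]

def mean_similarity(targets, attributes):
    """Average cosine similarity over every target/attribute word pair."""
    return float(np.mean([model.similarity(t, a) for t in targets for a in attributes]))

def association(targets):
    """Positive: closer to pleasant words; negative: closer to unpleasant words."""
    return mean_similarity(targets, pleasant) - mean_similarity(targets, unpleasant)

print("poor-terms association:", round(association(poor_terms), 3))
print("rich-terms association:", round(association(rich_terms), 3))
# A systematically lower association for poverty-related terms would be one
# crude indicator of the kind of bias the paper analyses with Word2Vec and GloVe.
```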
  43. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world (...)
    10 citations
  44. Fairness & friends in the data science era. Barbara Catania, Giovanna Guerrini & Chiara Accinelli - 2023 - AI and Society 38 (2):721-731.
    The data science era is characterized by data-driven automated decision systems (ADS) enabling, through data analytics and machine learning, automated decisions in many contexts, deeply impacting our lives. As such, their downsides and potential risks are becoming more and more evident: technical solutions, alone, are not sufficient and an interdisciplinary approach is needed. Consequently, ADS should evolve into data-informed ADS, which take humans in the loop in all the data processing steps. Data-informed ADS should deal with data responsibly, (...)
  45. Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this (...)
    48 citations
  46. Algorithmic Fairness and the Situated Dynamics of Justice. Sina Fazelpour, Zachary C. Lipton & David Danks - 2022 - Canadian Journal of Philosophy 52 (1):44-60.
    Machine learning algorithms are increasingly used to shape high-stake allocations, sparking research efforts to orient algorithm design towards ideals of justice and fairness. In this research on algorithmic fairness, normative theorizing has primarily focused on identification of “ideally fair” target states. In this paper, we argue that this preoccupation with target states in abstraction from the situated dynamics of deployment is misguided. We propose a framework that takes dynamic trajectories as direct objects of moral appraisal, highlighting (...)
    3 citations
  47. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because (...)
    3 citations
  48. Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems. Jake Iain Stone & Brent Mittelstadt - forthcoming - The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency 2024.
    Progress in machine learning and artificial intelligence has spurred the widespread adoption of automated decision systems (ADS). An extensive literature explores what conditions must be met for these systems' decisions to be fair. However, questions of legitimacy -- why those in control of ADS are entitled to make such decisions -- have received comparatively little attention. This paper shows that when such questions are raised theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such (...)
  49. Disability, fairness, and algorithmic bias in AI recruitment. Nicholas Tilmes - 2022 - Ethics and Information Technology 24 (2).
    While rapid advances in artificial intelligence hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities (...)
    2 citations
  50. Performance vs. competence in human–machine comparisons. Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences 41.
    Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. (...)
    9 citations
1 — 50 / 976