Results for 'Fair Machine Learning'

990 found
  1. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect (...)
  2. Non-empirical problems in fair machine learning. Teresa Scantamburlo - 2021 - Ethics and Information Technology 23 (4):703-712.
    The problem of fair machine learning has drawn much attention over the last few years and the bulk of offered solutions are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would fail to be addressed if one relies entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent (...)
  3. Fairness in Machine Learning: Against False Positive Rate Equality as a Measure of Fairness. Robert Long - 2021 - Journal of Moral Philosophy 19 (1):49-78.
    As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular “fairness measures” are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both measures. For this reason, a large literature in machine learning speaks of a “fairness tradeoff” between these two measures. This framing assumes that both measures are, in fact, capturing something important. To date, (...)
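    Aside (not from the paper above; a standard derivation in the fairness literature, following Chouldechova 2017, with notation that is ours rather than the paper's): for a group g with base rate p_g, positive predictive value PPV_g, and false negative rate FNR_g, the false positive rate of a binary classifier satisfies

        \mathrm{FPR}_g = \frac{p_g}{1 - p_g} \cdot \frac{1 - \mathrm{PPV}_g}{\mathrm{PPV}_g} \cdot (1 - \mathrm{FNR}_g)

    So if a score is calibrated in the sense that PPV_g is equal across groups while the base rates p_g differ, the groups cannot also share both their false positive and false negative rates. This is the impossibility behind the "fairness tradeoff" between calibration and false positive rate equality that the entry above examines.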
  4. Practicing trustworthy machine learning: consistent, transparent, and fair AI pipelines. Yada Pruksachatkun - 2022 - Boston: O'Reilly. Edited by Matthew McAteer & Subhabrata Majumdar.
    With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable. Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating datasets (...)
  5. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a (...)
  6. Sobre Fairness y Machine Learning: El Algoritmo ¿Puede (y Debe) Ser Justo? Nuria Belloso Martín - 2023 - Anales de la Cátedra Francisco Suárez 57:7-38.
    The increasingly frequent use of Artificial Intelligence in the field of law makes it necessary to ask whether automated decisions can, and should, be just. In Machine Learning, the algorithm has the capacity to keep learning, which endows it with a certain degree of autonomy. Biases, discrimination, and inequalities deriving from automated decisions expose the myth of the just algorithm. The criterion of justice demanded by the analogical conception of law also (...)
  7. Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms. Lily Morse, Mike Horia M. Teodorescu, Yazeed Awwad & Gerald C. Kane - 2021 - Journal of Business Ethics 181 (4):1083-1095.
    Recent advances in machine learning methods have created opportunities to eliminate unfairness from algorithmic decision making. Multiple computational techniques (i.e., algorithmic fairness criteria) have arisen out of this work. Yet, urgent questions remain about the perceived fairness of these criteria and in which situations organizations should use them. In this paper, we seek to gain insight into these questions by exploring fairness perceptions of five algorithmic criteria. We focus on two key dimensions of fairness evaluations: distributive fairness and (...)
  8. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate (...)
  9. On Hedden's proof that machine learning fairness metrics are flawed. Anders Søgaard, Klemens Kappel & Thor Grünbaum - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation...
  10. Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible (...)
  11. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet (...)
  12. Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  13. Machine learning’s limitations in avoiding automation of bias. Daniel Varona, Yadira Lizama-Mue & Juan Luis Suárez - 2021 - AI and Society 36 (1):197-203.
    The use of predictive systems has become wider with the development of related computational methods, and the evolution of the sciences in which these methods are applied (Solon and Selbst; Pedreschi et al.). The referred methods include machine learning techniques, face and/or voice recognition, temperature mapping, and others, within the artificial intelligence domain. These techniques are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion (...)
  14. Melting contestation: insurance fairness and machine learning. Laurence Barry & Arthur Charpentier - 2023 - Ethics and Information Technology 25 (4):1-13.
    With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that it was organized along three types of bias: pure stereotypes, non-causal correlations, or causal effects that a society chooses to protect against, are thus the (...)
  15. Bridging the AI Chasm: Can EBM Address Representation and Fairness in Clinical Machine Learning? Nicole Martinez-Martin & Mildred K. Cho - 2022 - American Journal of Bioethics 22 (5):30-32.
    McCradden et al. propose to close the “AI chasm” between algorithms and clinically meaningful application using the norms of evidence-based medicine and clinical research, with the rat...
  16. SAF: Stakeholders’ Agreement on Fairness in the Practice of Machine Learning Development. Georgina Curto & Flavio Comim - 2023 - Science and Engineering Ethics 29 (4):1-19.
    This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in the fairness decision making within ML design and support ML development teams to identify, mitigate and monitor bias at each step of ML systems development. (...)
  17. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery. Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (Ijcim) 3 (1):1-9.
    Healthcare professionals decide wisely about personalized medicine, treatment plans, and resource allocation by utilizing big data analytics and machine learning. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to prejudice and data privacy must be taken into account. Big data analytics and machine learning have a great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order (...)
  18. Diversity in sociotechnical machine learning systems. Maria De-Arteaga & Sina Fazelpour - 2022 - Big Data and Society 9 (1).
    There has been a surge of recent interest in sociocultural diversity in machine learning research. Currently, however, there is a gap between discussions of measures and benefits of diversity in machine learning, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct (...)
  19. Philosophical Inquiry into Computer Intentionality: Machine Learning and Value Sensitive Design. Dmytro Mykhailov - 2023 - Human Affairs 33 (1):115-127.
    Intelligent algorithms together with various machine learning techniques hold a dominant position among major challenges for contemporary value sensitive design. Self-learning capabilities of current AI applications blur the causal link between programmer and computer behavior. This creates a vital challenge for the design, development and implementation of digital technologies nowadays. This paper seeks to provide an account of this challenge. The main question that shapes the current analysis is the following: What conceptual tools can be developed within (...)
  20. On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of (...)
  21. Dirty data labeled dirt cheap: epistemic injustice in machine learning systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice (...)
  22. Citizens’ data afterlives: Practices of dataset inclusion in machine learning for public welfare. Helene Friis Ratner & Nanna Bonde Thylstrup - forthcoming - AI and Society:1-11.
    Public sector adoption of AI techniques in welfare systems recasts historic national data as resource for machine learning. In this paper, we examine how the use of register data for development of predictive models produces new ‘afterlives’ for citizen data. First, we document a Danish research project’s practical efforts to develop an algorithmic decision-support model for social workers to classify children’s risk of maltreatment. Second, we outline the tensions emerging from project members’ negotiations about which datasets to include. (...)
  23. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness (...)
  24. An Enhanced Machine Learning Framework for Type 2 Diabetes Classification Using Imbalanced Data with Missing Values. Kumarmangal Roy, Muneer Ahmad, Kinza Waqar, Kirthanaah Priyaah, Jamel Nebhen, Sultan S. Alshamrani, Muhammad Ahsan Raza & Ihsan Ali - 2021 - Complexity 2021:1-21.
    Diabetes is one of the most common metabolic diseases that cause high blood sugar. Early diagnosis of such a condition is challenging due to its complex interdependence on various factors. There is a need to develop critical decision support systems to assist medical practitioners in the diagnosis process. This research proposes developing a predictive model that can achieve a high classification accuracy of type 2 diabetes. The study consisted of two fundamental parts. Firstly, the study investigated handling missing data adopting (...)
  25. Research on Chinese Consumers’ Attitudes Analysis of Big-Data Driven Price Discrimination Based on Machine Learning. Jun Wang, Tao Shu, Wenjin Zhao & Jixian Zhou - 2022 - Frontiers in Psychology 12:803212.
    From the end of 2018 in China, the Big-data Driven Price Discrimination (BDPD) of online consumption raised public debate on social media. To study the consumers’ attitude about the BDPD, this study constructed a semantic recognition frame to deconstruct the Affection-Behavior-Cognition (ABC) consumer attitude theory using machine learning models inclusive of the Labeled Latent Dirichlet Allocation (LDA), Long Short-Term Memory (LSTM), and Snow Natural Language Processing (NLP), based on social media comments text dataset. Similar to the questionnaires published (...)
  26. Design of English hierarchical online test system based on machine learning. Chaman Verma, Shaweta Khanna, Sudeep Asthana, Abhinav Asthana, Dan Zhang & Xiahui Wang - 2021 - Journal of Intelligent Systems 30 (1):793-807.
    Large amount of data are exchanged and the internet is turning into twenty-first century Silk Road for data. Machine learning (ML) is the new area for the applications. The artificial intelligence (AI) is the field providing machines with intelligence. In the last decades, more developments have been made in the field of ML and deep learning. The technology and other advanced algorithms are implemented into more computational constrained devices. The online English test system based on ML breaks (...)
  27. Principle-based recommendations for big data and machine learning in food safety: the P-SAFETY model. Salvatore Sapienza & Anton Vedder - 2023 - AI and Society 38 (1):5-20.
    Big data and Machine learning Techniques are reshaping the way in which food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices requires a discussion on the advantages and disadvantages of these advances. In particular, the low level of trust in EU food safety risk assessment framework highlighted in 2019 by an EU-funded survey could be exacerbated by novel methods of analysis. The variety (...)
  28. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world (...)
  29. Detecting racial bias in algorithms and machine learning. Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
    Purpose The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech industries and (...)
  30. Writing assistant scoring system for English second language learners based on machine learning. Jianlan Lyu - 2022 - Journal of Intelligent Systems 31 (1):271-288.
    To reduce the workload of paper evaluation and improve the fairness and accuracy of the evaluation process, a writing assistant scoring system for English as a Foreign Language (EFL) learners is designed based on the principle of machine learning. According to the characteristics of the data processing process and the advantages and disadvantages of the Browser/server (B/s) structure, the equipment structure design of the project online evaluation teaching auxiliary system is further optimized. The panda method is used to (...)
  31. Ethical Redress of Racial Inequities in AI: Lessons from Decoupling Machine Learning from Optimization in Medical Appointment Scheduling. Robert Shanklin, Michele Samorani, Shannon Harris & Michael A. Santoro - 2022 - Philosophy and Technology 35 (4):1-19.
    An Artificial Intelligence algorithm trained on data that reflect racial biases may yield racially biased outputs, even if the algorithm on its own is unbiased. For example, algorithms used to schedule medical appointments in the USA predict that Black patients are at a higher risk of no-show than non-Black patients; though technically accurate given existing data, that prediction results in Black patients being overwhelmingly scheduled in appointment slots that cause longer wait times than non-Black patients. This perpetuates racial inequity, in (...)
  32. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed to the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is not a generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the (...)
  33. Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decisionmaking systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, (...)
  34. Ethical food packaging and designed encounters with distant and exotic others. David Machin & Paul Cobley - 2020 - Semiotica 2020 (232):251-271.
    There has been criticism of how Fair-Trade products represent workers in remote parts of the world where packaging offers an encounter with distant others which romanticizes and homogenizes them as a pre-modern form of ethnicity. Such workers are shown as always engaged in authentic, simple, honest decontextualized manual labor. And they are depicted as highly appreciative of, and empowered by, the act of ethical shopping. This paper shows that a close social semiotic analysis of Fair-Trade packaging reveals a (...)
  35. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
  36. Engaging Tomorrow’s Doctors in Clinical Ethics: Implications for Healthcare Organisations. Laura L. Machin & Robin D. Proctor - 2020 - Health Care Analysis 29 (4):319-342.
    Clinical ethics can be viewed as a practical discipline that provides a structured approach to assist healthcare practitioners in identifying, analysing and resolving ethical issues that arise in practice. Clinical ethics can therefore promote ethically sound clinical and organisational practices and decision-making, thereby contributing to health organisation and system quality improvement. In order to develop students’ decision-making skills, as well as prepare them for practice, we decided to introduce a clinical ethics strand within an undergraduate medical curriculum. We designed a (...)
  37. The misleading nature of flow charts and diagrams in organizational communication: The case of performance management of preschools in Sweden. David Machin & Per Ledin - 2020 - Semiotica 2020 (236-237):405-425.
    It has become common to find diagrams and flow-charts used in our organizations to illustrate the nature of processes, what is involved and how it happens, or to show how parts of the organization interrelate to each other and work together. Such diagrams are used as they are thought to help visualization and simplify things in order to represent the essence of a particular situation, the core features. In this paper, using a social semiotic approach, we show that we need (...)
  38. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in the (big) datasets, and predict outcomes based on those identified patterns and correlations with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
  39. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness (...)
  40. Algorithmic Fairness and the Situated Dynamics of Justice. Sina Fazelpour, Zachary C. Lipton & David Danks - 2022 - Canadian Journal of Philosophy 52 (1):44-60.
    Machine learning algorithms are increasingly used to shape high-stake allocations, sparking research efforts to orient algorithm design towards ideals of justice and fairness. In this research on algorithmic fairness, normative theorizing has primarily focused on identification of “ideally fair” target states. In this paper, we argue that this preoccupation with target states in abstraction from the situated dynamics of deployment is misguided. We propose a framework that takes dynamic trajectories as direct objects of moral appraisal, highlighting three (...)
  41. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use. Joachim Baumann & Michele Loi - 2023 - Philosophy and Technology 36 (3):1-31.
    Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to (...)
  42. Disability, fairness, and algorithmic bias in AI recruitment. Nicholas Tilmes - 2022 - Ethics and Information Technology 24 (2).
    While rapid advances in artificial intelligence hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities and (...)
  43. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is (...)
  44. Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Kristof Meding & Thilo Hagendorff - 2024 - Philosophy and Technology 37 (1):1-22.
    Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested (...)
  45. Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this (...)
  46. Machine Decisions and Human Consequences. Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press.
    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but (...)
  47. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
  48. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  49. Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. Francesca Lagioia, Riccardo Rovatti & Giovanni Sartor - 2023 - AI and Society 38 (2):459-478.
    Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, (...)
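    Aside (illustrative only; not code or data from the paper above): the group-parity criteria this entry names have standard operational readings, e.g. demographic parity as an equal positive-prediction rate, equality of opportunity as an equal true positive rate, and treatment equality (following Berk et al.'s usage) as an equal ratio of false negatives to false positives across groups. A minimal Python sketch of how such per-group quantities are computed from predictions, labels, and a protected attribute; names and toy data are hypothetical:

        # Illustrative sketch: per-group quantities that group-parity criteria compare.
        import numpy as np

        def group_parity_report(y_true, y_pred, group):
            report = {}
            for g in np.unique(group):
                mask = group == g
                t, p = y_true[mask], y_pred[mask]
                tp = np.sum((p == 1) & (t == 1))
                fp = np.sum((p == 1) & (t == 0))
                fn = np.sum((p == 0) & (t == 1))
                report[g] = {
                    "positive_rate": p.mean(),       # demographic parity compares this
                    "tpr": tp / max(tp + fn, 1),     # equality of opportunity compares this
                    "fn_fp_ratio": fn / max(fp, 1),  # treatment equality compares this
                }
            return report

        # Toy usage: two groups, eight cases.
        y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
        y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
        group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
        print(group_parity_report(y_true, y_pred, group))

    A system satisfies a given criterion to the extent that the corresponding quantity is (approximately) equal across groups; which comparison is appropriate depends, as the entry above argues, on the step of the decision-making process being evaluated.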
  50. What's Wrong with Machine Bias. Clinton Castro - 2019 - Ergo: An Open Access Journal of Philosophy 6.
    Data-driven, decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer.
1 — 50 / 990