About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as machines engineered to perform tasks traditionally requiring biological intelligence become ubiquitous. Civil infrastructure, including energy grids and mass-transit systems, is increasingly moderated by ever more intelligent machines. The ethical issues include the responsibility and blameworthiness of such systems, with implications both for the engineers who must design them responsibly and for the philosophers who must interpret their impacts, potential and actual, in order to advise those designers. For example, who or what is responsible when an accident results from an AI system error, from a design flaw, or from proper operation outside of anticipated constraints, say in a semi-autonomous automobile or an actuarial algorithm? Such issues fall under the heading of Ethics of AI, as well as under related categories dedicated to autonomous vehicles, algorithmic fairness, or artificial system safety. Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values rather than as directly programmed by their human designers. These concerns call into question the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the Ethics of AI, and works focusing on them can be found here.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021, Müller 2022, Jecker & Nakazawa 2022, Mao & Shi-Kupfer 2023, Dietrich et al. 2021; see also https://plato.stanford.edu/entries/ethics-ai/
Contents
2665 found
Material to categorize
  1. Digitally Scaffolded Vulnerability: Facebook’s Recommender System as an Affective Scaffold and a Tool for Mind Invasion.Giacomo Figà-Talamanca - forthcoming - Topoi.
    I aim to illustrate how the recommender systems of digital platforms create a particularly problematic kind of vulnerability in their users. Specifically, through theories of scaffolded cognition and scaffolded affectivity, I argue that a digital platform’s recommender system is a cognitive and affective artifact that fulfills different functions for the platform’s users and its designers. While it acts as a content provider and facilitator of cognitive, affective and decision-making processes for users, it also provides a continuous and detailed amount of (...)
Algorithmic Fairness
  1. Redefining the psychological contract in the digital era: issues for research and practice.Sarah Bankins & Paul Formosa (eds.) - 2021 - Cham, Switzerland.
  2. Gradual (in)compatibility of fairness criteria.Corinna Hertweck & Tim Räz - 2022 - Proceedings of the AAAI Conference on Artificial Intelligence 36 (11):11926-11934.
  3. Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems.Jake Iain Stone & Brent Mittelstadt - forthcoming - The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency 2024.
    Progress in machine learning and artificial intelligence has spurred the widespread adoption of automated decision systems (ADS). An extensive literature explores what conditions must be met for these systems' decisions to be fair. However, questions of legitimacy -- why those in control of ADS are entitled to make such decisions -- have received comparatively little attention. This paper shows that when such questions are raised theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such as fairness, (...)
  4. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap.Tianqi Kou - manuscript
    Two goals, improving the replicability and the accountability of Machine Learning research, have attracted much attention from the AI ethics and Machine Learning communities. Although both goals share the measure of improving transparency, they are discussed in different registers: replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap, that of holding Machine Learning scientists accountable for Machine Learning harms when they are far from the sites of application, this paper (...)
  5. Algorithms are not neutral: Bias in collaborative filtering.Catherine Stinson - 2022 - AI and Ethics 2 (4):763-770.
    When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers and scholars who study the social impact of technology. There has been a tendency to focus on examples where the data set used to train the AI is biased, and denial on (...)
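A toy illustration of the kind of algorithmic bias at issue in the entry above (my sketch, not Stinson's): a collaborative filter that scores unseen items by raw co-occurrence with a user's history will systematically favor already-popular items, a bias located in the algorithm itself rather than in mislabeled training data. All parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items = 500, 40
popularity = np.linspace(0.05, 0.6, n_items)           # later items are rated far more often
liked = rng.random((n_users, n_items)) < popularity    # boolean user-item "liked" matrix

# Item-item co-occurrence: how often two items are liked by the same user.
cooc = liked.T.astype(int) @ liked.astype(int)
np.fill_diagonal(cooc, 0)

user = liked[0]
scores = cooc[user].sum(axis=0) * ~user                # score only items the user hasn't liked
top5 = np.argsort(scores)[::-1][:5]
print("recommended items:", top5)
print("their popularity:", popularity[top5].round(2))  # skews toward the popular end
```

Even with preferences drawn at random, the recommendations cluster at the popular end of the catalogue, because co-occurrence counts grow with an item's overall rating volume.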
  6. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  7. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  8. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression.Martin Miragoli - forthcoming - Episteme: A Journal of Social Epistemology.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  9. Criteria for Assessing AI-Based Sentencing Algorithms: A Reply to Ryberg.Thomas Douglas - 2024 - Philosophy and Technology 37 (1):1-4.
  10. An Impossibility Theorem for Base Rate Tracking and Equalised Odds.Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
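For orientation, here are the standard formulations of the three criteria named in the abstract above, where S is the algorithm's score, Y the binary outcome, and A the group attribute. These are the textbook versions; the paper's own definitions may differ in detail.

```latex
% Calibration: a score of s means the same thing in every group.
\Pr(Y = 1 \mid S = s, A = a) = s \qquad \text{for all } s, a

% Equalised Odds: conditional on the true outcome, scores are
% distributed identically across groups.
\Pr(S = s \mid Y = y, A = a) = \Pr(S = s \mid Y = y, A = b)
\qquad \text{for all } s, y, a, b

% Base Rate Tracking: differences in mean scores across groups
% match differences in base rates.
\mathbb{E}[S \mid A = a] - \mathbb{E}[S \mid A = b]
= \Pr(Y = 1 \mid A = a) - \Pr(Y = 1 \mid A = b)
```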
  11. Big Data as Tracking Technology and Problems of the Group and its Members.Haleh Asgarinia - 2023 - In Kevin Macnish & Adam Henschke (eds.), The Ethics of Surveillance in Times of Emergency. Oxford University Press. pp. 60-75.
    Digital data help data scientists and epidemiologists track and predict outbreaks of disease. Mobile phone GPS data, social media data, or other forms of information updates such as the progress of epidemics are used by epidemiologists to recognize disease spread among specific groups of people. Targeting groups as potential carriers of a disease, rather than addressing individuals as patients, risks causing harm to groups. While there are rules and obligations at the level of the individual, we have to reach a (...)
  12. Bare statistical evidence and the legitimacy of software-based judicial decisions.Eva Schmidt, Maximilian Köhl & Andreas Sesing-Wagenpfeil - 2023 - Synthese 201 (4):1-27.
    Can the evidence provided by software systems meet the standard of proof for civil or criminal cases, and is it individualized evidence? Or, to the contrary, do software systems exclusively provide bare statistical evidence? In this paper, we argue that there are cases in which evidence in the form of probabilities computed by software systems is not bare statistical evidence, and is thus able to meet the standard of proof. First, based on the case of State v. Loomis, we investigate (...)
  13. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  14. Algorithmic Transparency and Manipulation.Michael Klenk - 2023 - Philosophy and Technology 36 (4):1-20.
    A series of recent papers raises worries about the manipulative potential of algorithmic transparency (to wit, making visible the factors that influence an algorithm’s output). But while the concern is apt and relevant, it is based on a fraught understanding of manipulation. Therefore, this paper draws attention to the ‘indifference view’ of manipulation, which explains better than the ‘vulnerability view’ why algorithmic transparency has manipulative potential. The paper also raises pertinent research questions for future studies of manipulation in the context (...)
  15. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein makes two key contributions that help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic (...)
  16. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  17. Markets, market algorithms, and algorithmic bias.Philippe van Basshuysen - 2022 - Journal of Economic Methodology 30 (4):310-321.
    Where economists previously viewed the market as arising from a ‘spontaneous order’, antithetical to design, they now design markets to achieve specific purposes. This paper reconstructs how this change in what markets are and can do came about and considers some consequences. Two decisive developments in economic theory are identified: first, Hurwicz’s view of institutions as mechanisms, which should be designed to align incentives with social goals; and second, the notion of marketplaces – consisting of infrastructure and algorithms – which (...)
  18. Fair equality of chances for prediction-based decisions.Michele Loi, Anders Herlitz & Hoda Heidari - forthcoming - Economics and Philosophy:1-24.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
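One way to render the principle in the entry above formally (my gloss; the article's own statement is more general): write D for the prediction-based decision, X for the features that justify inequalities in outcomes, and A for the socially salient, morally arbitrary traits. Fair equality of chances then requires, roughly:

```latex
% Individuals alike in the justifying features X must have the same
% statistical prospects of benefit or harm, whatever their traits A.
\Pr(D = d \mid X = x, A = a) = \Pr(D = d \mid X = x, A = b)
\qquad \text{for all decisions } d \text{ and all } x, a, b
```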
  19. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in (...)
  20. ChatGPT’s Responses to Dilemmas in Medical Ethics: The Devil is in the Details.Lukas J. Meier - 2023 - American Journal of Bioethics 23 (10):63-65.
    In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima-facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when (...)
  21. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  22. Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems.Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  23. ACROCPoLis: A Descriptive Framework for Making Sense of Fairness.Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Tim Kampik, Tom Lenaerts, Julian Mendez & Juan Carlos Nieves Sanchez - 2023 - Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency:1014-1025.
    Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and (...)
  24. Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US.Tilman Hartwig, Yuko Ikkatai, Naohiro Takanashi & Hiromi M. Yokoyama - 2023 - AI and Society 38 (4):1609-1626.
    Artificial intelligence (AI) has become indispensable in our lives. The development of a quantitative scale for AI ethics is necessary for a better understanding of public attitudes toward AI research ethics and to advance the discussion on using AI within society. For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items (...)
  25. Meta’s Oversight Board: A Review and Critical Assessment.David Wong & Luciano Floridi - 2023 - Minds and Machines 33 (2):261-284.
    Since the announcement and establishment of the Oversight Board (OB) by the technology company Meta as an independent institution reviewing Facebook and Instagram’s content moderation decisions, the OB has been subjected to scholarly scrutiny ranging from praise to criticism. However, there is currently no overarching framework for understanding the OB’s various strengths and weaknesses. Consequently, this article analyses, organises, and supplements academic literature, news articles, and Meta and OB documents to understand the OB’s strengths and weaknesses and how it can (...)
  26. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
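The following toy simulation (mine, not the authors') illustrates the mechanism the abstract above gestures at: enriching the information available to a predictor can raise accuracy while also shifting fairness metrics such as the false-positive-rate gap between groups. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                         # two groups, A = 0 or 1
x1 = rng.normal(loc=0.5 * group, scale=1.0, size=n)   # feature correlated with group
x2 = rng.normal(size=n)                               # group-independent feature
p = 1 / (1 + np.exp(-(0.8 * x1 + 1.2 * x2 - 0.3)))
y = rng.random(n) < p                                 # outcome depends on both features

def fpr_gap(pred):
    """Absolute gap in false-positive rates between the two groups."""
    fprs = [pred[(group == a) & ~y].mean() for a in (0, 1)]
    return abs(fprs[0] - fprs[1])

# Train a sparse-information and a rich-information model on the same data.
for X, label in [(x1[:, None], "x1 only"), (np.c_[x1, x2], "x1 and x2")]:
    model = LogisticRegression().fit(X, y)
    print(f"{label}: accuracy={model.score(X, y):.3f}, "
          f"FPR gap={fpr_gap(model.predict(X)):.3f}")
```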
  27. Dirty data labeled dirt cheap: epistemic injustice in machine learning systems.Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
  28. Bias Optimizers.Damien P. Williams - 2023 - American Scientist 111 (4):204-207.
  29. Algorithmic legitimacy in clinical decision-making.Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic (...)
  30. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use.Joachim Baumann & Michele Loi - 2023 - Philosophy and Technology 36 (3):1-31.
    Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to (...)
  31. Predictive policing and algorithmic fairness.Tzu-Wei Hung & Chun-Ping Yen - 2023 - Synthese 201 (6):1-29.
    This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain their causes with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need to be (...)
  32. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  33. Using (Un)Fair Algorithms in an Unjust World.Kasper Lippert-Rasmussen - 2022 - Res Publica 29 (2):283-302.
    Algorithm-assisted decision procedures—including some of the most high-profile ones, such as COMPAS—have been described as unfair because they compound injustice. The complaint is that in such procedures a decision disadvantaging members of a certain group is based on information reflecting the fact that the members of the group have already been unjustly disadvantaged. I assess this reasoning. First, I distinguish the anti-compounding duty from a related but distinct duty—the proportionality duty—from which at least some of the intuitive appeal of the (...)
  34. The Fairness in Algorithmic Fairness.Sune Holm - 2023 - Res Publica 29 (2):265-281.
    With the increasing use of algorithms in high-stakes areas such as criminal justice and health has come a significant concern about the fairness of prediction-based decision procedures. In this article I argue that a prominent class of mathematically incompatible performance parity criteria can all be understood as applications of John Broome’s account of fairness as the proportional satisfaction of claims. On this interpretation these criteria do not disagree on what it means for an algorithm to be _fair_. Rather they express (...)
  35. Correction: The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):339-340.
  36. Künstliche Intelligenz: Fluch oder Segen?Jens Kipper - 2020 - Metzler.
    Artificial intelligence (AI) is already a fixed part of our lives, even if it often operates behind the scenes. Where will this development lead, and what will it mean for us? Jens Kipper explains how modern AI works, what it can already do today, and what consequences its use in weapons systems, in medicine and science, in working life, and elsewhere will have. Kipper argues that the development of AI will lead to major social upheavals. He also explains what determines whether (...)
  37. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  38. Algorithmic fairness through group parities? The case of COMPAS-SAPMOC.Francesca Lagioia, Riccardo Rovatti & Giovanni Sartor - 2023 - AI and Society 38 (2):459-478.
    Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to (...)
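For concreteness, a minimal sketch (mine, not from the paper) of the three group-parity criteria the abstract above names, computed from binary predictions; a classifier satisfies a criterion when the corresponding per-group statistics are (approximately) equal across groups.

```python
import numpy as np

def group_parity_report(y_true, y_pred, group):
    """Per-group statistics behind demographic parity, equality of
    opportunity, and treatment equality."""
    report = {}
    for a in np.unique(group):
        m = group == a
        tp = np.sum((y_pred == 1) & (y_true == 1) & m)
        fp = np.sum((y_pred == 1) & (y_true == 0) & m)
        fn = np.sum((y_pred == 0) & (y_true == 1) & m)
        report[a] = {
            "selection_rate": float(np.mean(y_pred[m])),  # demographic parity
            "tpr": tp / max(tp + fn, 1),                  # equality of opportunity
            "fn_fp_ratio": fn / max(fp, 1),               # treatment equality
        }
    return report
```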
  39. From AI for people to AI for the world and the universe.Seth D. Baum & Andrea Owe - 2023 - AI and Society 38 (2):679-680.
    Recent work in AI ethics often calls for AI to advance human values and interests. The concept of “AI for people” is one notable example. Though commendable in some respects, this work falls short by excluding the moral significance of nonhumans. This paper calls for a shift in AI ethics to more inclusive paradigms such as “AI for the world” and “AI for the universe”. The paper outlines the case for more inclusive paradigms and presents implications for moral philosophy and (...)
  40. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  41. Reconciling Algorithmic Fairness Criteria.Fabian Beigang - 2023 - Philosophy and Public Affairs 51 (2):166-190.
    Philosophy & Public Affairs, Volume 51, Issue 2, Page 166-190, Spring 2023.
  42. Investigating gender and racial biases in DALL-E Mini Images.Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - forthcoming - ACM Journal on Responsible Computing.
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  43. Ethics and Artificial Intelligence in Public Health Social Work.David Gray Grant - 2018 - In Milind Tambe & Eric Rice (eds.), Artificial Intelligence and Social Work. Cambridge University Press.
  44. Equalized Odds is a Requirement of Algorithmic Fairness.David Gray Grant - 2023 - Synthese 201 (3).
    Statistical criteria of fairness are formal measures of how an algorithm performs that aim to help us determine whether an algorithm would be fair to use in decision-making. In this paper, I introduce a new version of the criterion known as “Equalized Odds,” argue that it is a requirement of procedural fairness, and show that it is immune to a number of objections to the standard version.
  45. (Un)Fairness in AI: An Intersectional Feminist Analysis.Youjin Kong - 2022 - Blog of the American Philosophical Association, Women in Philosophy Series.
    Contents: Racial, Gender, and Intersectional Biases in AI; Dominant View of Intersectional Fairness in the AI Literature; Three Fundamental Problems with the Dominant View: (1) Overemphasis on Intersections of Attributes, (2) Dilemma between Infinite Regress and Fairness Gerrymandering, (3) Narrow Understanding of Fairness as Parity; Rethinking AI Fairness: from Weak to Strong Fairness.
  46. Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis.Youjin Kong - 2022 - FAccT: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency:485-494.
    A growing number of studies on fairness in artificial intelligence (AI) use the notion of intersectionality to measure AI fairness. Most of these studies take intersectional fairness to be a matter of statistical parity among intersectional subgroups: an AI algorithm is “intersectionally fair” if the probability of the outcome is roughly the same across all subgroups defined by different combinations of the protected attributes. This paper identifies and examines three fundamental problems with this dominant interpretation of intersectional fairness in AI. (...)
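As a concrete rendering of the dominant view the entry above criticizes (my sketch, not the paper's code): "intersectional fairness" as statistical parity requires roughly equal selection rates across every subgroup formed by combining the protected attributes. The subgroup counts returned below make the small-sample and infinite-regress worries easy to see, since subgroups multiply and shrink as attributes are added.

```python
from itertools import product
import numpy as np

def subgroup_selection_rates(y_pred, attrs):
    """Selection rate and size for every intersectional subgroup defined
    by combinations of the protected attributes in `attrs`."""
    rates = {}
    for combo in product(*[np.unique(a) for a in attrs]):
        mask = np.ones(len(y_pred), dtype=bool)
        for attr, value in zip(attrs, combo):
            mask &= attr == value
        if mask.any():
            rates[combo] = (float(y_pred[mask].mean()), int(mask.sum()))
    return rates  # parity: the rates should be (roughly) equal across keys
```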
  47. Correction to: Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness.Ben Green - 2023 - Philosophy and Technology 36 (1):1-1.
  48. Having Your Day in Robot Court.Benjamin Chen, Alexander Stremitzer & Kevin Tobia - 2023 - Harvard Journal of Law and Technology 36.
    Should machines be judges? Some say no, arguing that citizens would see robot-led legal proceedings as procedurally unfair because “having your day in court” is having another human adjudicate your claims. Prior research established that people obey the law in part because they see it as procedurally just. The introduction of artificially intelligent (AI) judges could therefore undermine sentiments of justice and legal compliance if citizens intuitively take machine-adjudicated proceedings to be less fair than the human-adjudicated status quo. Two original (...)
  49. Measurement invariance, selection invariance, and fair selection revisited.Remco Heesen & Jan-Willem Romeijn - 2023 - Psychological Methods 28 (3):687-690.
    This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.