About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as machines engineered to perform tasks traditionally requiring biological intelligence become ubiquitous. Civil infrastructure, including energy grids and mass-transit systems, is increasingly managed by intelligent machines. Ethical issues include the responsibility and blameworthiness of such systems, with implications for the engineers who must design them responsibly and for the philosophers who must interpret their impacts - both potential and actual - in order to advise ethical designers. For example, who or what is responsible for an accident caused by an AI system error, by a design flaw, or by proper operation outside of anticipated constraints, as with a semi-autonomous automobile or an actuarial algorithm? Such issues fall under the heading of Ethics of AI, as well as under other categories, e.g. those dedicated to autonomous vehicles, algorithmic fairness, or artificial system safety. Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These concerns call into question the long-term safety of intelligent systems, not only for individual human beings, but for the human race and life on Earth as a whole. These issues and many others are central to Ethics of AI, and works focusing on them can be found here.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021, Müller 2022; see also https://plato.stanford.edu/entries/ethics-ai/
Material to categorize
  1. Hijacking Epistemic Agency - How Emerging Technologies Threaten our Wellbeing as Knowers.John Dorsch - 2022 - Proceedings of the 2022 Aaai/Acm Conference on Ai, Ethics, and Society 1.
    The aim of this project is to expose the reasons behind the pandemic of misinformation (henceforth, PofM) by examining the enabling conditions of epistemic agency and the emerging technologies that threaten it. I plan to research the emotional origin of epistemic agency, i.e. the origin of our capacity to acquire justification for belief, as well as the significance this emotional origin has for our lives as epistemic agents in our so-called Misinformation Age. This project has three objectives. First, I (...)
  2. The Shortcut - Why Intelligent Machines Do Not Think Like Us.Nello Cristianini - forthcoming - Boca Raton, Florida: CRC Press.
    Book. From the Publisher. An influential scientist in the field of artificial intelligence (AI) explains its fundamental concepts and how it is changing culture and society. A particular form of AI is now embedded in our tech, our infrastructure, and our lives. How did it get there? Where and why should we be concerned? And what should we do now? The Shortcut: Why Intelligent Machines Do Not Think Like Us provides an accessible yet probing exposure of AI in its (...)
  3. Small Data for sustainability: AI ethics and the environment.Elisa Orrù - 2023 - Open Global Rights.
    Moving away from the currently prevalent Big Data mindset towards a Small Data approach would help improve the sustainability of AI systems and would additionally have positive implications for fairness, (global) justice, privacy, transparency, and accountability.
  4. Beta-testing the ethics plugin.Keith Begley - 2023 - AI and Society:1-3.
    The three main kinds of theory in normative ethics, namely, consequentialism, deontology, and virtue ethics, are often presented as the ‘palette’ from which we may choose, or use as a starting point for an investigation. However, this way of doing ethics and philosophy, by the palette, may be leading some of us astray. It has led some to believe that all that there is to ethics, and to ethics of AI, is given in terms of these already devised petrified categories (...)
  5. Contextual Integrity as a General Conceptual Tool for Evaluating Technological Change.Elizabeth O’Neill - 2022 - Philosophy and Technology 35 (3):1-25.
    The fast pace of technological change necessitates new evaluative and deliberative tools. This article develops a general, functional approach to evaluating technological change, inspired by Nissenbaum’s theory of contextual integrity. Nissenbaum introduced the concept of contextual integrity to help analyze how technological changes can produce privacy problems. Reinterpreted, the concept of contextual integrity can aid our thinking about how technological changes affect the full range of human concerns and values—not only privacy. I propose a generalized concept of contextual integrity that (...)
Algorithmic Fairness
  1. When Does Physician Use of AI Increase Liability?Kevin Tobia, Aileen Nielsen & Alexander Stremitzer - 2021 - Journal of Nuclear Medicine 62.
    An increasing number of automated and artificially intelligent (AI) systems make medical treatment recommendations, including “personalized” recommendations, which can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: When physicians use AI systems, in which circumstances would jurors hold physicians liable? To determine potential jurors’ judgments of liability, we conducted an online (...)
  2. Having Your Day in Robot Court.Benjamin Chen, Alexander Stremitzer & Kevin Tobia - 2023 - Harvard Journal of Law and Technology 36.
    Should machines be judges? Some say no, arguing that citizens would see robot-led legal proceedings as procedurally unfair because “having your day in court” is having another human adjudicate your claims. Prior research established that people obey the law in part because they see it as procedurally just. The introduction of artificially intelligent (AI) judges could therefore undermine sentiments of justice and legal compliance if citizens intuitively take machine-adjudicated proceedings to be less fair than the human-adjudicated status quo. Two original (...)
  3. Measurement invariance, selection invariance, and fair selection revisited.Remco Heesen & Jan-Willem Romeijn - forthcoming - Psychological Methods.
    This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.
  4. Three Lessons For and From Algorithmic Discrimination.Frej Klem Thomsen - forthcoming - Res Publica.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, and that it is (...)
  5. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Bias infects the algorithms that wield increasing control over our lives. Predictive policing systems overestimate crime in communities of color; hiring algorithms dock qualified female candidates; and facial recognition software struggles to recognize dark-skinned faces. Algorithmic bias has received significant attention. Algorithmic neutrality, in contrast, has been largely neglected. Algorithmic neutrality is my topic. I take up three questions. What is algorithmic neutrality? Is algorithmic neutrality possible? When we have an eye to algorithmic neutrality, what can we learn about algorithmic (...)
  6. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy.Atoosa Kasirzadeh - 2022 - Aies '22: Proceedings of the 2022 Aaai/Acm Conference on Ai, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals (...)
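    The "mathematical metrics" this abstract refers to are typically simple rate comparisons across demographic groups. As a purely illustrative sketch (toy data and hypothetical function names, not drawn from any work listed here), two of the most common metrics can be computed like this:

    ```python
    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """Largest difference in positive-prediction rates between groups."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return max(rates) - min(rates)

    def equal_opportunity_gap(y_true, y_pred, group):
        """Largest difference in true-positive rates between groups."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        tprs = [y_pred[(group == g) & (y_true == 1)].mean()
                for g in np.unique(group)]
        return max(tprs) - min(tprs)

    # Toy example: eight decisions across two groups, "a" and "b".
    y_true = [1, 1, 0, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
    group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

    print(demographic_parity_gap(y_pred, group))        # 0.25 vs 0.75 -> 0.5
    print(equal_opportunity_gap(y_true, y_pred, group)) # 0.5 vs 1.0 -> 0.5
    ```

    Known impossibility results show that such metrics generally cannot all be satisfied at once except in degenerate cases, which is part of what motivates critiques, like the one above, that look beyond metric choice to structural conditions.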
  7. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Redefining the psychological contract in the digital era: issues for research and practice. Cham, Switzerland: pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
  8. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled (...)
  9. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)
  10. "fitting the description".Damien P. Williams - 2020 - Journal of Responsible Innovation 1 (7):74-83.
    It is increasingly evident that if researchers and policymakers want to meaningfully develop an understanding of responsible innovation, we must first ask whether some sociotechnical systems should be developed, at all. Here I argue that systems like facial recognition, predictive policing, and biometrics are predicated on myriad human prejudicial biases and assumptions which must be named and interrogated prior to any innovation. Further, the notions of individual responsibility inherent in discussions of technological ethics and fairness overburden marginalized peoples with a (...)
  11. Re-assessing Google as Epistemic Tool in the Age of Personalisation.Tanya de Villiers-Botha - 2022 - The Proceedings of SACAIR2022 Online Conference, the 3rd Southern African Conference for Artificial Intelligence Research.
    Google Search is arguably one of the primary epistemic tools in use today, with the lion’s share of the search-engine market globally. Scholarship on countering the current scourge of misinformation often recommends “digital literacy” where internet users, especially those who get their information from social media, are encouraged to fact-check such information using reputable sources. Given our current internet-based epistemic landscape, and Google’s dominance of the internet, it is very likely that such acts of epistemic hygiene will take (...)
  12. Algorithmic Indirect Discrimination, Fairness, and Harm.Frej Klem Thomsen - manuscript
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next introduces three (...)
  13. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges.Joshua Hatherley & Robert Sparrow - forthcoming - Journal of the American Medical Informatics Association.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. Target audience: The target audiences for this tutorial are (...)
  14. The Oxford Handbook of Digital Ethics.Carissa Véliz (ed.) - 2021 - Oxford University Press.
    The Oxford Handbook of Digital Ethics is a lively and authoritative guide to ethical issues related to digital technologies, with a special emphasis on AI. Philosophers with a wide range of expertise cover thirty-seven topics: from the right to have access to internet, to trolling and online shaming, speech on social media, fake news, sex robots and dating online, persuasive technology, value alignment, algorithmic bias, predictive policing, price discrimination online, medical AI, privacy and surveillance, automating democracy, the future of work, (...)
  15. Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies.Christopher Burr & David Leslie - forthcoming - AI and Ethics.
    This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic auditing and assessment to identify limitations and gaps with these approaches. Second, it provides a brief introduction to the methodology of argument-based assurance and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method (...)
  16. Clinical Ethics – To Compute, or Not to Compute?Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (12):W1-W4.
    Can machine intelligence do clinical ethics? And if so, would applying it to actual medical cases be desirable? In a recent target article (Meier et al. 2022), we described the piloting of our advisory algorithm METHAD. Here, we reply to commentaries published in response to our project. The commentaries fall into two broad categories: concrete criticism that concerns the development of METHAD; and the more general question as to whether one should employ decision-support systems of this kind—the debate we set (...)
  17. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics.Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
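    For readers unfamiliar with fuzzy cognitive maps, the inference step that approaches like the one above rely on can be sketched generically. This is a standard FCM update rule with an illustrative, hand-set weight matrix; the concept names and weights below are hypothetical and are not the published METHAD model (where weights are instead learned by a genetic algorithm):

    ```python
    import numpy as np

    def fcm_infer(weights, state, steps=20, lam=1.0):
        """Iterate the standard FCM update a(t+1) = sigmoid(a(t) + a(t) @ W)
        until the activations stabilize or `steps` iterations elapse."""
        for _ in range(steps):
            nxt = 1.0 / (1.0 + np.exp(-lam * (state + state @ weights)))
            if np.allclose(nxt, state, atol=1e-6):
                break
            state = nxt
        return state

    # Hypothetical concept nodes: four prima-facie principles plus one
    # output node ("continue treatment"). Edge weights are illustrative only.
    concepts = ["beneficence", "non-maleficence", "autonomy", "justice", "continue"]
    W = np.array([
        [0.0, 0.0, 0.0, 0.0,  0.7],   # beneficence supports continuing
        [0.0, 0.0, 0.0, 0.0, -0.6],   # harm concerns count against it
        [0.0, 0.0, 0.0, 0.0,  0.5],   # patient autonomy supports it
        [0.0, 0.0, 0.0, 0.0,  0.2],   # justice considerations, weakly for
        [0.0, 0.0, 0.0, 0.0,  0.0],
    ])

    # Case activations in [0, 1], e.g. derived from a structured questionnaire.
    initial = np.array([0.9, 0.2, 0.8, 0.5, 0.0])
    final = fcm_infer(W, initial)
    print(dict(zip(concepts, final.round(2))))
    ```

    The final activation of the output node is then read as a graded recommendation; training replaces the hand-set entries of `W` with values optimized against expert judgments.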
  18. Autonomous Vehicles, Business Ethics, and Risk Distribution in Hybrid Traffic.Brian Berkey - 2022 - In Ryan Jenkins, David Cerny & Tomas Hribek (eds.), Autonomous Vehicle Ethics: The Trolley Problem and Beyond. New York, NY, USA: pp. 210-228.
    In this chapter, I argue that in addition to the generally accepted aim of reducing traffic-related injuries and deaths as much as possible, a principle of fairness in the distribution of risk should inform our thinking about how firms that produce autonomous vehicles ought to program them to respond in conflict situations involving human-driven vehicles. This principle, I claim, rules out programming autonomous vehicles to systematically prioritize the interests of their occupants over those of the occupants of other vehicles, including (...)
  19. Assembled Bias: Beyond Transparent Algorithmic Bias.Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias _assembled bias._ Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  20. Artificial Intelligence in a Structurally Unjust Society.Ting-An Lin & Po-Hsuan Cameron Chen - 2022 - Feminist Philosophy Quarterly 8 (3/4):Article 3.
    Increasing concerns have been raised regarding artificial intelligence (AI) bias, and in response, efforts have been made to pursue AI fairness. In this paper, we argue that the idea of structural injustice serves as a helpful framework for clarifying the ethical concerns surrounding AI bias—including the nature of its moral problem and the responsibility for addressing it—and reconceptualizing the approach to pursuing AI fairness. Using AI in healthcare as a case study, we argue that AI bias is a form of (...)
  21. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench (...)
  22. Using (Un)Fair Algorithms in an Unjust World.Kasper Lippert-Rasmussen - forthcoming - Res Publica:1-20.
    Algorithm-assisted decision procedures—including some of the most high-profile ones, such as COMPAS—have been described as unfair because they compound injustice. The complaint is that in such procedures a decision disadvantaging members of a certain group is based on information reflecting the fact that the members of the group have already been unjustly disadvantaged. I assess this reasoning. First, I distinguish the anti-compounding duty from a related but distinct duty—the proportionality duty—from which at least some of the intuitive appeal of the (...)
  23. Algorithmic Bias and Risk Assessments: Lessons from Practice.Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of assessments: (...)
  24. Legitimacy and automated decisions: the moral limits of algocracy.Bartek Chomanski - 2022 - Ethics and Information Technology 24 (3):1-9.
    With the advent of automated decision-making, governments have increasingly begun to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed among philosophers, worried about “algocracy”, and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the lack of transparency of algorithms. One of the best-known examples of philosophical analyses of (...)
  25. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  26. Algorithmic Fairness and the Situated Dynamics of Justice.Sina Fazelpour, Zachary C. Lipton & David Danks - 2022 - Canadian Journal of Philosophy 52 (1):44-60.
    Machine learning algorithms are increasingly used to shape high-stake allocations, sparking research efforts to orient algorithm design towards ideals of justice and fairness. In this research on algorithmic fairness, normative theorizing has primarily focused on identification of “ideally fair” target states. In this paper, we argue that this preoccupation with target states in abstraction from the situated dynamics of deployment is misguided. We propose a framework that takes dynamic trajectories as direct objects of moral appraisal, highlighting three respects in which (...)
  27. Social Media and its Negative Impacts on Autonomy.Siavosh Sahebi & Paul Formosa - 2022 - Philosophy and Technology 35 (3):1-24.
    How social media impacts the autonomy of its users is a topic of increasing focus. However, much of the literature that explores these impacts fails to engage in depth with the philosophical literature on autonomy. This has resulted in a failure to consider the full range of impacts that social media might have on autonomy. A deeper consideration of these impacts is thus needed, given the importance of both autonomy as a moral concept and social media as a feature of (...)
  28. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform.Donghee Shin, Joon Soo Lim, Norita Ahmad & Mohammed Ibahrine - forthcoming - AI and Society:1-14.
    A number of artificial intelligence systems have been proposed to assist users in identifying the issues of algorithmic fairness and transparency. These AI systems use diverse bias detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study explains the design of AI systems by probing how users make sense of fairness and transparency as they are hypothetical in nature, with no specific ways for evaluation. Focusing on individual perceptions of fairness and transparency, this study examines (...)
  29. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
  30. Expert responsibility in AI development.Maria Hedlund & Erik Persson - 2022 - AI and Society:1-12.
    The purpose of this paper is to discuss the responsibility of AI experts for guiding the development of AI in a desirable direction. More specifically, the aim is to answer the following research question: To what extent are AI experts responsible in a forward-looking way for effects of AI technology that go beyond the immediate concerns of the programmer or designer? AI experts, in this paper conceptualised as experts regarding the technological aspects of AI, have knowledge and control of AI (...)
  31. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse.Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the capability (...)
  32. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behaviour 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  33. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
  34. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  35. When Gig Workers Become Essential: Leveraging Customer Moral Self-Awareness Beyond COVID-19.Julian Friedland - forthcoming - Business Horizons 66.
    The COVID-19 pandemic has intensified the extent to which economies in the developed and developing world rely on gig workers to perform essential tasks such as health care, personal transport, food and package delivery, and ad hoc tasking services. As a result, workers who provide such services are no longer perceived as mere low-skilled laborers, but as essential workers who fulfill a crucial role in society. The newly elevated moral and economic status of these workers increases consumer demand for corporate (...)
  36. Identity and the Limits of Fair Assessment.Rush T. Stewart - forthcoming - Journal of Theoretical Politics.
    In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate such bias across (...)
  37. Algorithmic fairness through group parities? The case of COMPAS-SAPMOC.Francesca Lagioia, Riccardo Rovatti & Giovanni Sartor - forthcoming - AI and Society.
    Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to (...)
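    The group-parity criteria this abstract names (demographic parity, equality of opportunity) have simple operational definitions, and a minimal sketch makes it easy to see that they can come apart. The data below are hypothetical, chosen only so that the two criteria give different verdicts:

    ```python
    # Hypothetical data: actual outcomes, classifier decisions, group labels.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

    def selection_rate(g):
        """Demographic parity compares P(pred = 1) across groups."""
        preds = [p for p, m in zip(y_pred, group) if m == g]
        return sum(preds) / len(preds)

    def true_positive_rate(g):
        """Equality of opportunity compares P(pred = 1 | true = 1) across groups."""
        pos = [p for p, t, m in zip(y_pred, y_true, group) if m == g and t == 1]
        return sum(pos) / len(pos)

    for g in ("A", "B"):
        print(g, selection_rate(g), true_positive_rate(g))
    ```

    On these numbers both groups are selected at the same rate (demographic parity holds), yet qualified members of group A are recognized less often than those of group B (equality of opportunity fails), illustrating why the choice among group-parity criteria is itself a normative decision.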
  38. Algorithmic Fairness and Base Rate Tracking.Benjamin Eva - 2022 - Philosophy and Public Affairs 50 (2):239-266.
    Philosophy & Public Affairs, Volume 50, Issue 2, Page 239-266, Spring 2022.
  39. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
  40. Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System.Patrick K. Lin - 2021 - New Degree Press.
    “When today’s technology relies on yesterday’s data, it will simply mirror our past mistakes and biases.” AI and other high-tech tools embed and reinforce America’s history of prejudice and exclusion — even when they are used with the best intentions. Patrick K. Lin’s Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System takes a deep and thorough look into the use of technology in the criminal justice system, and investigates the instances of coded bias present (...)
  41. The Limits of Reallocative and Algorithmic Policing.Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic (...)
  42. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can (...)
  43. Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-offs.Michelle Seng Ah Lee & Luciano Floridi - 2021 - In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab. Springer Verlag. pp. 145-171.
    To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; however, in reality, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness—not as a binary, absolute mathematical condition—but rather, as a relational notion in comparison to alternative decision-making processes. Using U.S. mortgage lending as an example use (...)
  44. The Fairness in Algorithmic Fairness.Sune Holm - forthcoming - Res Publica:1-17.
    With the increasing use of algorithms in high-stakes areas such as criminal justice and health has come a significant concern about the fairness of prediction-based decision procedures. In this article I argue that a prominent class of mathematically incompatible performance parity criteria can all be understood as applications of John Broome’s account of fairness as the proportional satisfaction of claims. On this interpretation these criteria do not disagree on what it means for an algorithm to be fair. Rather they express (...)
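    The mathematical incompatibility this abstract refers to (familiar from the COMPAS debate) can be shown with a little arithmetic: once positive predictive value (PPV) and true positive rate (TPR) are fixed, the false positive rate is forced by the group's base rate, so groups with different base rates cannot satisfy all three parities at once. The function and numbers below are an illustrative sketch, not taken from the paper:

    ```python
    # For base rate p, PPV = p*TPR / (p*TPR + (1-p)*FPR); solving for FPR:
    #   FPR = (p / (1 - p)) * TPR * (1 - PPV) / PPV
    def implied_fpr(base_rate, tpr, ppv):
        """False positive rate forced by fixing TPR and PPV at a given base rate."""
        return (base_rate / (1 - base_rate)) * tpr * (1 - ppv) / ppv

    # Same TPR and PPV for both groups, but different base rates:
    fpr_a = implied_fpr(base_rate=0.3, tpr=0.8, ppv=0.7)
    fpr_b = implied_fpr(base_rate=0.5, tpr=0.8, ppv=0.7)
    print(fpr_a, fpr_b)  # unequal: error-rate parity fails
    ```

    Because the two groups' implied false positive rates differ, a classifier that is "fair" by predictive parity is automatically "unfair" by error-rate parity whenever base rates differ, which is why these criteria are often presented as competing rather than jointly satisfiable standards.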
  45. Disability, fairness, and algorithmic bias in AI recruitment.Nicholas Tilmes - 2022 - Ethics and Information Technology 24 (2).
    While rapid advances in artificial intelligence hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities and (...)