Results for 'Bias in AI'

991 found
  1. Black Boxes and Bias in AI Challenge Autonomy.Craig M. Klugman - 2021 - American Journal of Bioethics 21 (7):33-35.
    In “Artificial Intelligence, Social Media and Depression: A New Concept of Health-Related Digital Autonomy,” Laacke and colleagues posit a revised model of autonomy when using digital algori...
    8 citations
  2. Disability, fairness, and algorithmic bias in AI recruitment.Nicholas Tilmes - 2022 - Ethics and Information Technology 24 (2).
    While rapid advances in artificial intelligence hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities (...)
    2 citations
  3. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  4. The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
    3 citations
  5. Bias in algorithms of AI systems developed for COVID-19: A scoping review.Janet Delgado, Alicia de Manuel, Iris Parra, Cristian Moyano, Jon Rueda, Ariel Guersenzvaig, Txetxu Ausin, Maite Cruz, David Casacuberta & Angel Puyol - 2022 - Journal of Bioethical Inquiry 19 (3):407-419.
    To analyze which ethically relevant biases have been identified by academic literature in artificial intelligence algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health has been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. Studies mentioning biases on AI algorithms developed for contact (...)
    2 citations
  6. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  7. Algorithmic bias in anthropomorphic artificial intelligence: Critical perspectives through the practice of women media artists and designers.Caterina Antonopoulou - 2023 - Technoetic Arts 21 (2):157-174.
    Current research in artificial intelligence (AI) sheds light on algorithmic bias embedded in AI systems. The underrepresentation of women in the AI design sector of the tech industry, as well as in training datasets, results in technological products that encode gender bias, reinforce stereotypes and reproduce normative notions of gender and femininity. Biased behaviour is notably reflected in anthropomorphic AI systems, such as personal intelligent assistants (PIAs) and chatbots, that are usually feminized through various design parameters, such as (...)
    1 citation
  8. Policy advice and best practices on bias and fairness in AI.Jose M. Alvarez, Alejandra Bringas Colmenarejo, Alaa Elobaid, Simone Fabbrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao & Salvatore Ruggieri - 2024 - Ethics and Information Technology 26 (2):1-26.
    The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for novel researchers and practitioners to have a bird’s-eye view picture of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed for setting principles, procedures, and knowledge bases to guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state-of-the-art of fair-AI methods (...)
  9. Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities.Sinead O’Connor & Helen Liu - forthcoming - AI and Society:1-13.
    Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper chooses to focus specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how its (...)
  10. Toleration and Justice in the Laozi: Engaging with Tao Jiang's Origins of Moral-Political Philosophy in Early China.Ai Yuan - 2023 - Philosophy East and West 73 (2):466-475.
    In lieu of an abstract, here is a brief excerpt of the content: This review article engages with Tao Jiang's ground-breaking monograph on the Origins of Moral-Political Philosophy in Early China, with particular focus on the articulation of toleration and justice in the Laozi (otherwise called the Daodejing). Jiang discusses a naturalistic turn and the re-alignment of values in the Laozi, resulting in a naturalization (...)
  11. Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic.Alicia De Manuel, Janet Delgado, Parra Jonou Iris, Txetxu Ausín, David Casacuberta, Maite Cruz Piqueras, Ariel Guersenzvaig, Cristian Moyano, David Rodríguez-Arias, Jon Rueda & Angel Puyol - 2023 - Big Data and Society 10 (1).
    The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification for some terms related to biases in this particular context. We focus mainly on nonracial biases (...)
  12. Conservative AI and social inequality: conceptualizing alternatives to bias through social theory.Mike Zajko - 2021 - AI and Society 36 (3):1047-1056.
    In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist’s view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood using more specific terms and theories. Concerns over reproducing societal bias should be informed by an (...)
    5 citations
  13. Algorithms are not neutral: Bias in collaborative filtering.Catherine Stinson - 2022 - AI and Ethics 2 (4):763-770.
    When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers, and scholars who study the social impact of technology. There has been a tendency to focus on examples, where the data set used to train the AI is biased, and denial (...)
     
    1 citation
  14. Addressing bias in artificial intelligence for public health surveillance.Lidia Flores, Seungjun Kim & Sean D. Young - 2024 - Journal of Medical Ethics 50 (3):190-194.
    Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described (...)
  15. Navigating AI-Enabled Modalities of Representation and Materialization in Architecture: Visual Tropes, Verbal Biases, and Geo-Specificity.Asma Mehan & Sina Mostafavi - 2023 - Plan Journal 8 (2):1-16.
    This research delves into the potential of implementing artificial intelligence in architecture. It specifically provides a critical assessment of AI-enabled workflows, encompassing creative ideation, representation, materiality, and critical thinking, facilitated by prompt-based generative processes. In this context, the paper provides an examination of the concept of hybrid human–machine intelligence. In an era characterized by pervasive data bias and engineered injustices, the concept of hybrid intelligence emerges as a critical tool, enabling the transcendence of preconceived stereotypes, clichés, and linguistic prejudices. (...)
  16. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political (...)
    6 citations
  17. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
    4 citations
  18. Mitigating Racial Bias in Machine Learning.Kristin M. Kostick-Quenet, I. Glenn Cohen, Sara Gerke, Bernard Lo, James Antaki, Faezah Movahedi, Hasna Njah, Lauren Schoen, Jerry E. Estep & J. S. Blumenthal-Barby - 2022 - Journal of Law, Medicine and Ethics 50 (1):92-100.
    When applied in the health sector, AI-based applications raise not only ethical but also legal and safety concerns: algorithms trained on data from majority populations can generate less accurate or reliable results for minorities and other disadvantaged groups.
    8 citations
  19. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    3 citations
  20. Bias and Epistemic Injustice in Conversational AI.Sebastian Laacke - 2023 - American Journal of Bioethics 23 (5):46-48.
    According to Russell and Norvig’s (2009) classification, Artificial Intelligence (AI) is the field that aims at building systems which either think rationally, act rationally, think like humans, or...
    5 citations
  21. Equal accuracy for Andrew and Abubakar—detecting and mitigating bias in name-ethnicity classification algorithms.Lena Hafner, Theodor Peter Peifer & Franziska Sofia Hafner - forthcoming - AI and Society:1-25.
    Uncovering the world’s ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they are able to infer people’s ethnicities from their names. However, since the latest generation of NECs rely on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AIs. Therefore, this paper offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases with regards (...)
    1 citation
  22. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives.Yves Saint James Aquino, Stacy M. Carter, Nehmat Houssami, Annette Braunack-Mayer, Khin Than Win, Chris Degeling, Lei Wang & Wendy A. Rogers - forthcoming - Journal of Medical Ethics.
    Background There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives Our objectives are to canvas the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. Results Findings (...)
  23. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies.Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI (...)
    4 citations
  24. Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias.Ying-Tung Lin, Tzu-Wei Hung & Linus Ta-Lun Huang - 2020 - Philosophy and Technology 34 (S1):65-90.
    This paper focuses on the potential of “equitech”—AI technology that improves equity. Recently, interventions have been developed to reduce the harm of implicit bias, the automatic form of stereotype or prejudice that contributes to injustice. However, these interventions—some of which are assisted by AI-related technology—have significant limitations, including unintended negative consequences and general inefficacy. To overcome these limitations, we propose a two-dimensional framework to assess current AI-assisted interventions and explore promising new ones. We begin by using the case of (...)
    10 citations
  25. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be (...)
    2 citations
  26. Integrating AI ethics in wildlife conservation AI systems in South Africa: a review, challenges, and future research agenda.Irene Nandutu, Marcellin Atemkeng & Patrice Okouma - 2023 - AI and Society 38 (1):245-257.
    With the increased use of Artificial Intelligence (AI) in wildlife conservation, issues around whether AI-based monitoring tools in wildlife conservation comply with standards regarding AI Ethics are on the rise. This review aims to summarise current debates and identify gaps as well as suggest future research by investigating (1) current AI Ethics and AI Ethics issues in wildlife conservation, and (2) initiatives that stakeholders in AI for wildlife conservation should consider when integrating AI Ethics in wildlife conservation. We find that the existing literature (...)
    2 citations
  27. Elephant motorbikes and too many neckties: epistemic spatialization as a framework for investigating patterns of bias in convolutional neural networks.Raymond Drainville & Farida Vis - forthcoming - AI and Society:1-15.
    This article presents Epistemic Spatialization as a new framework for investigating the interconnected patterns of biases when identifying objects with convolutional neural networks. It draws upon Foucault’s notion of spatialized knowledge to guide its method of enquiry. We argue that decisions involved in the creation of algorithms, alongside the labeling, ordering, presentation, and commercial prioritization of objects, together create a distorted “nomination of the visible”: they harden the visibility of some objects, make other objects excessively visible, and consign yet others (...)
  28. Feminist AI: Can We Expect Our AI Systems to Become Feminist?Galit Wellner & Tiran Rothman - 2020 - Philosophy and Technology 33 (2):191-205.
    The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It becomes evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted—in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender (...)
    10 citations
  29. The practical ethics of bias reduction in machine translation: why domain adaptation is better than data debiasing.Marcus Tomalin, Bill Byrne, Shauna Concannon, Danielle Saunders & Stefanie Ullmann - 2021 - Ethics and Information Technology 23 (3):419-433.
    This article probes the practical ethical implications of AI system design by reconsidering the important topic of bias in the datasets used to train autonomous intelligent systems. The discussion draws on recent work concerning behaviour-guiding technologies, and it adopts a cautious form of technological utopianism by assuming it is potentially beneficial for society at large if AI systems are designed to be comparatively free from the biases that characterise human behaviour. However, the argument presented here critiques the common well-intentioned (...)
    1 citation
  30. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  31. Turning queries into questions: For a plurality of perspectives in the age of AI and other frameworks with limited (mind)sets.Claudia Westermann & Tanu Gupta - 2023 - Technoetic Arts 21 (1):3-13.
    The editorial introduces issue 21.1 of Technoetic Arts via a critical reflection on the artificial intelligence hype (AI hype) that emerged in 2022. Tracing the history of the critique of Large Language Models, the editorial underscores that there are substantial ethical challenges related to bias in the training data, copyright issues, as well as ecological challenges which the technology industry has consistently downplayed over the years. The editorial highlights the distinction between the current AI technology’s reliance on extensive (...)
    1 citation
  32. Ethical and legal challenges of AI in marketing: an exploration of solutions.Dinesh Kumar & Nidhi Suthar - forthcoming - Journal of Information, Communication and Ethics in Society.
    Purpose Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions. Design/methodology/approach The paper synthesises information from academic (...)
  33. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility.Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural (...)
  34. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are (...)
  35. AI-Assisted Decision-making in Healthcare: The Application of an Ethics Framework for Big Data in Health and Research.Tamra Lysaght, Hannah Yeefen Lim, Vicki Xafis & Kee Yuan Ngiam - 2019 - Asian Bioethics Review 11 (3):299-314.
    Artificial intelligence is set to transform healthcare. Key ethical issues to emerge with this transformation encompass the accountability and transparency of the decisions made by AI-based systems, the potential for group harms arising from algorithmic bias and the professional roles and integrity of clinicians. These concerns must be balanced against the imperatives of generating public benefit with more efficient healthcare systems from the vastly higher and accurate computational power of AI. In weighing up these issues, this paper applies the (...)
    12 citations
  36. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behaviour 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  37. Zašto AI-umjetnost nije umjetnost – heideggerijanska kritika [Why AI Art Is Not Art – A Heideggerian Critique].Karl Kraatz & Shi-Ting Xie - 2023 - Synthesis Philosophica 38 (2):235-253.
    AI’s new ability to create artworks is seen as a major challenge to today’s understanding of art. There is a strong tension between people who predict that AI will replace artists and critics who claim that AI art will never be art. Furthermore, recent studies have documented a negative bias towards AI art. This paper provides a philosophical explanation for this negative bias, based on our shared understanding of the ontological differences between objects. We argue that our perception (...)
  38. Equity in AgeTech for Ageing Well in Technology-Driven Places: The Role of Social Determinants in Designing AI-based Assistive Technologies.Giovanni Rubeis, Mei Lan Fang & Andrew Sixsmith - 2022 - Science and Engineering Ethics 28 (6):1-15.
    AgeTech involves the use of emerging technologies to support the health, well-being and independent living of older adults. In this paper we focus on how AgeTech based on artificial intelligence (AI) may better support older adults to remain in their own living environment for longer, provide social connectedness, support wellbeing and mental health, and enable social participation. In order to assess and better understand the positive as well as negative outcomes of AI-based AgeTech, a critical analysis of ethical design, digital (...)
    1 citation
  39. Toward children-centric AI: a case for a growth model in children-AI interactions.Karolina La Fors - forthcoming - AI and Society:1-13.
    This article advocates for a hermeneutic model for children-AI interactions in which the desirable purpose of children’s interaction with artificial intelligence systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing the interpretation of bias (...)
  40. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
    8 citations
  41. A Tale of Two Deficits: Causality and Care in Medical AI.Melvin Chen - 2020 - Philosophy and Technology 33 (2):245-267.
    In this paper, two central questions will be addressed: ought we to implement medical AI technology in the medical domain? If yes, how ought we to implement this technology? I will critically engage with three options that exist with respect to these central questions: the Neo-Luddite option, the Assistive option, and the Substitutive option. I will first address key objections on behalf of the Neo-Luddite option: the Objection from Bias, the Objection from Artificial Autonomy, the Objection from Status Quo, (...)
    1 citation
  42. The promise and perils of AI in medicine.Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    4 citations
  43. Machine learning’s limitations in avoiding automation of bias.Daniel Varona, Yadira Lizama-Mue & Juan Luis Suárez - 2021 - AI and Society 36 (1):197-203.
    The use of predictive systems has become wider with the development of related computational methods, and the evolution of the sciences in which these methods are applied (Solon and Selbst; Pedreschi et al.). The referred methods include machine learning techniques, face and/or voice recognition, temperature mapping, and others, within the artificial intelligence domain. These techniques are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, just (...)
    1 citation
  44. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings.Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed to the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is not a generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual (...)
    1 citation
  45. Automation Bias and Procedural Fairness: A Short Guide for the UK Civil Service.John Zerilli, Iñaki Goñi & Matilde Masetti Placci - forthcoming - Braid Reports.
    The use of advanced AI and data-driven automation in the public sector poses several organisational, practical, and ethical challenges. One that is easy to underestimate is automation bias, which, in turn, has underappreciated legal consequences. Automation bias is an attitude in which the operator of an autonomous system will defer to its outputs to the point where the operator overlooks or ignores evidence that the system is failing. The legal problem arises when statutory office-holders (or their employees) either (...)
     
  46. The Ethics of Medical AI and the Physician-Patient Relationship.Sally Dalton-Brown - 2020 - Cambridge Quarterly of Healthcare Ethics 29 (1):115-121.
    This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient–physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest (...)
    8 citations
  47. The selective deployment of AI in healthcare.Robert Vandersluis & Julian Savulescu - 2024 - Bioethics 38 (5):391-400.
    Machine‐learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well‐represented populations. Faced with this dilemma (...)
  48. Bosses without a heart: socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace.Peter Mantello, Manh-Tung Ho, Minh-Hoang Nguyen & Quan-Hoang Vuong - 2023 - AI and Society 38 (1):97-119.
    Biometric technologies are becoming more pervasive in the workplace, augmenting managerial processes such as hiring, monitoring and terminating employees. Until recently, these devices consisted mainly of GPS tools that track location, software that scrutinizes browser activity and keyboard strokes, and heat/motion sensors that monitor workstation presence. Today, however, a new generation of biometric devices has emerged that can sense, read, monitor and evaluate the affective state of a worker. More popularly known by its commercial moniker, Emotional AI, the technology stems (...)
    5 citations
  49. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions (...)
  50. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics - 2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining (...)
    46 citations
1 — 50 / 991