About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as machines engineered to perform tasks traditionally requiring biological intelligence become ubiquitous. Consider that civil infrastructure, including energy grids and mass-transit systems, is increasingly moderated by intelligent machines. Ethical issues include the responsibility and blameworthiness of such systems, with implications for the engineers who must responsibly design them and the philosophers who must interpret their impacts, both potential and actual, in order to advise ethical designers. For example, who or what is responsible for an accident caused by an AI system error, by a design flaw, or by proper operation outside of anticipated constraints, say in a semi-autonomous automobile or an actuarial algorithm? Such issues fall under the heading of Ethics of AI, as well as under other categories, e.g. those dedicated to autonomous vehicles, algorithmic fairness, or artificial system safety. Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These concerns call into question the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to Ethics of AI, and works focusing on such ideas can be found here.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021, Müller 2022, Jecker & Nakazawa 2022, Mao & Shi-Kupfer 2023, Dietrich et al. 2021; see also https://plato.stanford.edu/entries/ethics-ai/
Contents
Material to categorize
  1. AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect.Jan-Willem van der Rijt, Dimitri Coelho Mollo & Bram Vaassen - manuscript
    This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots—indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings’ behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue (...)
  2. Construct Validity in Automated Counterterrorism Analysis.Adrian K. Yee - 2025 - Philosophy of Science 92 (1):1-18.
    Governments and social scientists are increasingly developing machine learning methods to automate the process of identifying terrorists in real time and predict future attacks. However, current operationalizations of “terrorist” in artificial intelligence are difficult to justify given three issues that remain neglected: insufficient construct legitimacy, insufficient criterion validity, and insufficient construct validity. I conclude that machine learning methods should at most be used for the identification of singular individuals deemed terrorists and not for identifying possible terrorists from some more general (...)
  3. "Responsibility" Plus "Gap" Equals "Problem".Marc Champagne - 2025 - In Johanna Seibt, Peter Fazekas & Oliver Santiago Quick (eds.), Social Robots with AI: Prospects, Risks, and Responsible Methods. Amsterdam: IOS Press. pp. 244–252.
    Peter Königs recently argued that, while autonomous robots generate responsibility gaps, such gaps need not be considered problematic. I argue that Königs’ compromise dissolves under analysis since, on a proper understanding of what “responsibility” is and what “gap” (metaphorically) means, their joint endorsement must repel an attitude of indifference. So, just as “calamities that happen but don’t bother anyone” makes no sense, the idea of “responsibility gaps that exist but leave citizens and ethicists unmoved” makes no sense.
  4. Social Robots with AI: Prospects, Risks, and Responsible Methods.Johanna Seibt, Peter Fazekas & Oliver Santiago Quick (eds.) - 2025 - Amsterdam: IOS Press.
  5. Explicability as an AI Principle: Technology and Ethics in Cooperation.Moto Kamiura - forthcoming - Proceedings of the 39th Annual Conference of the Japanese Society for Artificial Intelligence, 2025.
    This paper categorizes current approaches to AI ethics into four perspectives and briefly summarizes them: (1) Case studies and technical trend surveys, (2) AI governance, (3) Technologies for AI alignment, (4) Philosophy. In the second half, we focus on the fourth perspective, the philosophical approach, within the context of applied ethics. In particular, the explicability of AI may be an area in which scientists, engineers, and AI developers are expected to engage more actively relative to other ethical issues in AI.
  6. Explainability Is Necessary for AI’s Trustworthiness.Ning Fan - 2025 - Philosophy and Technology 38 (1):1-5.
    In a recent article in this journal, Baron (2025) argues that we can appropriately trust unexplainable artificial intelligence (AI) systems, so explainability is not necessary for AI’s trustworthiness. In this commentary, I argue that Baron is wrong. I first offer a positive argument for the claim that explainability is necessary for trustworthiness. Drawing on this argument, I then show that Baron’s argument for thinking otherwise fails.
  7. Beauty Filters in Self-Perception: The Distorted Mirror Gazing Hypothesis.Gloria Andrada - 2025 - Topoi:1-12.
    Beauty filters are automated photo editing tools that use artificial intelligence and computer vision to detect facial features and modify them, allegedly improving a face’s physical appearance and attractiveness. Widespread use of these filters has raised concern due to their potentially damaging psychological effects. In this paper, I offer an account that examines the effect that interacting with such filters has on self-perception. I argue that when looking at digitally-beautified versions of themselves, individuals are looking at AI-curated distorted mirrors. This (...)
  8. Preservation or Transformation: A Daoist Guide to Griefbots.Pengbo Liu - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    Griefbots are chatbots modeled on the personalities of deceased individuals, designed to assist with the grieving process and, according to some, to continue relationships with loved ones after their physical passing. The essay examines the promises and perils of griefbots from a Daoist perspective. According to the Daoist philosopher Zhuangzi, death is a natural and inevitable phenomenon, a manifestation of the constant changes and transformations in the world. This approach emphasizes adaptability, flexibility, and openness to alternative ways of relating to (...)
  9. Artificially sentient beings: Moral, political, and legal issues.Fırat Akova - 2023 - New Techno-Humanities 3 (1):41-48.
    The emergence of artificially sentient beings raises moral, political, and legal issues that deserve scrutiny. First, it may be difficult to understand the well-being elements of artificially sentient beings and theories of well-being may have to be reconsidered. For instance, as a theory of well-being, hedonism may need to expand the meaning of happiness and suffering or it may run the risk of being irrelevant. Second, we may have to compare the claims of artificially sentient beings with the claims of (...)
  10. A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism.Danielle Allen, Woojin Lim, Sarah Hubbard, Allison Stanger, Shlomit Wagman, Kinney Zalesne & Omoaholo Omoakhalen - 2025 - AI and Ethics 4 (4).
    This paper aims to provide a roadmap for governing AI. In contrast to the reigning paradigms, we argue that AI governance should be not merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. To accomplish this, we build on a new normative framework that will give humanity its best chance to reap the full benefits, while avoiding the dangers, (...)
  11. Distribution, Recognition, and Just Medical AI.Zachary Daus - 2025 - Philosophy and Technology 38 (1):1-17.
    Medical artificial intelligence (AI) systems are value-laden technologies that can simultaneously encourage and discourage conflicting values that may all be relevant for the pursuit of justice. I argue that the predominant theory of healthcare justice, the Rawls-inspired approach of Norman Daniels, neither adequately acknowledges such conflicts nor explains if and how they can be resolved. By juxtaposing Daniels’s theory of healthcare justice with Axel Honneth’s and Nancy Fraser’s respective theories of justice, I draw attention to one such conflict. Medical AI may (...)
  12. Deception and manipulation in generative AI.Christian Tarsney - forthcoming - Philosophical Studies.
    Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance spreading political misinformation on social media. In future, agentic AI systems might also deceive and manipulate humans for their own purposes. In this paper, first, I argue that AI-generated content should be subject to stricter standards against deception and manipulation than we ordinarily apply to humans. Second, I offer new characterizations of (...)
  13. Technology, Liberty, and Guardrails.Kevin Mills - 2024 - AI and Ethics 5 (1):1-8.
    Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is (...)
  14. Health AI Poses Distinct Harms and Potential Benefits for Disabled People.Charles Binkley, Joel Michael Reynolds & Andrew Schuman - 2025 - Nature Medicine 1.
    This piece in Nature Medicine notes the risks that incorporation of AI systems into health care poses to disabled patients and proposes ways to avoid them and instead create benefit.
  15. 50 preguntas sobre tecnologías para un envejecimiento activo y saludable. Edición española.Francisco Florez-Revuelta, Alin Ake-Kob, Pau Climent-Perez, Paulo Coelho, Liane Colonna, Laila Dahabiyeh, Carina Dantas, Esra Dogru-Huzmeli, Hazım Kemal Ekenel, Aleksandar Jevremovic, Nina Hosseini-Kivanani, Aysegul Ilgaz, Mladjan Jovanovic, Andrzej Klimczuk, Maksymilian M. Kuźmicz, Petre Lameski, Ferlanda Luna, Natália Machado, Tamara Mujirishvili, Zada Pajalic, Galidiya Petrova, Nathalie G. S. Puaschitz, Maria Jose Santofimia, Agusti Solanas, Wilhelmina van Staalduinen & Ziya Ata Yazici - 2024 - Alicante: University of Alicante.
    This handbook on technologies for active and healthy ageing, also known as Active Assisted Living (AAL), was created as part of the GoodBrother COST Action, which ran from 2020 to 2024. COST Actions are European research programmes that promote international collaboration, bringing together researchers, practitioners, and institutions to address major societal challenges. GoodBrother has focused on the ethical and privacy (...)
  16. A Capability Approach to AI Ethics.Emanuele Ratti & Mark Graves - 2025 - American Philosophical Quarterly 62 (1):1-16.
    We propose a conceptualization and implementation of AI ethics via the capability approach. We aim to show that conceptualizing AI ethics through the capability approach has two main advantages for AI ethics as a discipline. First, it helps clarify the ethical dimension of AI tools. Second, it provides guidance to implementing ethical considerations within the design of AI tools. We illustrate these advantages in the context of AI tools in medicine, by showing how ethics-based auditing of AI tools in medicine (...)
  17. Do It Yourself Content and the Wisdom of the Crowds.Dallas Amico-Korby, Maralee Harrell & David Danks - 2025 - Erkenntnis:1-29.
    Many social media platforms enable (nearly) anyone to post (nearly) anything. One clear downside of this permissiveness is that many people appear bad at determining who to trust online. Hacks, quacks, climate change deniers, vaccine skeptics, and election deniers have all gained massive followings in these free markets of ideas, and many of their followers seem to genuinely trust them. At the same time, there are many cases in which people seem to reliably determine who to trust online. Consider, for (...)
  18. Can Chatbots Preserve Our Relationships with the Dead?Stephen M. Campbell, Pengbo Liu & Sven Nyholm - forthcoming - Journal of the American Philosophical Association.
    Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation (...)
  19. ¿Cómo integrar la ética aplicada a la inteligencia artificial en el currículo? Análisis y recomendaciones desde el feminismo de la ciencia y de datos.G. Arriagada Bruneau & Javiera Arias - 2024 - Revista de filosofía (Chile) 81:137-160.
    This article examines the incorporation of applied ethics into artificial intelligence (AI) within Chilean university curricula, emphasizing the urgent need to implement an integrated framework of action. Through a documentary analysis, it becomes evident that most higher education programs do not explicitly include AI ethics courses in their curricula, highlighting the need for institutionalizing this integration systematically. In response, we propose an approach grounded in feminist science and data feminism, advocating for the inclusion of diverse perspectives and experiences in the (...)
  20. Moral parallax: challenges between dignity, AI, and virtual violence.Pablo De la Vega - 2024 - Trayectorias Humanas Trascontinentales 18:116-128.
    Virtual reality is not only a feat of technological advancement and AI but also an element that extends the horizons of human existence and complicates how we approach various phenomena of the physical world, for example violence. Violence practiced in virtuality raises a series of challenges, especially when virtual reality is regarded as genuine reality. This text delves into virtual violence, the influence of AI on it, and the problems that its conception implies. To analyze this phenomenon, parallax (...)
  21. A hybrid marketplace of ideas.Tomer Jordi Chaffer, Dontrail Cotlage & Justin Goldston - manuscript
    The convergence of humans and artificial intelligence (AI) systems introduces new dynamics into the cultural and intellectual landscape. Complementing emerging cultural evolution concepts such as machine culture, AI agents represent a significant techno-sociological development, particularly within the anthropological study of Web3 as a community focused on decentralization through blockchain. Despite their growing presence, the cultural significance of AI agents remains largely unexplored in academic literature. Toward this end, we conceived hybrid netnography, a novel interdisciplinary approach that examines the cultural and (...)
  22. The Better Choice? The Status Quo versus Radical Human Enhancement.Madeleine Hayenhjelm - 2024 - The Journal of Ethics 2024:1-19.
    Can it be rational to favour the status quo when the alternatives to the status quo promise considerable increases in overall value? For instance, can it be rational to favour the status quo over radical human enhancement? A reasonable response to these questions would be to say that it can only be rational if the status quo is indeed the better choice on some measure. In this paper, I argue that it can be rational to favour the status quo over (...)
  23. AI responsibility gap: not new, inevitable, unproblematic.Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  24. AI through the looking glass: an empirical study of structural social and ethical challenges in AI.Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
  25. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers.Gabriela Arriagada-Bruneau, Claudia López & Alexandra Davidoff - 2024 - Science and Engineering Ethics 31 (1):1-29.
    We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually or promote the (...)
  26. Book review: Nyholm, Sven (2023): This is technology ethics. An introduction. [REVIEW]Michael W. Schmidt - 2024 - TATuP - Zeitschrift Für Technikfolgenabschätzung in Theorie Und Praxis 33 (3):80–81.
    Have you been surprised by the recent development and diffusion of generative artificial intelligence (AI)? Many institutions of civil society have been caught off guard, which provides them with motivation to think ahead. And as many new plausible pathways of socio-technical development are opening up, a growing interest in technology ethics that addresses our corresponding moral uncertainties is warranted. In Sven Nyholm’s words, “[t]he field of technology ethics is absolutely exploding at the moment” (p. 262), and so the publication of (...)
  27. AI Romance and Misogyny: A Speech Act Analysis.A. G. Holdier & Kelly Weirich - forthcoming - Oxford Intersections: AI in Society (Relationships).
    Through the lens of feminist speech act theory, this paper argues that artificial intelligence romance systems objectify and subordinate nonvirtual women. AI romance systems treat their users as consumers, offering them relational invulnerability and control over their (usually feminized) digital romantic partner. This paper argues that, though the output of AI chatbots may not generally constitute speech, the framework offered by an AI romance system communicates an unjust perspective on intimate relationships. Through normalizing controlling one’s intimate partner, these systems operate (...)
  28. Can AI be a subject like us? A Hegelian speculative-philosophical approach.Ermylos Plevrakis - 2024 - Discover Computing 27 (46).
    Recent breakthroughs in the field of artificial intelligence (AI) have sparked a wide public debate on the potentialities of AI, including the prospect to evolve into a subject comparable to humans. While scientists typically avoid directly addressing this question, philosophers usually tend to largely dismiss such a possibility. This article begins by examining the historical and systematic context favoring this inclination. However, it argues that the speculative philosophy of Georg Wilhelm Friedrich Hegel offers a different perspective. Through an exploration of (...)
  29. Critical Provocations for Synthetic Data.Daniel Susser & Jeremy Seeman - 2024 - Surveillance and Society 22 (4):453-459.
    Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge—provocations highlighting ethical, political, and governance issues the use (...)
  30. Towards a Unified List of Ethical Principles for Emerging Technologies. An Analysis of Four European Reports on Molecular Biotechnology and Artificial Intelligence.Elisa Orrù & Joachim Boldt - 2022 - Sustainable Futures 4:1-14.
    Artificial intelligence (AI) and molecular biotechnologies (MB) are among the most promising, but also ethically hotly debated emerging technologies. In both fields, several ethics reports, which invoke lists of ethics principles, have been put forward. These reports and the principles lists are technology specific. This article aims to contribute to the ongoing debate on ethics of emerging technologies by comparatively analysing four European ethics reports from the two technology fields. Adopting a qualitative and in-depth approach, the article highlights how ethics (...)
  31. Materialien zu “Ethische Fragen der Künstlichen Intelligenz” (Interview, Paper).Vincent C. Müller (ed.) - 2024 - Göttingen: Philovernetzt.
    1. I am, in fact, a person – The moral status of AI 2. Superintelligence – The end or the salvation of humanity? 3. Discrimination – AI as cause or solution? -/- Authors: Dominik Balg, Larissa Bolte, Anne Burkard, Jan Constantin, Leonard Dung, Jürn Gottschalk, Kerstin Gregor-Gehrmann, Isabelle Guntermann and Katharina Schulz.
  32. (1 other version)Biomimicry and AI-Enabled Automation in Agriculture. Conceptual Engineering for Responsible Innovation.Marco Innocenti - 2025 - Journal of Agricultural and Environmental Ethics 38 (2):1-17.
    This paper aims to engineer the concept of biomimetic design for its application in agricultural technology as an innovation strategy to sustain non-human species’ adaptation to today’s rapid environmental changes. By questioning the alleged intrinsic morality of biomimicry, a formulation of it is sought that goes beyond the sharp distinction between nature as inspiration and the human field of application of biomimetic technologies. After reviewing the main literature on Responsible Innovation, we support Vincent Blok’s “eco-centric” perspective on biomimicry, which considers (...)
  33. Más allá del algoritmo: oportunidades, retos y ética de la Inteligencia Artificial.Juan David Gutiérrez & Rubén Francisco Manrique (eds.) - forthcoming - Bogotá: Ediciones Uniandes.
  34. Regulating the Spread of Online Misinformation.Étienne Brown - 2021 - In Michael Hannon & Jeroen de Ridder (eds.), The Routledge Handbook of Political Epistemology. New York: Routledge. pp. 214-225.
    Attempts to influence people’s beliefs through misinformation have a long history. In the age of social media, however, there is a growing fear that the circulation of false or misleading claims will be more impactful than ever now that sophisticated technological means are available to those who desire to spread them. Should democratic societies worry about misinformation? If so, is it possible and desirable for them to control its spread by regulating it? This chapter offers an answer to these questions. (...)
  35. Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement.Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely (...)
  36. As máquinas podem cuidar?E. M. Carvalho - 2024 - O Que Nos Faz Pensar 31 (53):6-24.
    Applications and devices of artificial intelligence are increasingly common in the healthcare field. Robots fulfilling some caregiving functions are not a distant future. In this scenario, we must ask ourselves if it is possible for machines to care to the extent of completely replacing human care and if such replacement, if possible, is desirable. In this paper, I argue that caregiving requires know-how permeated by affectivity that is far from being achieved by currently available machines. I also maintain that the (...)
  37. Virtues for AI.Jakob Ohlhorst - manuscript
    Virtue theory is a natural approach towards the design of artificially intelligent systems, given that the design of artificial intelligence essentially aims at designing agents with excellent dispositions. This has led to a lively research programme to develop artificial virtues. However, this research programme has until now had a narrow focus on moral virtues in an Aristotelian mould. While Aristotelian moral virtue has played a foundational role for the field, it unduly constrains the possibilities of virtue theory for artificial intelligence. (...)
  38. AI and Democratic Equality: How Surveillance Capitalism and Computational Propaganda Threaten Democracy.Ashton Black - 2024 - In Bernhard Steffen (ed.), Bridging the Gap Between AI and Reality. Springer Nature. pp. 333-347.
    In this paper, I argue that surveillance capitalism and computational propaganda can undermine democratic equality. First, I argue that two types of resources are relevant for democratic equality: 1) free time, which entails time that is free from systemic surveillance, and 2) epistemic resources. In order for everyone in a democratic system to be equally capable of full political participation, it’s a minimum requirement that these two resources are distributed fairly. But AI that’s used for surveillance capitalism can undermine the (...)
  39. AI-enhanced nudging: A Risk-factors Analysis.Marianna Bergamaschi Ganapini & Enrico Panai - forthcoming - American Philosophical Quarterly.
    Artificial intelligence technologies are utilized to provide online personalized recommendations, suggestions, or prompts that can influence people's decision-making processes. We call this AI-enhanced nudging (or AI-nudging for short). Contrary to received wisdom, we claim that AI-enhanced nudging is not necessarily morally problematic. To start assessing the risks and moral import of AI-nudging we believe that we should adopt a risk-factor analysis: we show that both the level of risk and possibly the moral value of adopting AI-nudging ultimately depend on (...)
  40. The case for human–AI interaction as system 0 thinking.Marianna Bergamaschi Ganapini - 2024 - Nature Human Behaviour 8.
    The rapid integration of artificial intelligence (AI) tools into our daily lives is reshaping how we think and make decisions. We propose that data-driven AI systems, by transcending individual artefacts and interfacing with a dynamic, multi-artefact ecosystem, constitute a distinct psychological system. We call this 'system 0', and position it alongside Kahneman's system 1 (fast, intuitive thinking) and system 2 (slow, analytical thinking). System 0 represents the outsourcing of certain cognitive tasks to AI, which can process vast amounts of data (...)
  41. Emotional Cues and Misplaced Trust in Artificial Agents.Joseph Masotti - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    This paper argues that the emotional cues exhibited by AI systems designed for social interaction may lead human users to hold misplaced trust in such AI systems, and this poses a substantial problem for human-AI relationships. It begins by discussing the communicative role of certain emotions relevant to perceived trustworthiness. Since displaying such emotions is a reliable indicator of trustworthiness in humans, we use such emotions to assess agents’ trustworthiness according to certain generalizations of folk psychology. Our tendency to engage (...)
  42. Publishing Robots.Nicholas Hadsell, Rich Eva & Kyle Huitt - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    If AI can write an excellent philosophy paper, we argue that philosophy journals should strongly consider publishing that paper. After all, AI stands to make significant contributions to ongoing projects in some subfields, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We also propose the Sponsorship Model of AI journal refereeing to mitigate any costs associated with our view. This model requires (...)
  43. A minimal dose of self-reflective humor in Wild Wise Weird: The Kingfisher story collection.Manh-Tung Ho - manuscript
    In this essay, I review one of my beloved fictional titles, Wild Wise Weird: The Kingfisher story collection. The minimal dose of humor and satire in the storytelling of Wild Wise Weird is sure to bring readers smiles and, better yet, moments of quiet reflection, a much under-appreciated remedy in a world driven almost insane by the abundance of information co-created with AI technologies. I hope to do justice to the book.
  44. Speciesism in Natural Language Processing Research.Masashi Takeshita & Rafal Rzepka - forthcoming - AI and Ethics.
    Natural Language Processing (NLP) research on AI Safety and social bias in AI has focused on safety for humans and social bias against human minorities. However, some AI ethicists have argued that the moral significance of nonhuman animals has been ignored in AI research. Therefore, the purpose of this study is to investigate whether there is speciesism, i.e., discrimination against nonhuman animals, in NLP research. First, we explain why nonhuman animals are relevant in NLP research. Next, we survey the findings (...)
  45. Multimodal Artificial Intelligence in Medicine.Joshua August Skorburg - forthcoming - Kidney360.
    Traditional medical Artificial Intelligence models, approved for clinical use, restrict themselves to single-modal data, e.g. images only, limiting their applicability in the complex, multimodal environment of medical diagnosis and treatment. Multimodal Transformer Models in healthcare can effectively process and interpret diverse data forms such as text, images, and structured data. They have demonstrated impressive performance on standard benchmarks like USMLE question banks and continue to improve with scale. However, the adoption of these advanced AI models is not without challenges. While (...)
  46. Artificial Intelligence, Creativity, and the Precarity of Human Connection.Lindsay Brainard - forthcoming - Oxford Intersections: Ai in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must disclose (...)
  47. Automated Influence and Value Collapse.Dylan J. White - 2024 - American Philosophical Quarterly 61 (4):369-386.
    Automated influence is one of the most pervasive applications of artificial intelligence in our day-to-day lives, yet a thoroughgoing account of its associated individual and societal harms is lacking. By far the most widespread, compelling, and intuitive account of the harms associated with automated influence follows what I call the control argument. This argument suggests that users are persuaded, manipulated, and influenced by automated influence in a way that they have little or no control over. Based on evidence about the (...)
  48. Memory and Mimesis in Our Relationships with Posthumous Avatars.Michael Cholbi - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    Critics have raised many moral and legal concerns about posthumous digital avatars. Here my focus instead falls on whether they are likely to enable the bonds with the dead that users apparently yearn for. I conclude that though posthumous avatars can have short-term therapeutic benefits in replicating “habits of intimacy” with the dead, users’ expectations for sustaining long-term bonds with the deceased via posthumous avatars are unlikely to be fulfilled. Posthumous avatars are unlikely to foster the construction of valued memories (...)
  49. The Rise of Generative AI and Its Potential Risks to Humans.Hoang Tung-Duong, Dang Tuan-Dung & Manh-Tung Ho - 2024 - Tạp Chí Thông Tin Và Truyền Thông 9 (9/2024):66-73.
    The emergence of generative AI tools built on large language models (LLMs) has given people a new instrument, especially in fields such as education and journalism, but these tools also raise many problems. In this article, the authors identify newly emerging shortcomings, as well as existing problems that risk being exacerbated (...)
  50. Artificial agents: responsibility & control gaps.Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)