Machine Ethics

Edited by Jeffrey White (Okinawa Institute of Science and Technology, Universidade Nova de Lisboa)
About this topic
Summary In the early 2000s, James Moor set out four classes of ethical machine, advising that the near-term focus of machine ethics research should be on "explicit ethical agents": agents designed from an understanding of human theoretical ethics to operate in accordance with those theoretical principles. Beyond this class, the ultimate aim of inquiry into machine ethics is to understand human morality and natural science well enough to engineer a fully autonomous, moral machine. This sub-category focuses on supporting that inquiry. Work on other sorts of computer applications and their ethical impacts appears in different categories, including Ethics of Artificial Intelligence, Moral Status of Artificial Systems, Robot Ethics, Algorithmic Fairness, Computer Ethics, and others. Machine ethics is ethics, and it is also a study of machines. Machine ethicists ask why people and other organisms do what they do when they do it, and what makes those things the right things to do - in this respect they are ethicists. In addition, machine ethicists work out how to articulate such processes in an independent artificial system (rather than by parenting a biological child or training a human minion, the traditional alternatives). Machine ethics researchers therefore engage directly with rapidly advancing work in cognitive science and psychology, alongside work in robotics and AI, applied ethics (such as medical ethics) and philosophy of mind, computer modeling and data science, and so on. Drawing from so many disciplines, each advancing rapidly and with its own impacts, machine ethics sits in the middle of a maelstrom of current research activity: advances in materials science and physical chemistry leverage advances in cognitive science and neurology, which in turn feed advances in AI and robotics, including, for example, work on interpretability. Putting all of this together is the challenge for the machine ethics researcher.
This sub-category is intended to support efforts to meet this challenge.  
Key works Allen et al. 2005; Wallach et al. 2008; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
Introductions Anderson & Anderson 2007, Segun 2021, Powers 2011, Moor 2006
Contents
517 found, displaying 1 — 50
  1. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models (Ética e Segurança da Inteligência Artificial: ferramentas práticas para se criar "bons" modelos).Nicholas Kluge Corrêa - manuscript
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui with the goal of promoting awareness of the importance of ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is (...)
  2. Can a robot lie?Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
  3. Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems.Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  4. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  5. First human upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI which is able to control and monitor all places in the world. The idea was suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  6. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have several theories. A (...)
  7. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  8. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models.Nicholas Kluge Corrêa -
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  9. Does Predictive Sentencing Make Sense?Clinton Castro, Alan Rubel & Lindsey Schwartz - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that (...)
  10. ChatGPT: towards an AI subjectivity.Kristian D'Amato - forthcoming - AI and Society.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  11. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of Acm Iui:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up such (...)
  12. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  13. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing.Blake Hereth & Nicholas Evans - forthcoming - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  14. Machine morality, moral progress, and the looming environmental disaster.Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  15. Digital Well-Being and Manipulation Online.Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research of its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not been sufficiently addressed yet. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
  16. Moral Disagreement and Artificial Intelligence.Pamela Robinson - forthcoming - AI and Society:1-14.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  17. Digital suffering: why it's a problem and how to prevent it.Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
  18. Handbook of Research on Machine Ethics and Morality.Steven John Thompson (ed.) - forthcoming - Hershey, PA: IGI-Global.
    This book is dedicated to expert research topics, and analyses of ethics-related inquiry, at the machine ethics and morality level: key players, benefits, problems, policies, and strategies. Gathering some of the leading voices that recognize and understand the complexities and intricacies of human-machine ethics provides a resourceful compendium to be accessed by decision-makers and theorists concerned with identification and adoption of human-machine ethics initiatives, leading to needed policy adoption and reform for human-machine entities, their technologies, and their societal and legal (...)
  19. Augustine and an artificial soul.Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Prior work proposes a view of development of purpose and source of meaning in life as a more or less temporally distal project ideal self-situation in terms of which intermediate situations are experienced and prospects evaluated. This work considers Augustine on ensoulment alongside current work into self as adapted routines to common social regularities of the sort that Augustine found deficient. How can we account for such diversity of self-reported value orientation in terms of common structural dynamics differently developed, embodied (...)
  20. Dubito Ergo Sum: Exploring AI Ethics.Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  21. Ethical Preferences in the Digital World: The EXOSOUL Questionnaire.Costanza Alfieri, Donatella Donati, Simone Gozzano, Lorenzo Greco & Marco Segala - 2023 - In Paul Lukowicz, Sven Mayer, Janin Koch, John Shawe-Taylor & Ilaria Tiddi (eds.), Ebook: HHAI 2023: Augmenting Human Intellect. Amsterdam: IOS Press. pp. 290-99.
  22. Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  23. Artificial Dispositions: Investigating Ethical and Metaphysical Issues.William A. Bauer & Anna Marmodoro (eds.) - 2023 - Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. It is divided into (...)
  24. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making?Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
  25. A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot harm.Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  26. The seven troubles with norm-compliant robots.Tom N. Coggins & Steffen Steinert - 2023 - Ethics and Information Technology 25 (2):1-15.
    Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent (...)
  27. Current cases of AI misalignment and their implications for future risks.Leonard Dung - 2023 - Synthese 202 (5):1-23.
    How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models and (...)
  28. Encountering Artificial Intelligence: Ethical and Anthropological Reflections.Matthew J. Gaudet, Paul Scherz, Noreen Herzfeld, Jordan Joseph Wales, Nathan Colaner, Jeremiah Coogan, Mariele Courtois, Brian Cutter, David E. DeCosse, Justin Charles Gable, Brian Green, James Kintz, Cory Andrew Labrecque, Catherine Moon, Anselm Ramelow, John P. Slattery, Ana Margarita Vega, Luis G. Vera, Andrea Vicini & Warren von Eschenbach - 2023 - Eugene, OR: Pickwick Press.
    What does it mean to consider the world of AI through a Christian lens? Rapid developments in AI continue to reshape society, raising new ethical questions and challenging our understanding of the human person. Encountering Artificial Intelligence draws on Pope Francis’ discussion of a culture of encounter and broader themes in Catholic social thought in order to examine how current AI applications affect human relationships in various social spheres and offers concrete recommendations for better implementation. The document also explores questions (...)
  29. Machine Ethics: Do Androids Dream of Being Good People?Gonzalo Génova, Valentín Moreno & M. Rosario González - 2023 - Science and Engineering Ethics 29 (2):1-17.
    Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set (...)
  30. Embodied Experience in Socially Participatory Artificial Intelligence.Mark Graves - 2023 - Zygon (4):928-951.
    As artificial intelligence (AI) becomes progressively more engaged with society, its shift from technical tool to participating in society raises questions about AI personhood. Drawing upon developmental psychology and systems theory, a mediating structure for AI proto-personhood is defined analogous to an early stage of human development. The proposed AI bridges technical, psychological, and theological perspectives on near-future AI and is structured by its hardware, software, computational, and sociotechnical systems through which it experiences its world as embodied (even for putatively (...)
  31. Moral Attribution in Moral Turing Test.Mubarak Hussain - 2023 - International Conference on Computer Ethics: Philosophical Enquiry, May 16-18, 2023, Illinois Institute of Technology, Chicago, USA.
    This paper argues that the Moral Turing Test (MTT) developed by Allen et al. for evaluating morality in AI systems is designed inaptly. Different versions of the MTT focus on the conversational ability of an agent but not on the performance of morally significant actions. Arnold and Scheutz also argue against the MTT and state that without focusing on the performance of morally significant actions, the MTT is insufficient. Morality is mainly about morally relevant actions because it does not matter how good a (...)
  32. The Moral Status of AGI-enabled Robots: A Functionality-Based Analysis.Mubarak Hussain - 2023 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 10 (1):105-127.
    For a long time, researchers of Artificial Intelligence (AI) and futurists have hypothesized that the developed Artificial General Intelligence (AGI) systems can execute intellectual and behavioral tasks similar to human beings. However, there are two possible concerns regarding the emergence of AGI systems and their moral status, namely: 1) is it possible to grant moral status to the AGI-enabled robots similar to humans? 2) if it is (im)possible, then under what conditions do such robots (fail to) achieve moral status similar (...)
  33. Implementing AI Ethics in the Design of AI-assisted Rescue Robots.Désirée Martin, Michael W. Schmidt & Rafaela Hillerbrand - 2023 - IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS).
    For implementing ethics in AI technology, there are at least two major ethical challenges. First, there are various competing AI ethics guidelines and consequently there is a need for a systematic overview of the relevant values that should be considered. Second, if the relevant values have been identified, there is a need for an indicator system that helps assessing if certain design features are positively or negatively affecting their implementation. This indicator system will vary with regard to specific forms of (...)
  34. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian principle. (...)
  35. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
  36. The Possibilities of Machine Morality.Jonathan Pengelly - 2023 - Dissertation, Victoria University of Wellington
    This thesis shows morality to be broader and more diverse than its human instantiation. It uses the idea of machine morality to argue for this position. Specifically, it contrasts the possibilities open to humans with those open to machines to meaningfully engage with the moral domain. -/- This contrast identifies distinctive characteristics of human morality, which are not fundamental to morality itself, but constrain our thinking about morality and its possibilities. It also highlights the inherent potential of machine morality to (...)
  37. Authenticity and co-design: On responsibly creating relational robots for children.Milo Phillips-Brown, Marion Boulicault, Jacqueline Kory-Westland, Stephanie Nguyen & Cynthia Breazeal - 2023 - In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. Cambridge, MA: MIT Press. pp. 85-121.
    Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he (...)
  38. Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI.Rajitha Ramanayake, Philipp Wicke & Vivek Nallur - 2023 - AI and Society 38 (2):801-813.
    We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems, and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans (...)
  39. Moral machines: an impossible challenge? (Macchine morali: una sfida impossibile?).Luca Alberto Rappuoli - 2023 - Scintille 1 (1):71-74.
    This short essay offers an overview of the philosophical difficulties involved in answering the question 'Is it possible to develop an AI system capable of moral action?'.
  40. The Soldier’s Share: Considering Narrow Responsibility for Lethal Autonomous Weapons.Kevin Schieman - 2023 - Journal of Military Ethics (3):228-245.
    Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine blame (...)
  41. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  42. Oggetti buoni: Per una tecnologia sensibile ai valori.Steven Umbrello - 2023 - Rome: Fandango.
    We cannot imagine a world without our technologies and tools. In many ways, our technologies are what define us as human beings, separating us from the rest of the animal kingdom. However, thinking of our technologies as mere tools, instruments that can be used for good or for ill, leaves us vulnerable to the systemic and lasting effects that technologies have on our society, on behavior, and on future generations. Oggetti Buoni explores how technologies embody values (...)
  43. AI, Control and Unintended Consequences: The Need for Meta-Values.Ibo van de Poel - 2023 - In Albrecht Fritzsche & Andrés Santa-María (eds.), Rethinking Technology and Engineering: Dialogues Across Disciplines and Geographies. Springer Verlag. pp. 117-129.
    Due to their self-learning and evolutionary character, AI (Artificial Intelligence) systems are more prone to unintended consequences and more difficult to control than traditional sociotechnical systems. To deal with this, machine ethicists have proposed to build moral (reasoning) capacities into AI systems by designing artificial moral agents. I argue that this may well lead to more, rather than less, unintended consequences and may decrease, rather than increase, human control over such systems. Instead, I suggest, we should bring AI systems under (...)
  44. Bias Optimizers.Damien P. Williams - 2023 - American Scientist 111 (4):204-207.
  45. A Kantian Course Correction for Machine Ethics.Ava Thomas Wright - 2023 - In Jonathan Tsou & Gregory Robson (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York: Routledge. pp. 141-151.
    The central challenge of “machine ethics” is to build autonomous machine agents that act morally rightly. But how can we build autonomous machine agents that act morally rightly, given reasonable disputes over what is right and wrong in particular cases? In this chapter, I argue that Immanuel Kant’s political philosophy can provide an important part of the answer.
  46. Quantum of Wisdom.Colin Allen & Brett Karlan - 2022 - In Greg Viggiano (ed.), Artificial Intelligence and Quantum Computing: Social, Economic, and Policy Impacts. Hoboken, NJ: Wiley-Blackwell. pp. 157-166.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  47. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  48. From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  49. Autonomous Vehicles, Business Ethics, and Risk Distribution in Hybrid Traffic.Brian Berkey - 2022 - In Ryan Jenkins, David Cerny & Tomas Hribek (eds.), Autonomous Vehicle Ethics: The Trolley Problem and Beyond. New York, NY, USA: pp. 210-228.
    In this chapter, I argue that in addition to the generally accepted aim of reducing traffic-related injuries and deaths as much as possible, a principle of fairness in the distribution of risk should inform our thinking about how firms that produce autonomous vehicles ought to program them to respond in conflict situations involving human-driven vehicles. This principle, I claim, rules out programming autonomous vehicles to systematically prioritize the interests of their occupants over those of the occupants of other vehicles, including (...)
  50. Extending the Is-ought Problem to Top-down Artificial Moral Agents.Robert James M. Boyles - 2022 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 9 (2):171–189.
    This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which espouses the thesis that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are not capable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed of two parts, namely: (...)