References
  • A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
  • On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Robots of Just War: A Legal Perspective.Ugo Pagallo - 2011 - Philosophy and Technology 24 (3):307-323.
    In order to present a hopefully comprehensive framework of what is the stake of the growing use of robot soldiers, the paper focuses on: the different impact of robots on legal systems, e.g., contractual obligations and tort liability; how robots affect crucial notions as causality, predictability and human culpability in criminal law and, finally, specific hypotheses of robots employed in “just wars.” By using the traditional distinction between causes that make wars just and conduct admissible on the battlefield, the aim (...)
  • Anthropological Crisis or Crisis in Moral Status: a Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence.Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are nuclear to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle of moral consideration. This raises the (...)
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • A Vindication of the Rights of Machines.David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability.Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • The artificial view: toward a non-anthropocentric account of moral patiency.Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • Do Others Mind? Moral Agents Without Mental States.Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  • Levels of Trust in the Context of Machine Ethics.Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011, pp. 39–51), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Can we Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9.Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  • AI ethics and the banality of evil.Payman Tajalli - 2021 - Ethics and Information Technology 23 (3):447-454.
    In this paper, I draw on Hannah Arendt’s notion of ‘banality of evil’ to argue that as long as AI systems are designed to follow codes of ethics or particular normative ethical theories chosen by us and programmed in them, they are Eichmanns destined to commit evil. Since intelligence alone is not sufficient for ethical decision making, rather than strive to program AI to determine the right ethical decision based on some ethical theory or criteria, AI should be concerned with (...)
  • Robowarfare: Can robots be more ethical than humans on the battlefield? [REVIEW]John P. Sullins - 2010 - Ethics and Information Technology 12 (3):263-275.
    Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions are that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This (...)
  • Moral Judgments in the Age of Artificial Intelligence.Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.Bradley Jay Strawser - 2010 - Journal of Military Ethics 9 (4):342-368.
    A variety of ethical objections have been raised against the military employment of uninhabited aerial vehicles (UAVs, drones). Some of these objections are technological concerns over UAVs abilities’ to function on par with their inhabited counterparts. This paper sets such concerns aside and instead focuses on supposed objections to the use of UAVs in principle. I examine several such objections currently on offer and show them all to be wanting. Indeed, I argue that we have a duty to protect an (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines.Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Humanist and Nonhumanist Aspects of Technologies as Problem Solving Physical Instruments.Sadjad Soltanzadeh - 2015 - Philosophy and Technology 28 (1):139-156.
    A form of metaphysical humanism in the field of philosophy of technology can be defined as the claim that besides technologies’ physical aspects, purely human attributes are sufficient to conceptualize technologies. Metaphysical nonhumanism, on the other hand, would be the claim that the meanings of the operative words in any acceptable conception of technologies refer to the states of affairs or events which are in a way or another shaped by technologies. In this paper, I focus on the conception of (...)
  • Statistically responsible artificial intelligences.Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • A neo-aristotelian perspective on the need for artificial moral agents (AMAs).Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020) set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Are we done with (Wordy) manifestos? Towards an introverted digital humanism.Giacomo Pezzano - 2024 - Journal of Responsible Technology 17 (C):100078.
  • Cracking down on autonomy: three challenges to design in IT Law. [REVIEW]U. Pagallo - 2012 - Ethics and Information Technology 14 (4):319-328.
    The paper examines how technology challenges conventional borders of national legal systems, as shown by cases that scholars address as a part of their everyday work in the fields of information technology (IT)-Law, i.e., computer crimes, data protection, digital copyright, and so forth. Information on the internet has in fact a ubiquitous nature that transcends political borders and questions the notion of the law as made of commands enforced through physical sanctions. Whereas many of today’s impasses on jurisdiction, international conflicts (...)
  • Can a Robot Be a Good Colleague?Sven Nyholm & Jilles Smids - 2020 - Science and Engineering Ethics 26 (4):2169-2188.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)
  • Should criminal law protect love relation with robots?Kamil Mamak - forthcoming - AI and Society:1-10.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
  • Humans, Neanderthals, robots and rights.Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • Computationally rational agents can be moral agents.Bongani Andy Mabaso - 2020 - Ethics and Information Technology 23 (2):137-145.
    In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument (...)
  • Artificial Moral Agents Within an Ethos of AI4SG.Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds.David M. Lyreskog, Hazem Zohny, Julian Savulescu & Ilina Singh - 2023 - Neuroethics 16 (1):1-17.
    A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, (...)
  • Artificial agents among us: Should we recognize them as agents proper?Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Instrumental Robots.Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Do robots dream of escaping? Narrativity and ethics in Alex Garland’s Ex-Machina and Luke Scott’s Morgan.Inbar Kaminsky - 2021 - AI and Society 36 (1):349-359.
    Ex-Machina and Morgan, two recent science-fiction films that deal with the creation of humanoids, also explored the relationship between artificial intelligence, spatiality and the lingering question mark regarding artificial consciousness. In both narratives, the creators of the humanoids have tried to mimic human consciousness as closely as possible, which has resulted in the imprisonment of the humanoids due to proprietary concerns in Ex-Machina and due to the violent behavior of the humanoid in Morgan. This article addresses the dilemma of whether (...)
  • Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Responsible AI Through Conceptual Engineering.Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Preserving a combat commander’s moral agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  • Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Distributed cognition and distributed morality: Agency, artifacts and systems.Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. (...)
  • Moral Responsibility of Robots and Hybrid Agents.Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them from autonomy in a responsibility-undermining way. (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   9 citations  
  • Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   10 citations  
  • Towards a new scale for assessing attitudes towards social robots: The attitudes towards social robots scale.Malene Flensborg Damholdt, Christina Vestergaard, Marco Nørskov, Raul Hakli, Stefan Larsen & Johanna Seibt - 2020 - Interaction Studies 21 (1):24-56.
    Background: The surge in the development of social robots gives rise to an increased need for systematic methods of assessing attitudes towards robots. Aim: This study presents the development of a questionnaire for assessing attitudinal stance towards social robots: the ASOR. Methods: The 37-item ASOR questionnaire was developed by a task-force with members from different disciplines. It was founded on theoretical considerations of how social robots could influence five different aspects of relatedness. Results: Three hundred thirty-nine people responded to the (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  • How to cross boundaries in the information society: vulnerability, responsiveness, and accountability.Massimo Durante - 2013 - ACM SIGCAS Computers and Society 43 (1):9-21.
    The paper examines how the current evolution and growth of ICTs enables a greater number of individuals to communicate and interact with each other on a larger scale: this phenomenon enables people to cross the conventional boundaries set up across modernity. The presence of diverse barriers does not however disappear, and we therefore still experience cultural, political, legal and moral boundaries in the globalised Information Society. The paper suggests that the issue of boundaries is to be understood, primarily, in philosophical (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context.Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   11 citations  
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   4 citations