  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making. Danielle Swanepoel & Daniel Corks - 2024 - Science and Engineering Ethics 30 (2):1-16.
    Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or (...)
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)
  • Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • The Moral Addressor Account of Moral Agency. Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
     
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Cooperation with Animals? What Is and What Is Not. Federico Zuolo - 2020 - Journal of Agricultural and Environmental Ethics 33 (2):315-335.
    The idea of cooperation has been recently used with regard to human–animal relations to justify the application of an associative theory of justice to animals. In this paper, I discuss some of these proposals and seek to provide a reformulation of the idea of cooperation suitable to human–animal relations. The standard idea of cooperation, indeed, presupposes mental capacities that probably cannot be found in animals. I try to disentangle the idea of cooperation from other cognate notions and distinguish it from (...)
  • Editors’ Overview: Moral Responsibility in Technology and Engineering. Ibo van de Poel, Jessica Fahlquist, Neelke Doorn, Sjoerd Zwart & Lambèr Royakkers - 2012 - Science and Engineering Ethics 18 (1):1-11.
    In some situations in which undesirable collective effects occur, it is very hard, if not impossible, to hold any individual reasonably responsible. Such a situation may be referred to as the problem of many hands. In this paper we investigate how the problem of many hands can best be understood and why, and when, it exactly constitutes a problem. After analyzing climate change as an example, we propose to define the problem of many hands as the occurrence of a gap (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13:39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Can we Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9. Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • Lockbox: mobility, privacy and values in cloud storage. [REVIEW] Luke Stark & Matt Tierney - 2014 - Ethics and Information Technology 16 (1):1-13.
    This paper examines one particular problem of values in cloud computing: how individuals can take advantage of the cloud to store data without compromising their privacy and autonomy. Through the creation of Lockbox, an encrypted cloud storage application, we explore how designers can use reflection in designing for human values to maintain both privacy and usability in the cloud.
  • Statistically responsible artificial intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • Can we program or train robots to be good? Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • Confronting the Moral Dimensions of Technology Through Mediation Theory. Evan Selinger - 2014 - Philosophy and Technology 27 (2):287-313.
    Playing Philosophical Pictionary with Verbeek: Martin Heidegger famously claimed that great thinkers spend their lives exploring a single thought: its history, nuances, misappropriations, and implications. While not as narrowly—or, in my opinion, myopically—focused, most contemporary principals in the philosophy of technology pursue recognizable research programs. Since these programs are distinctive, peers and graduate students can associate complex arguments with leading concepts. Such concepts circulate widely enough to become common terms in database searches, and informatics scholars in principle can use them as (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al., Minds and Machines 28:689–707, 2018). There (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  • Responsibility Practices and Unmanned Military Technologies. Merel Noorman - 2014 - Science and Engineering Ethics 20 (3):809-826.
    The prospect of increasingly autonomous military robots has raised concerns about the obfuscation of human responsibility. This paper argues that whether or not and to what extent human actors are and will be considered to be responsible for the behavior of robotic systems is and will be the outcome of ongoing negotiations between the various human actors involved. These negotiations are about what technologies should do and mean, but they are also about how responsibility should be interpreted and how it (...)
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Philosophical Inquiry into Computer Intentionality: Machine Learning and Value Sensitive Design. Dmytro Mykhailov - 2023 - Human Affairs 33 (1):115-127.
    Intelligent algorithms together with various machine learning techniques hold a dominant position among major challenges for contemporary value sensitive design. Self-learning capabilities of current AI applications blur the causal link between programmer and computer behavior. This creates a vital challenge for the design, development and implementation of digital technologies nowadays. This paper seeks to provide an account of this challenge. The main question that shapes the current analysis is the following: What conceptual tools can be developed within the value sensitive (...)
  • A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Dmytro Mykhailov - 2021 - Human Affairs 31 (2):149-164.
    Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System that is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice of medicine (...)
  • A Study of Technological Intentionality in C++ and Generative Adversarial Model: Phenomenological and Postphenomenological Perspectives. Dmytro Mykhailov & Nicola Liberati - 2023 - Foundations of Science 28 (3):841-857.
    This paper aims to highlight the life of computer technologies to understand what kind of ‘technological intentionality’ is present in computers based upon the phenomenological elements constituting the objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by the digital (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  • Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the end. Lastly, (...)
  • Computationally rational agents can be moral agents. Bongani Andy Mabaso - 2020 - Ethics and Information Technology 23 (2):137-145.
    In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument (...)
  • Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • The Panopticon reaches within: how digital technology turns us inside out. [REVIEW] Ann Light - 2010 - Identity in the Information Society 3 (3):583-598.
    The convergence of biomedical and information technology holds the potential to alter the discourses of identity, or as is argued here, to turn us inside out. The advent of digital networks makes it possible to ‘see inside’ people in ways not anticipated and thus create new performance arenas for the expression of identity. Drawing on the ideas of Butler and Foucault and theories of performativity, this paper examines a new context for human-computer interaction and articulates potentially disturbing issues with monitoring (...)
  • Philosophical evaluation of the conceptualisation of trust in the NHS’ Code of Conduct for artificial intelligence-driven technology. Soogeun Samuel Lee - 2022 - Journal of Medical Ethics 48 (4):272-277.
    The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence -driven technologies, comprises 10 principles that outline a gold-standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust that (...)
  • On the moral permissibility of robot apologies. Makoto Kureha - forthcoming - AI and Society:1-11.
    Robots that incorporate the function of apologizing have emerged in recent years. This paper examines the moral permissibility of making robots apologize. First, I characterize the nature of apology based on analyses conducted in multiple scholarly domains. Next, I present a prima facie argument that robot apologies are not permissible because they may harm human societies by inducing the misattribution of responsibility. Subsequently, I respond to a possible response to the prima facie objection based on the interpretation that attributing responsibility (...)
  • Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the concept autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
  • Distributed cognition and distributed morality: Agency, artifacts and systems. Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. (...)
  • The anachronism of moral individualism and the responsibility of extended agency. F. Allan Hanson - 2008 - Phenomenology and the Cognitive Sciences 7 (3):415-424.
    Recent social theory has departed from methodological individualism’s explanation of action according to the motives and dispositions of human individuals in favor of explanation in terms of broader agencies consisting of both human and nonhuman elements described as cyborgs, actor-networks, extended agencies, or distributed cognition. This paper proposes that moral responsibility for action also be vested in extended agencies. It advances a consequentialist view of responsibility that takes moral responsibility to be a species of causal responsibility, and it answers objections (...)
  • Beyond the skin bag: On the moral responsibility of extended agencies. F. Allan Hanson - 2009 - Ethics and Information Technology 11 (1):91-99.
    The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the (...)
  • Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them from autonomy in a responsibility-undermining way. (...)
    28 citations
  • The other question: can and should robots have rights? David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
    55 citations
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
    37 citations
  • Duty Now and for the Future: Communication, Ethics and Artificial Intelligence. David J. Gunkel - 2023 - Journal of Media Ethics 38 (4):198-210.
    This essay examines whether and to what extent the “other” in communicative interactions may be otherwise than another human subject and the moral opportunities and challenges this alteration would make available to us. Toward this end, the analysis proceeds in five steps or movements. The first reviews the way the discipline of communication has typically perceived and theorized the role and function of technology. The second and third parts investigate the critical challenges that emerging technology, such as artificial intelligence applications (...)
  • What do we owe to intelligent robots? John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
    19 citations
  • Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust. Ori Freiman - 2014 - International Review of Information Ethics 22:6-22.
    This paper discusses the epistemology of the Internet of Things [IoT] by focusing on the topic of trust. It presents various frameworks of trust, and argues that the ethical framework of trust is what constitutes our responsibility to reveal desired norms and standards and embed them in other frameworks of trust. The first section briefly presents the IoT and scrutinizes the scarce philosophical work that has been done on this subject so far. The second section suggests that the field of (...)
    4 citations
  • Mathematics, ethics and purism: an application of MacIntyre’s virtue theory. Paul Ernest - 2020 - Synthese 199 (1-2):3137-3167.
    A traditional problem of ethics in mathematics is the denial of social responsibility. Pure mathematics is viewed as neutral and value free, and therefore free of ethical responsibility. Applications of mathematics are seen as employing a neutral set of tools which, of themselves, are free from social responsibility. However, mathematicians are convinced they know what constitutes good mathematics. Furthermore many pure mathematicians are committed to purism, the ideology that values purity above applications in mathematics, and some historical reasons for this (...)
    2 citations