Citations of:

Killer robots

Journal of Applied Philosophy 24 (1):62–77 (2007)

  • Hiring, Algorithms, and Choice: Why Interviews Still Matter.Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Computers Are Syntax All the Way Down: Reply to Bozşahin.William J. Rapaport - 2019 - Minds and Machines 29 (2):227-237.
    A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
  • Robots of Just War: A Legal Perspective.Ugo Pagallo - 2011 - Philosophy and Technology 24 (3):307-323.
    In order to present a hopefully comprehensive framework of what is at stake in the growing use of robot soldiers, the paper focuses on: the different impact of robots on legal systems, e.g., contractual obligations and tort liability; how robots affect crucial notions such as causality, predictability, and human culpability in criminal law; and, finally, specific hypotheses of robots employed in “just wars.” By using the traditional distinction between causes that make wars just and conduct admissible on the battlefield, the aim (...)
  • Robots as Weapons in Just Wars.Marcus Schulzke - 2011 - Philosophy and Technology 24 (3):293-306.
    This essay analyzes the use of military robots in terms of the jus in bello concepts of discrimination and proportionality. It argues that while robots may make mistakes, they do not suffer from most of the impairments that interfere with human judgment on the battlefield. Although robots are imperfect weapons, they can exercise as much restraint as human soldiers, if not more. Robots can be used in a way that is consistent with just war theory when they are programmed to (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective.Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  • Risk and Responsibility in Context.Adriana Placani & Stearns Broadhead (eds.) - 2023 - New York: Routledge.
    This volume bridges contemporary philosophical conceptions of risk and responsibility and offers an extensive examination of the topic. It shows that risk and responsibility combine in ways that give rise to new philosophical questions and problems. Philosophical interest in the relationship between risk and responsibility continues to rise, due in no small part to environmental crises, emerging technologies, legal developments, and new medical advances. Despite such interest, scholars are just now working out how to conceive of the links between (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach.Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Strictly Human: Limitations of Autonomous Systems.Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  • Philosophy of AI: A structured overview.Vincent C. Müller - 2024 - In Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence. Cambridge University Press. pp. 1-25.
    This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so (...)
  • On the Matter of Robot Minds.Brian P. McLaughlin & David Rose - forthcoming - Oxford Studies in Experimental Philosophy.
    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally (...)
  • Artificial agents and the expanding ethical circle.Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  • Robots, Autonomy, and Responsibility.Raul Hakli & Pekka Mäkelä - 2016 - In Johanna Seibt, Marco Nørskov & Søren Schack Andersen (eds.), What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016. IOS Press. pp. 145-154.
    We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history (...)
  • Agency, qualia and life: connecting mind and body biologically.David Longinotti - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 43-56.
    Many believe that a suitably programmed computer could act for its own goals and experience feelings. I challenge this view and argue that agency, mental causation and qualia are all founded in the unique, homeostatic nature of living matter. The theory was formulated for coherence with the concept of an agent, neuroscientific data and laws of physics. By this method, I infer that a successful action is homeostatic for its agent and can be caused by a feeling - which does (...)
  • A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms.Matthew E. Gladden - 2016 - In Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Defragmenter Media. pp. 31-91.
    The term ‘posthumanism’ has been employed to describe a diverse array of phenomena ranging from academic disciplines and artistic movements to political advocacy campaigns and the development of commercial technologies. Such phenomena differ widely in their subject matter, purpose, and methodology, raising the question of whether it is possible to fashion a coherent definition of posthumanism that encompasses all phenomena thus labelled. In this text, we seek to bring greater clarity to this discussion by formulating a novel conceptual framework for (...)
  • Organizational Posthumanism.Matthew E. Gladden - 2016 - In Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Defragmenter Media. pp. 93-131.
    Building on existing forms of critical, cultural, biopolitical, and sociopolitical posthumanism, in this text a new framework is developed for understanding and guiding the forces of technologization and posthumanization that are reshaping contemporary organizations. This ‘organizational posthumanism’ is an approach to analyzing, creating, and managing organizations that employs a post-dualistic and post-anthropocentric perspective and which recognizes that emerging technologies will increasingly transform the kinds of members, structures, systems, processes, physical and virtual spaces, and external ecosystems that are available for organizations (...)
  • Introduction to the Topical Collection on AI and Responsibility.Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • The Philosophy of Online Manipulation.Michael Klenk & Fleur Jongepier (eds.) - 2022 - Routledge.
    Are we being manipulated online? If so, is being manipulated by online technologies and algorithmic systems notably different from human forms of manipulation? And what is under threat exactly when people are manipulated online? This volume provides philosophical and conceptual depth to debates in digital ethics about online manipulation. The contributions explore the ramifications of our increasingly consequential interactions with online technologies such as online recommender systems, social media, user-friendly design, micro-targeting, default-settings, gamification, and real-time profiling. The authors in this (...)
  • The ethics of information warfare.Luciano Floridi & Mariarosaria Taddeo (eds.) - 2014 - Springer International Publishing.
    This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to (...)
  • Getting Machines to Do Your Dirty Work.Tomi Francis & Todd Karhu - forthcoming - Philosophical Studies:1-15.
    Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is given by the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine to (...)
  • Who Should Die? The Ethics of Killing in War.Ryan Jenkins & Bradley Strawser (eds.) - 2017 - New York: Oxford University Press.
    This volume collects influential and groundbreaking philosophical work on killing in war. A “who's who” of contemporary scholars, this volume serves as a convenient and authoritative collection uniquely suited for university-level teaching and as a reference for ethicists, policymakers, stakeholders, and any student of the morality of war.
  • Disengagement with ethics in robotics as a tacit form of dehumanisation.Karolina Zawieska - 2020 - AI and Society 35 (4):869-883.
    Over the past two decades, ethical challenges related to robotics technologies have gained increasing interest among different research and non-academic communities, in particular through the field of roboethics. While the reasons to address roboethics are clear, why not to engage with ethics needs to be better understood. This paper focuses on a limited or lacking engagement with ethics that takes place within some parts of the robotics community and its implications for the conceptualisation of the human being. The underlying assumption (...)
  • Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem.Maciek Zając - 2020 - Journal of Military Ethics 19 (4):285-291.
    The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
  • On the indignity of killer robots.Garry Young - 2021 - Ethics and Information Technology 23 (3):473-482.
    Recent discussion on the ethics of killer robots has focused on the supposed lack of respect their deployment would show to combatants targeted, thereby causing their undignified deaths. I present two rebuttals of this argument. The weak rebuttal maintains that while deploying killer robots is an affront to the dignity of combatants, their use should nevertheless be thought of as a pro tanto wrong, making deployment permissible if the affront is outweighed by some right-making feature. This rebuttal is, however, vulnerable (...)
  • Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Autonomous weapon systems and responsibility gaps: a taxonomy.Nathan Gabriel Wood - 2023 - Ethics and Information Technology 25 (1):1-14.
    A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon (...)
  • Autonomous Weapon Systems: A Clarification.Nathan Gabriel Wood - 2023 - Journal of Military Ethics 22 (1):18-32.
    Due to advances in military technology, there has been an outpouring of research on what are known as autonomous weapon systems (AWS). However, it is common in this literature for arguments to be made without first making clear exactly what definitions one is employing, with the detrimental effect that authors may speak past one another or even miss the targets of their arguments. In this article I examine the U.S. Department of Defense and International Committee of the Red Cross definitions (...)
  • Who is controlling whom? Reframing “meaningful human control” of AI systems in security.Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human (...)
  • Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • Descartes and the Question of Direct Doxastic Voluntarism.Rico Vitz - 2010 - Journal of Philosophical Research 35:107-21.
    In this paper, I clarify Descartes’s account of belief, in general, and of judgment, in particular. Then, drawing upon this clarification, I explain the type of direct doxastic voluntarism that he endorses. In particular, I attempt to demonstrate two claims. First, I argue that there is strong textual evidence that, on Descartes’s account, people have the ability to suspend, or to withhold, judgment directly by an act of will. Second, I argue that there is weak and inconclusive textual evidence that, on (...)
  • Technology as Driver for Morally Motivated Conceptual Engineering.Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Reasons for Meaningful Human Control.Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term invented in the political and legal debate on autonomous weapon systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • Advocating an ethical memory model for artificial companions from a human-centred perspective.Patricia A. Vargas, Ylva Fernaeus, Mei Yii Lim, Sibylle Enz, Wan Chin Ho, Mattias Jacobsson & Ruth Ayllet - 2011 - AI and Society 26 (4):329-337.
    This paper considers the ethical implications of applying three major ethical theories to the memory structure of an artificial companion that might have different embodiments such as a physical robot or a graphical character on a hand-held device. We start by proposing an ethical memory model and then make use of an action-centric framework to evaluate its ethical implications. The case that we discuss is that of digital artefacts that autonomously record and store user data, where this data are used (...)
  • The cubicle warrior: the marionette of digitalized warfare. [REVIEW]Rinie van Est - 2010 - Ethics and Information Technology 12 (3):289-296.
    In the last decade we have entered the era of remote controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator or so-called ‘cubicle warrior’, who remotely controls the military robots behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the (...)
  • Design for values and conceptual engineering.Herman Veluwenkamp & Jeroen van den Hoven - 2023 - Ethics and Information Technology 25 (1):1-12.
    Politicians and engineers are increasingly realizing that values are important in the development of technological artefacts. What is often overlooked is that different conceptualizations of these abstract values lead to different design-requirements. For example, designing social media platforms for deliberative democracy sets us up for technical work on completely different types of architectures and mechanisms than designing for so-called liquid or direct forms of democracy. Thinking about Democracy is not enough, we need to design for the proper conceptualization of these (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character.Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth-century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • Artificial Consciousness and Artificial Ethics: Between Realism and Social Relationism.Steve Torrance - 2014 - Philosophy and Technology 27 (1):9-29.
    I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence of any such objective properties in (...)
  • The case against robotic warfare: A response to Arkin.Ryan Tonkens - 2012 - Journal of Military Ethics 11 (2):149-168.
    Semi-autonomous robotic weapons are already carving out a role for themselves in modern warfare. Recently, Ronald Arkin has argued that autonomous lethal robotic systems could be more ethical than humans on the battlefield, and that this marks a significant reason in favour of their development and use. Here I offer a critical response to the position advanced by Arkin. Although I am sympathetic to the spirit of the motivation behind Arkin's project and agree that if we decide to develop (...)
  • Should autonomous robots be pacifists?Ryan Tonkens - 2013 - Ethics and Information Technology 15 (2):109-123.
    Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether (...)
  • Out of character: on the creation of virtuous machines. [REVIEW]Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
    The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken (...)
  • A challenge for machine ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • The artificial view: toward a non-anthropocentric account of moral patiency.Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • Artifacts and affordances: from designed properties to possibilities for action.Fabio Tollon - 2022 - AI and Society 37 (1):239-248.
    In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, (...)
  • Socially responsive technologies: toward a co-developmental path.Daniel W. Tigard, Niël H. Conradie & Saskia K. Nagel - 2020 - AI and Society 35 (4):885-893.
    Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers.Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)