  • Hiring, Algorithms, and Choice: Why Interviews Still Matter. Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  • Risk and Responsibility in Context. Adriana Placani & Stearns Broadhead (eds.) - 2023 - New York: Routledge.
    This volume bridges contemporary philosophical conceptions of risk and responsibility and offers an extensive examination of the topic. It shows that risk and responsibility combine in ways that give rise to new philosophical questions and problems. Philosophical interest in the relationship between risk and responsibility continues to rise, due in no small part to environmental crises, emerging technologies, legal developments, and new medical advances. Despite such interest, scholars are just now working out how to conceive of the links between (...)
  • Strictly Human: Limitations of Autonomous Systems. Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Ethical Issues with Artificial Ethics Assistants. Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Experimental Philosophy of Technology. Steven R. Kraaijeveld - 2021 - Philosophy and Technology 34:993-1012.
    Experimental philosophy is a relatively recent discipline that employs experimental methods to investigate the intuitions, concepts, and assumptions behind traditional philosophical arguments, problems, and theories. While experimental philosophy initially served to interrogate the role that intuitions play in philosophy, it has since branched out to bring empirical methods to bear on problems within a variety of traditional areas of philosophy—including metaphysics, philosophy of language, philosophy of mind, and epistemology. To date, no connection has been made between developments in experimental philosophy (...)
  • Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg.
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • The Sense of Agency in Driving Automation. Wen Wen, Yoshihiro Kuroki & Hajime Asama - 2019 - Frontiers in Psychology 10.
  • Reasons for Meaningful Human Control. Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term invented in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • Why We Should Understand Conversational AI as a Tool. Marlies N. van Lingen, Noor A. A. Giesbertz, J. Peter van Tintelen & Karin R. Jongsma - 2023 - American Journal of Bioethics 23 (5):22-24.
    The introduction of chatGPT illustrates the rapid developments within Conversational Artificial Intelligence (CAI) technologies (Gordijn and Have 2023). Ethical reflection and analysis of CAI are c...
  • Introducing a four-fold way to conceptualize artificial agency. Maud van Lier - 2023 - Synthese 201 (3):1-28.
    Recent developments in AI-research suggest that an AI-driven science might not be that far off. The research of Melnikov et al. (2018) and that of Evans et al. (2018) show that automated systems can already have a distinctive role in the design of experiments and in directing future research. Common practice in many of the papers devoted to the automation of basic research is to refer to these automated systems as ‘agents’. What is this attribution of agency based on (...)
  • The artificial view: toward a non-anthropocentric account of moral patiency. Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Engineering responsibility. Nicholas Sars - 2022 - Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the threat of responsibility gaps which artificial systems create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advance such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • Correctness and Completeness of Programming Instructions for Traffic Circulation. Daniela Glavaničová & Matteo Pascucci - 2021 - Science and Engineering Ethics 27 (6):1-16.
    In the present article we exploit the logical notions of correctness and completeness to provide an analysis of some fundamental problems that can be encountered by a software developer when transforming norms for traffic circulation into programming instructions. Relying on this analysis, we then introduce a question and answer procedure that can be helpful, in case of an accident, to clarify which components of an existing framework should be revised and to what extent software developers can be held responsible.
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications and in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • Why Command Responsibility May (not) Be a Solution to Address Responsibility Gaps in LAWS. Ann-Katrien Oimann - forthcoming - Criminal Law and Philosophy:1-27.
    The possible future use of lethal autonomous weapons systems (LAWS) and the challenges associated with assigning moral responsibility lead to several debates. Some authors argue that the highly autonomous capability of such systems may lead to a so-called responsibility gap in situations where LAWS cause serious violations of international humanitarian law. One proposed solution is the doctrine of command responsibility. Despite the doctrine’s original development to govern human interactions on the battlefield, it is worth considering whether the doctrine of command (...)
  • Tools and/or Agents? Reflections on Sedlakova and Trachsel’s Discussion of Conversational Artificial Intelligence. Sven Nyholm - 2023 - American Journal of Bioethics 23 (5):17-19.
    Sedlakova and Trachsel (2023) consider conversational artificial intelligence (CAI) as a new way of providing psychotherapy to patients. This is an important topic, and Sedlakova and Trachsel have...
  • The ethics of crashes with self‐driving cars: A roadmap, I. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12507.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then (...)
  • The ethics of crashes with self‐driving cars: A roadmap, II. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12506.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations might be created for car users by the safety potential of self‐driving cars. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes with (...)
  • Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. Sven Nyholm & Jilles Smids - 2020 - Ethics and Information Technology 22 (4):335-344.
    In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility-problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, (...)
  • Can a Robot Be a Good Colleague? Sven Nyholm & Jilles Smids - 2020 - Science and Engineering Ethics 26 (4):2169-2188.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)
  • Algorithmic Microaggressions. Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)
  • Autonomous Driving and Perverse Incentives. Wulf Loh & Catrin Misselhorn - 2019 - Philosophy and Technology 32 (4):575-590.
    This paper discusses the ethical implications of perverse incentives with regard to autonomous driving. We define perverse incentives as a feature of an action, technology, or social policy that invites behavior which negates the primary goal of the actors initiating the action, introducing a certain technology, or implementing a social policy. As a special form of means-end-irrationality, perverse incentives are to be avoided from a prudential standpoint, as they prove to be directly self-defeating: They are not just a form of (...)
  • Debunking (the) Retribution (Gap). Steven R. Kraaijeveld - 2020 - Science and Engineering Ethics 26 (3):1315-1328.
    Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”. Esther Keymolen & Fleur Jongepier - 2022 - Ethics and Information Technology 24 (4):1-11.
    A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable and whether there ought to be a right to explanation and why. We therefore explore the normative landscape of the need for AI to be explainable and individuals having a right (...)
  • “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society:1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
  • A Taxonomy of Ethical, Legal and Social Implications of Wearable Robots: An Expert Perspective. Alexandra Kapeller, Heike Felzmann, Eduard Fosch-Villaronga & Ann-Marie Hughes - 2020 - Science and Engineering Ethics 26 (6):3229-3247.
    Wearable robots and exoskeletons are relatively new technologies designed for assisting and augmenting human motor functions. Due to their different possible design applications and their intimate connection to the human body, they come with specific ethical, legal, and social issues, which have not been much explored in the recent ELS literature. This paper draws on expert consultations and a literature review to provide a taxonomy of the most important ethical, legal, and social issues of wearable robots. These issues are categorized (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Responsibility for Killer Robots. Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
  • Trust and resilient autonomous driving systems. Adam Henschke - 2020 - Ethics and Information Technology 22 (1):81-92.
    Autonomous vehicles, and the larger socio-technical systems that they are a part of, are likely to have a deep and lasting impact on our societies. Trust is a key value that will play a role in the development of autonomous driving systems. This paper suggests that trust of autonomous driving systems will impact the ways that these systems are taken up, the norms and laws that guide them and the design of the systems themselves. Further to this, in order to (...)
  • Self-Driving Vehicles—an Ethical Overview. Sven Ove Hansson, Matts-Åke Belin & Björn Lundgren - 2021 - Philosophy and Technology 34 (4):1383-1408.
    The introduction of self-driving vehicles gives rise to a large number of ethical issues that go beyond the common, extremely narrow, focus on improbable dilemma-like scenarios. This article provides a broad overview of realistic ethical issues related to self-driving vehicles. Some of the major topics covered are as follows: Strong opinions for and against driverless cars may give rise to severe social and political conflicts. A low tolerance for accidents caused by driverless vehicles may delay the introduction of driverless systems (...)
  • Digitized Future of Medicine: Challenges for Bioethics. Elena G. Grebenshchikova & Pavel D. Tishchenko - 2020 - Russian Journal of Philosophical Sciences 63 (2):83-103.
    The article discusses the challenges, benefits, and risks that, from a bioethical perspective, arise because of the development of eHealth projects. The conceptual framework of the research is based on H. Jonas’ principles of the ethics of responsibility and B.G. Yudin’s anthropological ideas on human beings as agents who constantly change their own boundaries in the “zone of phase transitions.” The article focuses on the events taking place in the zone of phase transitions between humans and machines in eHealth. (...)
  • The Actionless Agent: An Account of Human-CAI Relationships. Charles E. Binkley & Bryan Pilkington - 2023 - American Journal of Bioethics 23 (5):25-27.
    We applaud Sedlakova and Trachsel’s work and their description of conversational artificial intelligence (CAI) as possessing a hybrid nature with features of both a tool and an agent (Sedlakova and...
  • Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of human (...)
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm. Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge to human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
  • The Benefits and Risks of Quantified Relationship Technologies: Response to Open Peer Commentaries on “The Quantified Relationship”. John Danaher, Sven Nyholm & Brian D. Earp - 2018 - American Journal of Bioethics 18 (2):3-6.
    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis (...)
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • Techno-optimism: an Analysis, an Evaluation and a Modest Defence. John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • A Moral Bind? — Autonomous Weapons, Moral Responsibility, and Institutional Reality. Bartlomiej Chomanski - 2023 - Philosophy and Technology 36 (2):1-14.
    In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard answer one of the most vexing issues in current ethics of technology: how to close the so-called “responsibility gap”? Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts the ex ante responsibility for those actions of the AWS that could not have been predicted or intended (in such cases, (...)