About this topic
Summary: Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for investigating consciousness and other aspects of cognition that contribute to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, which systems of ethics must likewise accommodate. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly mediated by ever more intelligent machines. This alone raises many moral and ethical concerns: for example, who or what is responsible in the case of an accident due to system error, design flaws, or proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values rather than those directly programmed by human designers. These issues bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the ethics of AI.
Key works: Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions: Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021; see also https://plato.stanford.edu/entries/ethics-ai/
Material to categorize
  1. Nietzsche and the Machines. Sebastian Sunday Grève - 2021 - The Philosophers' Magazine 93:12-15.
    Sebastian Sunday Grève calls on us to decide what kind of life with machines we want.
Moral Status of Artificial Systems
  1. Existential Risk From AI and Orthogonality: Can We Have It Both Ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio:1-12.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  2. Quantum of Wisdom. Brett Karlan & Colin Allen - forthcoming - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. Toronto, ON, Canada: University of Toronto Press. pp. 1-6.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  3. Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach. Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - 2020 - Philosophy and Technology 34 (4):811-832.
    How should automated vehicles react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out some shortcomings of this strategy and have urged a focus on mundane traffic situations instead of trolley cases involving AVs. Besides, these authors rightly point (...)
  4. Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  5. A Citizen's Guide to Artificial Intelligence. James Maclaurin, John Danaher, John Zerilli, Colin Gavaghan, Alistair Knott, Joy Liddicoat & Merel Noorman - 2021 - Cambridge, MA, USA: MIT Press.
    A concise but informative overview of AI ethics and policy. -/- Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring homeowners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? Authored by experts in fields (...)
  6. Moral Imitation: Can An Algorithm Really Be Ethical? Anuj Puri - 2020 - Rutgers Law Record 48 (1):47-57.
    Introduction of algorithms in the realm of public administration bears the risk of reducing moral dilemmas to epistemic probabilities. This paper explores the interlinkages between attribution of moral agency on algorithms and algorithmic injustice. While challenging some of the fundamental assumptions underlying ethical machines, I argue that the moral algorithm claim is inherently flawed and has particularly severe consequences when applied to algorithms making fateful decisions regarding an individual’s life. I contend that free will, consciousness and moral intentionality are sine (...)
  7. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - forthcoming - Philosophy and Technology.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  8. On and Beyond Artifacts in Moral Relations: Accounting for Power and Violence in Coeckelbergh’s Social Relationism. Fabio Tollon & Kiasha Naidoo - forthcoming - AI and Society:1-10.
    The ubiquity of technology in our lives and its culmination in artificial intelligence raises questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that it (...)
  9. Anthropomorphism and the Impact on the Perception and Implementation of AI Systems. Marie Oldfield -
    Anthropomorphism has long been used as a way for humans to make sense of their surroundings. By converting abstract concepts into objects or concepts that we can relate to, we discover a common language with which we can communicate, i.e. "by which one thing is described in terms of another". Anthropomorphism is based in multiple fields such as sociology, psychology, neurology, philosophy, etc. This technique has been seen across history in such fields as religion, fables and folk tales where (...)
  10. The Ethics of Generating Posthumans: Philosophical and Theological Reflections on Bringing New Persons Into Existence. Trevor Stammers - 2022 - London, UK: Bloomsbury Academic.
    Is it possible, ethically speaking, to create posthuman and transhuman persons from a religious perspective? Who is responsible for post and transhuman creation? Can post and transhuman persons be morally accountable? Addressing such pressing ethical questions around post and transhuman creation, this volume considers the philosophical and theological arguments that define and stimulate contemporary debate. Contributors consider the full implications of creating post and transhuman beings by highlighting the role of new technologies in shaping new forms of consciousness, as well (...)
  11. Ethics of Artificial Intelligence. S. Matthew Liao (ed.) - 2020 - Oxford University Press.
    "Featuring seventeen original essays on the ethics of Artificial Intelligence by some of the most prominent AI scientists and academic philosophers today, this volume represents the state-of-the-art thinking in this fast-growing field and highlights some of the central themes in AI and morality such as how to build ethics into AI, how to address mass unemployment as a result of automation, how to avoid designing AI systems that perpetuate existing biases, and how to determine whether an AI is conscious. As (...)
  12. The Oxford Handbook of Ethics of AI. Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.) - 2020 - Oxford Handbooks.
    This 44-chapter volume tackles a quickly-evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context; at the same time, breaking new ground in taking on novel subjects and pursuing fresh approaches. The term "A.I." is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and (...)
  13. The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls Are ‘Necessarily’ Equal When Considered as Energy). Tyler Jaynes - 2021 - J (2571-8800) 3 (4):452-475.
    What separates the unique nature of human consciousness and that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)
  14. How Do Technological Artefacts Embody Moral Values? Michael Klenk - 2020 - Philosophy and Technology 34 (3):525-544.
    According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the (...)
  15. Walking Through the Turing Wall. Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  16. The Moral Status of Technical Artefacts. Peter Kroes (ed.) - 2014 - Springer.
    This book considers the question: to what extent does it make sense to qualify technical artefacts as moral entities? The authors’ contributions trace recent proposals and topics including instrumental and non-instrumental values of artefacts, agency and artefactual agency, values in and around technologies, and the moral significance of technology. The editors’ introduction explains that as ‘agents’ rather than simply passive instruments, technical artefacts may actively influence their users, changing the way they perceive the world, the way they act in the (...)
  17. Artificial Agents and Their Moral Nature. Luciano Floridi - 2014 - In Peter Kroes (ed.), The moral status of technical artefacts. pp. 185–212.
    Artificial agents, particularly but not only those in the infosphere Floridi (Information – A very short introduction. Oxford University Press, Oxford, 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues concerning their responsibility. The (...)
  18. The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  19. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  20. Artificial Moral Patients: Mentality, Intentionality, and Systematicity. Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing (...)
  21. Anti-Natalism and the Creation of Artificial Minds. Bartek Chomanski - forthcoming - Journal of Applied Philosophy.
    Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, (...)
  22. Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  23. Tecno-especies: la humanidad que se hace a sí misma y los desechables. Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  24. The Moral Addressor Account of Moral Agency. Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
  25. Is It Time for Robot Rights? Moral Status in Artificial Entities. Vincent C. Müller - 2021 - Ethics and Information Technology (4):1-9.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  26. Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military‐Industrial Complex. Isaac Taylor - 2021 - Journal of Applied Philosophy 38 (2):320-334.
  27. Employing Lethal Autonomous Weapon Systems. Matti Häyry - 2020 - International Journal of Applied Philosophy 34 (2):173-181.
    The ethics of warfare and military leadership must pay attention to the rapidly increasing use of artificial intelligence and machines. Who is responsible for the decisions made by a machine? Do machines make decisions? May they make them? These issues are of particular interest in the context of Lethal Autonomous Weapon Systems. Are they autonomous or just automated? Do they violate the international humanitarian law which requires that humans must always be responsible for the use of lethal force and for (...)
  28. Moral Zombies: Why Algorithms Are Not Moral Agents. Carissa Véliz - forthcoming - AI and Society:1-11.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  29. Prolegómenos a una ética para la robótica social. Júlia Pareto Boada - 2021 - Dilemata 34:71-87.
    Social robotics has a high disruptive potential, for it expands the field of application of intelligent technology to practical contexts of a relational nature. Due to their capacity to “intersubjectively” interact with people, social robots can take over new roles in our daily activities, multiplying the ethical implications of intelligent robotics. In this paper, we offer some preliminary considerations for the ethical reflection on social robotics, so as to clarify how to correctly orient critical-normative thinking in this arduous task. (...)
  30. In search of the moral status of AI: why sentience is a strong argument. Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  31. A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen [20], (...)
  32. Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  33. The Hard Limit on Human Nonanthropocentrism. Michael R. Scheessele - forthcoming - AI and Society:1-17.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  34. Autonomous Weapon Systems: Failing the Principle of Discrimination. Ariel Guersenzvaig - 2018 - IEEE Technology and Society Magazine 37 (1):55-61.
    In this article, I explore the ethical permissibility of autonomous weapon systems (AWSs), also colloquially known as killer robots: robotic weapons systems that are able to identify and engage a target without human intervention. I introduce the subject, highlight key technical issues, and provide necessary definitions and clarifications in order to limit the scope of the discussion. I argue for a (preemptive) ban on AWSs anchored in just war theory and International Humanitarian Law (IHL), which are both briefly introduced below.
  35. On Human Genome Manipulation and Homo Technicus: The Legal Treatment of Non-Natural Human Subjects. Tyler L. Jaynes - 2021 - AI and Ethics 1 (3):331-345.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time, that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive (...)
  36. Pragmatism for a Digital Society: The (In)Significance of Artificial Intelligence and Neural Technology. Matthew Sample & Eric Racine - 2021 - In Orsolya Friedrich, Andreas Wolkenstein, Christoph Bublitz, Ralf J. Jox & Eric Racine (eds.), Clinical Neurotechnology meets Artificial Intelligence. Springer. pp. 81-100.
    Headlines in 2019 are inundated with claims about the “digital society,” making sweeping assertions of societal benefits and dangers caused by a range of technologies. This situation would seem an ideal motivation for ethics research, and indeed much research on this topic is published, with more every day. However, ethics researchers may feel a sense of déjà vu, as they recall decades of other heavily promoted technological platforms, from genomics and nanotechnology to machine learning. How should ethics researchers respond to (...)
  37. The Measurement Problem of Consciousness. Heather Browning & Walter Veit - 2020 - Philosophical Topics 48 (1):85-108.
    This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence of and properties of consciousness? Most methods are currently developed through evaluation of the presence of consciousness in humans and here we argue that there are particular problems in application of these methods to nonhuman cases—what we call the indicator validity problem and the extrapolation problem. The first (...)
  38. Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects. Nik Hynek & Anzhelika Solovyeva - 2021 - AI and Society 36 (1):79-99.
    The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with the approaching dramatic change in the nature of warfare. What becomes particularly important and evermore intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there has not been a global regime banning this technology, regulatory attempts at establishing a ban (...)
  39. Artificial intelligence and moral rights. Martin Miernicki & Irene Ng - 2021 - AI and Society 36 (1):319-329.
    Whether copyrights should exist in content generated by an artificial intelligence is a frequently discussed issue in the legal literature. Most of the discussion focuses on economic rights, whereas the relationship of artificial intelligence and moral rights remains relatively obscure. However, as moral rights traditionally aim at protecting the author’s “personal sphere”, the question whether the law should recognize such protection in the content produced by machines is pressing; this is especially true considering that artificial intelligence is continuously further developed (...)
  40. Moral Control and Ownership in AI Systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems are increasingly being used in multiple applications and receiving more attention from the public (...)
  41. The Hard Problem of AI Rights. Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  42. Rights for Robots: Artificial Intelligence, Animal and Environmental Law (2020) by Joshua Gellers. [REVIEW] Kamil Mamak - 2021 - Science and Engineering Ethics 27 (3):1-4.
  43. Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.) - 2021 - New York: Oxford University Press.
    The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. In this vacuum, the public has (...)
  44. Towards a Middle-Ground Theory of Agency for Artificial Intelligence. Louis Longin - 2020 - In M. Nørskov, J. Seibt & O. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam, Netherlands: pp. 17-26.
    The recent rise of artificial intelligence (AI) systems has led to intense discussions on their ability to achieve higher-level mental states or the ethics of their implementation. One question, which so far has been neglected in the literature, is the question of whether AI systems are capable of action. While the philosophical tradition appeals to intentional mental states, others have argued for a widely inclusive theory of agency. In this paper, I will argue for a gradual concept of agency because (...)
  45. Debate: What is Personhood in the Age of AI? David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  46. Empathy and Instrumentalization: Late Ancient Cultural Critique and the Challenge of Apparently Personal Robots. Jordan Joseph Wales - 2020 - In Marco Nørskov, Johanna Seibt & Oliver Santiago Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam: IOS Press. pp. 114-124.
    According to a tradition that we hold variously today, the relational person lives most personally in affective and cognitive empathy, whereby we enter subjective communion with another person. Near future social AIs, including social robots, will give us this experience without possessing any subjectivity of their own. They will also be consumer products, designed to be subservient instruments of their users’ satisfaction. This would seem inevitable. Yet we cannot live as personal when caught between instrumentalizing apparent persons (slaveholding) or numbly (...)
  47. Surrogates and Artificial Intelligence: Why AI Trumps Family. Ryan Hubbard & Jake Greenblum - 2020 - Science and Engineering Ethics 26 (6):3217-3227.
    The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm. Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as (...)
  48. Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  49. Legal Person- or Agenthood of Artificial Intelligence Technologies. Tanel Kerikmäe, Peeter Müürsepp, Henri Mart Pihl, Ondrej Hamuľák & Hovsep Kocharyan - 2020 - Acta Baltica Historiae Et Philosophiae Scientiarum 8 (2):73-92.
    Artificial intelligence is developing rapidly. There are technologies available that fulfil several tasks better than humans can and even behave like humans to some extent. Thus, the situation prompts the question whether AI should be granted legal person- and/or agenthood? There have been similar situations in history where the legal status of slaves or indigenous peoples was discussed. Still, in those historical questions, the subjects under study were always natural persons, i.e., they were living beings belonging to the species Homo (...)