188 found
1 — 50 / 188
  1. On Social Machines for Algorithmic Regulation.Nello Cristianini & Teresa Scantamburlo - manuscript
    Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of 'social machine' and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social (...)
    5 citations
  2. Zero Tolerance Policy for Autonomous Weapons: Why?Birgitta Dresp-Langley - manuscript
    A brief overview of Autonomous Weapon Systems (AWS) and their different levels of autonomy is provided, followed by a discussion of the risks represented by these systems under the light of the just war principles and insights from research in cybersecurity. Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This commentary starts from the example of chemical weapons, now banned worldwide by (...)
  3. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  4. On the Logical Impossibility of Solving the Control Problem.Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines built with the best possible intentions, killing everyone on the planet and in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia) it’s necessary we solve the control problem. The control problem asks how humans (...)
  5. Levels of Self-Improvement in AI and their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  6. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  7. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  8. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  9. First human upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  10. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. The views of 21 authors were surveyed, and some of them have several theories. A (...)
  11. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, based on pure theoretical reasoning, what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  12. AI Risk Denialism.Roman V. Yampolskiy -
    In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
  13. Ethical pitfalls for natural language processing in psychology.Mark Alfano, Emily Sullivan & Amir Ebrahimi Fard - forthcoming - In Morteza Dehghani & Ryan Boyd (eds.), The Atlas of Language Analysis in Psychology. Guilford Press.
    Knowledge is power. Knowledge about human psychology is increasingly being produced using natural language processing (NLP) and related techniques. The power that accompanies and harnesses this knowledge should be subject to ethical controls and oversight. In this chapter, we address the ethical pitfalls that are likely to be encountered in the context of such research. These pitfalls occur at various stages of the NLP pipeline, including data acquisition, enrichment, analysis, storage, and sharing. We also address secondary uses of the results (...)
  14. Two arguments against human-friendly AI.Ken Daley - forthcoming - AI and Ethics.
    The past few decades have seen a substantial increase in the focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI), which is an as-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at (...)
  15. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
    1 citation
  16. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges.Joshua Hatherley & Robert Sparrow - forthcoming - Journal of the American Medical Informatics Association.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. -/- Target audience: The target audiences for this tutorial are (...)
  17. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing.Blake Hereth & Nicholas Evans - forthcoming - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  18. Ethics of Artificial Intelligence in Brain and Mental Health.Marcello Ienca & Fabrice Jotterand (eds.) - forthcoming
  19. Quantum of Wisdom.Brett Karlan & Colin Allen - forthcoming - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. Hoboken, NJ: Wiley-Blackwell. pp. 1-6.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  20. Machine morality, moral progress, and the looming environmental disaster.Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  21. Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
    7 citations
  22. Artificial Intelligence Safety and Security.Roman Yampolskiy (ed.) - forthcoming - CRC Press.
    This book addresses different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. It will be the first to address challenges of constructing safe and secure artificially intelligent systems.
  23. Digital suffering: why it's a problem and how to prevent it.Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
  24. Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated Volition.Gopal P. Sarma - forthcoming - Arxiv Preprint Arxiv:1704.00783.
    I make some basic observations about hard takeoff, value alignment, and coherent extrapolated volition, concepts which have been central in analyses of superintelligent AI systems.
  25. How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - forthcoming - In Carissa Véliz (ed.), Oxford Handbook of Digital Ethics.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  26. Robot Ethics 2.0. From Autonomous Cars to Artificial Intelligence—Edited by Patrick Lin, Keith Abney, Ryan Jenkins. New York: Oxford University Press, 2017. Pp xiii + 421. [REVIEW]Agnė Alijauskaitė - 2022 - Erkenntnis 87 (6):3007-3010.
  27. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  28. Posthuman to Inhuman: mHealth Technologies and the Digital Health Assemblage.Jack Black & Jim Cherrington - 2022 - Theory and Event 25 (4):726--750.
    In exploring the intra-active, relational and material connections between humans and non-humans, proponents of posthumanism advocate a questioning of the ‘human’ beyond its traditional anthropocentric conceptualization. By referring specifically to controversial developments in mHealth applications, this paper critically diverges from posthuman accounts of human/non-human assemblages. Indeed, we argue that, rather than ‘dissolving’ the human subject, the power of assemblages lies in their capacity to highlight the antagonisms and contradictions that inherently affirm the importance of the subject. In outlining this (...)
  29. Extending the Is-ought Problem to Top-down Artificial Moral Agents.Robert James M. Boyles - 2022 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 9 (2):171–189.
    This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which espouses the thesis that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are not capable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed of two parts, namely: (...)
  30. COVID-19 and Singularity: Can the Philippines Survive Another Existential Threat?Robert James M. Boyles, Mark Anthony Dacela, Tyrone Renzo Evangelista & Jon Carlos Rodriguez - 2022 - Asia-Pacific Social Science Review 22 (2):181–195.
    In general, existential threats are those that may potentially result in the extinction of the entire human species, if not significantly endanger its living population. These threats include, but are not limited to, pandemics and the impacts of a technological singularity. As regards pandemics, significant work has already been done on how to mitigate, if not prevent, the aftereffects of this type of disaster. For one, certain problem areas on how to properly manage pandemic responses have already been identified, (...)
    1 citation
  31. Expert responsibility in AI development.Maria Hedlund & Erik Persson - 2022 - AI and Society:1-12.
    The purpose of this paper is to discuss the responsibility of AI experts for guiding the development of AI in a desirable direction. More specifically, the aim is to answer the following research question: To what extent are AI experts responsible in a forward-looking way for effects of AI technology that go beyond the immediate concerns of the programmer or designer? AI experts, in this paper conceptualised as experts regarding the technological aspects of AI, have knowledge and control of AI (...)
  32. Engineered Wisdom for Learning Machines.Brett Karlan & Colin Allen - 2022 - Journal of Experimental and Theoretical Artificial Intelligence.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
    1 citation
  33. Basic issues in AI policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Cham: Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  34. Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2022 - In Oxford Handbook of Digital Ethics. Oxford: Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  35. Why Machines Will Never Rule the World: Artificial Intelligence without Fear by Jobst Landgrebe & Barry Smith (Book review). [REVIEW]Walid S. Saba - 2022 - Journal of Knowledge Structures and Systems 3 (4):38-41.
    Whether it was John Searle’s Chinese Room argument (Searle, 1980) or Roger Penrose’s argument of the non-computable nature of a mathematician’s insight – an argument that was based on Gödel’s Incompleteness theorem (Penrose, 1989) – we have always had skeptics who questioned the possibility of realizing strong Artificial Intelligence (AI), or what has become known as Artificial General Intelligence (AGI). But this new book by Landgrebe and Smith (henceforth, L&S) is perhaps the strongest argument ever made against strong AI. It is (...)
  36. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    3 citations
  37. The ethical use of artificial intelligence in human resource management: a decision-making framework.Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence is increasingly inputting into various human resource management functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of (...)
    1 citation
  38. Primer on an ethics of AI-based decision support systems in the clinic.Matthias Braun, Patrik Hummel, Susanne Beck & Peter Dabrock - 2021 - Journal of Medical Ethics 47 (12):3-3.
    Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare (...)
    11 citations
  39. Dynamic Cognition Applied to Value Learning in Artificial Intelligence.Nythamar De Oliveira & Nicholas Corrêa - 2021 - Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics 4 (2):185-199.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made with prudence, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted (...)
  40. The Unfounded Bias Against Autonomous Weapons Systems.Áron Dombrovszki - 2021 - Információs Társadalom 21 (2):13–28.
    Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd if we look at the discussion of other, usually highly anticipated, AI technologies, like autonomous vehicles (AVs): even though these machines evoke very similar ethical issues, philosophers' attitudes towards them are constructive. In this article, I try to prove that there is an unjust bias against AWS because almost every argument against them is effective against AVs too. I start with the definition of "AWS." Then, (...)
  41. Inscrutable Processes: Algorithms, Agency, and Divisions of Deliberative Labour.Marinus Ferreira - 2021 - Journal of Applied Philosophy 38 (4):646-661.
    As the use of algorithmic decision‐making becomes more commonplace, so too does the worry that these algorithms are often inscrutable and our use of them is a threat to our agency. Since we do not understand why an inscrutable process recommends one option over another, we lose our ability to judge whether the guidance is appropriate and are vulnerable to being led astray. In response, I claim that a process being inscrutable does not automatically make its guidance inappropriate. This phenomenon (...)
    2 citations
  42. Who Should Bear the Risk When Self-Driving Vehicles Crash?Antti Kauppinen - 2021 - Journal of Applied Philosophy 38 (4):630-645.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
    3 citations
  43. Combating Disinformation with AI: Epistemic and Ethical Challenges.Benjamin Lange & Ted Lechterman - 2021 - IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) 1:1-5.
    AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
  44. A Citizen's Guide to Artificial Intelligence.James Maclaurin, John Danaher, John Zerilli, Colin Gavaghan, Alistair Knott, Joy Liddicoat & Merel Noorman - 2021 - Cambridge, MA, USA: MIT Press.
    A concise but informative overview of AI ethics and policy. Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring homeowners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? Authored by experts in fields (...)
  45. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  46. Cultivating Moral Attention: a Virtue-Oriented Approach to Responsible Data Science in Healthcare. Emanuele Ratti & Mark Graves - 2021 - Philosophy and Technology 34 (4):1819-1846.
    In the past few years, the ethical ramifications of AI technologies have been at the center of intense debates. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values have to shape it. In this context, ethics and moral responsibility have been mainly conceptualized as compliance to widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing from microethics and the virtue theory tradition, (...)
  47. Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach. Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - 2021 - Philosophy and Technology 34:811-832.
    How should automated vehicles react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out shortcomings of this strategy and have urged a shift of focus from trolley cases involving AVs to mundane traffic situations. Besides, these authors rightly point (...)
  48. Hey, Google, leave those kids alone: Against hypernudging children in the age of big data. James Smith & Tanya de Villiers-Botha - 2021 - AI and Society.
    Children continue to be overlooked as a topic of concern in discussions around the ethical use of people’s data and information. Where children are the subject of such discussions, the focus is often primarily on privacy concerns and consent relating to the use of their data. This paper highlights the unique challenges children face when it comes to online interferences with their decision-making, primarily due to their vulnerability, impressionability, the increased likelihood of disclosing personal information online, and their developmental capacities. (...)
  49. AI ethics and the banality of evil. Payman Tajalli - 2021 - Ethics and Information Technology 23 (3):447-454.
    In this paper, I draw on Hannah Arendt’s notion of ‘banality of evil’ to argue that as long as AI systems are designed to follow codes of ethics or particular normative ethical theories chosen by us and programmed in them, they are Eichmanns destined to commit evil. Since intelligence alone is not sufficient for ethical decision making, rather than strive to program AI to determine the right ethical decision based on some ethical theory or criteria, AI should be concerned with (...)
  50. How should artificial agents make risky choices on our behalf? Johanna Thoma - 2021 - LSE Philosophy Blog.