References
  • The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in their support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Editorial: Risks of artificial intelligence. Vincent C. Müller - 2015 - In Vincent C. Müller (ed.), Risks of Artificial Intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  • Dreyfus on the “Fringe”: information processing, intelligent activity, and the future of thinking machines. Jeffrey White - 2019 - AI and Society 34 (2):301-312.
    From his preliminary analysis in 1965, Hubert Dreyfus projected a future much different than those with which his contemporaries were practically concerned, tempering their optimism in realizing something like human intelligence through conventional methods. At that time, he advised that there was nothing “directly” to be done toward machines with human-like intelligence, and that practical research should aim at a symbiosis between human beings and computers with computers doing what they do best, processing discrete symbols in formally structured problem domains. (...)
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • The Role of Engineers in Harmonising Human Values for AI Systems Design. Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.
    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those of the societies they affect. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)
  • The epistemic challenge to longtermism. Christian Tarsney - 2023 - Synthese 201 (6):1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that (...)
  • “Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse. Beth Singler - 2020 - AI and Society 35 (4):945-955.
    “My first long haul flight that didn’t fill up and an empty row for me. I have been blessed by the algorithm”. The phrase ‘blessed by the algorithm’ expresses the feeling of having been fortunate in what appears on your feed on various social media platforms, or in the success or virality of your content as a creator, or in what gig economy jobs you are offered. However, we can also place it within wider public discourse employing theistic conceptions (...)
  • Can the predictive processing model of the mind ameliorate the value-alignment problem? William Ratoff - 2021 - Ethics and Information Technology 23 (4):739-750.
    How do we ensure that future generally intelligent AI share our values? This is the value-alignment problem. It is a weighty matter. After all, if AI are neutral with respect to our wellbeing, or worse, actively hostile toward us, then they pose an existential threat to humanity. Some philosophers have argued that one important way in which we can mitigate this threat is to develop only AI that shares our values or that has values that ‘align with’ ours. However, there (...)
  • Will intelligent machines become moral patients? Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  • An AGI Modifying Its Utility Function in Violation of the Strong Orthogonality Thesis. James D. Miller, Roman Yampolskiy & Olle Häggström - 2020 - Philosophies 5 (4):40.
    An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in blackmail. Such an AGI would necessarily have a utility function that was at least partially observable and that was influenced by how other agents chose to interact with it. This instrumental drive would conflict with the strong orthogonality thesis since the modifications would be influenced by the AGI’s intelligence. AGIs in highly (...)
  • The Concept of Rationality for a City. Kenny Easwaran - 2019 - Topoi 40 (2):409-421.
    The central aim of this paper is to argue that there is a meaningful sense in which a concept of rationality can apply to a city. The idea will be that a city is rational to the extent that the collective practices of its people enable diverse inhabitants to simultaneously live the kinds of life they are each trying to live. This has significant implications for the varieties of social practices that constitute being more or less rational. Some of these (...)
  • Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial. Nicholas Kluge Corrêa & Nythamar De Oliveira - 2020 - Veritas – Revista de Filosofia da PUCRS 65 (2):e37439.
    Experts in the development of Artificial Intelligence predict that advances in the development of intelligent systems and agents will reshape vital areas of our society. However, if such advances are not made prudently and critically-reflexively, they may result in negative outcomes for humanity. For this reason, several researchers in the field have developed a conception of AI that is robust, beneficial, and safe for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research (...)
  • Implementation of Moral Uncertainty in Intelligent Machines. Kyle Bogosian - 2017 - Minds and Machines 27 (4):591-608.
    The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common (...)
  • Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
  • Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • Racing to the precipice: a model of artificial intelligence development. Stuart Armstrong, Nick Bostrom & Carl Shulman - 2016 - AI and Society 31 (2):201-206.
  • Against the singularity hypothesis. David Thorstad - forthcoming - Philosophical Studies.
    The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. I show how leading philosophical defenses of the singularity hypothesis (Chalmers 2010, Bostrom 2014) fail (...)
  • Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
  • Künstliche Intelligenz: Chancen und Risiken [Artificial intelligence: opportunities and risks]. Adriano Mannino, David Althaus, Jonathan Erhardt, Lukas Gloor, Adrian Hutter & Thomas Metzinger - 2015 - Diskussionspapiere der Stiftung für Effektiven Altruismus 2:1-17.
    Google's acquisition of the AI company DeepMind for around half a billion US dollars signalled a year ago that promising results are expected from AI research. At the latest since well-known scientists such as Stephen Hawking and entrepreneurs such as Elon Musk and Bill Gates began warning that artificial intelligence poses a threat to humanity, the topic of AI has been making waves. The Stiftung für Effektiven Altruismus (EAS, formerly GBS Schweiz), with the support of experts in computer science and AI, has produced a comprehensive discussion paper on the opportunities (...)
  • Space colonization remains the only long-term option for humanity: A reply to Torres. Milan M. Cirkovic - 2019 - Futures 105:166-173.
    Recent discussion of the alleged adverse consequences of space colonization by Phil Torres in this journal is critically assessed. While the concern for suffering risks should be part of any strategic discussion of the cosmic future of humanity, the Hobbesian picture painted by Torres is largely flawed and unpersuasive. Instead, there is a very real risk that the skeptical arguments will be taken too seriously and future human flourishing in space delayed or prevented.
  • An offer you can't refuse: systematically exploiting utility-maximisers with malicious gambles. Adam Chalmers - unknown
    Decision theory aims to provide mathematical analysis of which choice one should rationally make in a given situation. Our current decision-theoretic norms have been very successful; however, several problems have proven vexing for standard decision theory. In this paper, I show that these problems all share a similar structure and identify a class of problems which decision theory overvalues. I demonstrate that agents who follow current standard decision theory can be exploited and have their preferences reordered if offered decision (...)
  • Extraterrestrial artificial intelligences and humanity's cosmic future: Answering the Fermi paradox through the construction of a Bracewell-Von Neumann AGI. Tomislav Miletić - 2015 - Journal of Evolution and Technology 25 (1):56-73.
    A probable solution of the Fermi paradox, and a necessary step in humanity's cosmic development, is the construction of a Bracewell-Von Neumann Artificial General Intelligence. The use of BN probes is the most plausible method of initial galactic exploration and communication for advanced ET civilizations, and our own cosmic evolution lies firmly in the utilization of, and cooperation with, AGI agents. To establish these claims, I explore the most credible developmental path from carbon-based life forms to planetary civilizations and AI (...)
  • Agential Risks: A Comprehensive Introduction. Phil Torres - 2016 - Journal of Evolution and Technology 26 (2):31-47.
    The greatest existential threats to humanity stem from increasingly powerful advanced technologies. Yet the “risk potential” of such tools can only be realized when coupled with a suitable agent who, through error or terror, could use the tool to bring about an existential catastrophe. While the existential risk literature has provided many accounts of how advanced technologies might be misused and abused to cause unprecedented harm, no scholar has yet explored the other half of the agent-tool coupling, namely the agent. (...)