About this topic
Summary: Climate change, nuclear war, pandemics, asteroid impacts, and rogue AI have each been claimed to pose an existential risk. Loosely, an existential risk is a risk of human extinction or of a similarly severe loss of future moral value.
Key works: For an extended discussion of particular existential risks and their likelihood, see Ord's The Precipice. For a case in favour of mitigating existential risks, see Bostrom's "Existential risk prevention as global priority". For discussion of how best to define existential risks, see Greaves' "Concepts of existential catastrophe".

Contents
  1. Probabilities, Methodologies and the Evidence Base in Existential Risk Assessments. Thomas Rowe & Simon Beard - manuscript
    This paper examines and evaluates a range of methodologies that have been proposed for making useful claims about the probability of phenomena that would contribute to existential risk. Section One provides a brief discussion of the nature of such claims, the contexts in which they tend to be made and the kinds of probability that they can contain. Section Two provides an overview of the methodologies that have been developed to arrive at these probabilities and assesses their advantages and disadvantages. (...)
    (1 citation)
  2. Existential risk pessimism and the time of perils. David Thorstad - manuscript
  3. The Vulnerable World Hypothesis. Nick Bostrom - 2018
    Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: (...)
    (13 citations)
  4. Is Extinction Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models. Gustav Alexandrie & Maya Eden - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    What socially beneficial causes should philanthropists prioritize if they give equal ethical weight to the welfare of current and future generations? Many have argued that, because human extinction would result in a permanent loss of all future generations, extinction risk mitigation should be the top priority given this impartial stance. Using standard models of population dynamics, we challenge this conclusion. We first introduce a theoretical framework for quantifying undiscounted cost-effectiveness over the long term. We then show that standard population models (...)
    (2 citations)
  5. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    (1 citation)
  6. Discounting, Buck-Passing, and Existential Risk Mitigation: The Case of Space Colonization. Joseph Gottlieb - forthcoming - Space Policy.
    Large-scale, self-sufficient space colonization is a plausible means of efficiently reducing existential risks and ensuring our long-term survival. But humanity is by and large myopic, and as an intergenerational global public good, existential risk reduction is systematically undervalued, hampered by intergenerational discounting. This paper explores how these issues apply to space colonization, arguing that the motivational and psychological barriers to space colonization are a special—and especially strong—case of a more general problem. The upshot is not that large-scale, self-sufficient space colonization (...)
  7. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function as a (...)
  8. Respect for others' risk attitudes and the long‐run future. Andreas L. Mogensen - forthcoming - Noûs.
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk‐avoidant risk function. This, in turn, has been claimed to require the use of a risk‐avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human survival. I (...)
  9. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    (5 citations)
  10. Longtermism and social risk-taking. H. Orri Stefánsson - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    A social planner who evaluates risky public policies in light of the other risks with which their society will be faced should judge favourably some such policies even though they would deem them too risky when considered in isolation. I suggest that a longtermist would—or at least should—evaluate risky policies in light of their prediction about future risks; hence, longtermism supports social risk-taking. I consider two formal versions of this argument, discuss the conditions needed for the argument to be valid, (...)
  11. Mistakes in the moral mathematics of existential risk. David Thorstad - forthcoming - Ethics.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
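    As background to this entry (and to the related entries 2 and 26), the kind of calculation being audited can be shown with a deliberately simple model. The sketch below is illustrative only: the constant per-century risk r and constant per-century value v are simplifying assumptions, not claims taken from any of these papers.

```latex
% Illustrative sketch only: constant per-century extinction risk r and
% constant per-century value v are assumed, not any author's figures.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Expected value of the future under constant per-century risk $r$:
\[
  \mathbb{E}[V] = \sum_{t=1}^{\infty} v\,(1-r)^{t} = v\,\frac{1-r}{r}.
\]
Cutting this century's risk from $r$ to $(1-f)r$ rescales every survival
probability by $\tfrac{1-(1-f)r}{1-r}$, so the intervention is worth
\[
  \Delta = \left(\frac{1-(1-f)r}{1-r} - 1\right)\mathbb{E}[V]
         = \frac{f\,r}{1-r}\cdot v\,\frac{1-r}{r} = f\,v.
\]
On these assumptions, a one-off fractional risk reduction is worth at most
one century's value, whatever $r$ is; astronomical stakes require further
assumptions, such as a ``time of perils'' after which risk falls sharply.
\end{document}
```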
  12. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being. S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern in (...)
  13. The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings have typically come without systematic arguments in their support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    (1 citation)
  14. Concepts of Existential Catastrophe. Hilary Greaves - 2024 - The Monist 107 (2):109-129.
    The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind (...)
  15. Should longtermists recommend hastening extinction rather than delaying it? Richard Pettigrew - 2024 - The Monist 107 (2):130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected goodness (...)
    (2 citations)
  16. Hope in an Illiberal Age? [REVIEW] Mark R. Reiff - 2024 - Ethics, Policy and Environment 2024 (January):1-9.
    In this commentary on Darrel Moellendorf’s Mobilizing Hope: Climate Change & Global Poverty (Oxford: Oxford University Press, 2022), I discuss his use of the precautionary principle, whether his hope for climate-friendly ‘green growth’ is realistic given the tendency for inequality to accelerate as it gets higher, and what I call his assumption of a liberal baseline. That is, I worry that the audience to whom the book is addressed are those who already accept the environmental and economic values to which (...)
  17. Economic inequality and the long-term future. Andreas T. Schmidt & Daan Juijn - 2024 - Politics, Philosophy and Economics 23 (1):67-99.
    Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality's intertemporal consequences, including its potential effects on humanity's (very) long-term future. Following recent arguments around future generations and (...)
  18. Protecting Future Generations by Enhancing Current Generations. Parker Crutchfield - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge.
    It is plausible that current generations owe something to future generations. One possibility is that we have a duty not to harm them. Another possibility is that we have a duty to protect them. In either case, however, to satisfy our duties to protect future generations from environmental or political degradation, we need to engage in widespread collective action. But, as we are, we have a limited ability to do so, in part because we lack the self-discipline necessary for successful collective (...)
  19. Welcome to the Machine: AI, Existential Risk, and the Iron Cage of Modernity. Jay A. Gupta - 2023 - Telos: Critical Theory of the Contemporary 2023 (203):163-169.
    Excerpt: Recent advances in the functional power of artificial intelligence (AI) have prompted an urgent warning from industry leaders and researchers concerning its “profound risks to society and humanity.” Their open letter is admirable not only for its succinct identification of said risks, which include the mass dissemination of misinformation, loss of jobs, and even the possible extinction of our species, but also for its clear normative framing of the problem: “Should we let machines flood our information channels with propaganda and (...)
  20. Unfinished Business. Jonathan Knutzen - 2023 - Philosophers' Imprint 23 (1): 4, 1-15.
    According to an intriguing though somewhat enigmatic line of thought first proposed by Jonathan Bennett, if humanity went extinct any time soon this would be unfortunate because important business would be left unfinished. This line of thought remains largely unexplored. I offer an interpretation of the idea that captures its intuitive appeal, is consistent with plausible constraints, and makes it non-redundant to other views in the literature. The resulting view contrasts with a welfare-promotion perspective, according to which extinction would be (...)
    (2 citations)
  21. Book Review "Thomas Moynihan: X-Risk: How Humanity Discovered its Own Extinction". [REVIEW] Kritika Maheshwari - 2023 - Intergenerational Justice Review 8 (2):61-62.
  22. Diving to extinction: Water birds at risk. Minh-Hoang Nguyen - 2023 - Sm3D Portal.
    Our Earth’s climate is changing. Any species living in the Earth’s ecosystem needs to adapt to the new living conditions in order to thrive. Otherwise, extinction will be its outcome. In the race for adaptation, waterbirds (Aequorlitornithes), such as penguins, cormorants, and alcids, seem to be at a disadvantage.
    (2 citations)
  23. The Myth of “Just” Nuclear Deterrence: Time for a New Strategy to Protect Humanity from Existential Nuclear Risk. Joan Rohlfing - 2023 - Ethics and International Affairs 37 (1):39-49.
    Nuclear weapons are different from every other type of weapons technology. Their awesome destructive potential and the unparalleled consequences of their use oblige us to think critically about the ethics of nuclear possession, planning, and use. Joe Nye has given the ethics of nuclear weapons deep consideration. He posits that we have a basic moral obligation to future generations to preserve roughly equal access to important values, including equal chances of survival, and proposes criteria for achieving conditional or “just deterrence” (...)
  24. The Precipice: Existential Risk and the Future of Humanity. By Toby Ord. [REVIEW] Daniel John Sportiello - 2023 - American Catholic Philosophical Quarterly 97 (1):147-150.
  25. The epistemic challenge to longtermism. Christian Tarsney - 2023 - Synthese 201 (6):1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that (...)
    (4 citations)
  26. High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation. David Thorstad - 2023 - Philosophy and Public Affairs 51 (4):373-412.
    (2 citations)
  27. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    (1 citation)
  28. Human Extinction and Moral Worthwhileness. Elizabeth Finneron-Burns - 2022 - Utilitas 34 (1):105-112.
    In this article I make two main critiques of Kaczmarek and Beard's article ‘Human Extinction and Our Obligations to the Past’. First, I argue that there is an ambiguity in what it means to realise the benefits of a sacrifice and that this ambiguity affects the persuasiveness of the authors’ arguments and responses to various objections to their view. Second, I argue that their core argument against human extinction depends on an unsupported assumption about the existence and importance of existential (...)
  29. The precipice: Existential risk and the future of humanity. Ord, Toby. New York: Hachette, 2020. 468 pp. US$30. ISBN 9780316484916 (Hardback). [REVIEW] David Heyd - 2022 - Bioethics 36 (9):1001-1002.
  30. What We Owe the Future: A Million-Year View. William MacAskill - 2022 - Basic Books.
    A guide for making the future go better. Humanity’s written history spans only five thousand years. Our yet-unwritten future could last for millions more – or it could end tomorrow. Staggering numbers of people will lead lives of flourishing or misery or never live at all, depending on what we do today.
    (1 citation)
  31. The Worst Case: Planetary Defense against a Doomsday Impactor. Joel Marks - 2022 - Space Policy 61.
    Current planetary defense policy prioritizes a probability assessment of risk of Earth impact by an asteroid or a comet in the planning of detection and mitigation strategies and in setting the levels of urgency and budgeting to operationalize them. The result has been a focus on asteroids of Tunguska size, which could destroy a city or a region, since this is the most likely sort of object we would need to defend against. However, a complete risk assessment would consider not (...)
  32. Nuclear war as a predictable surprise. Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done − yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected cost (...)
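    The intergenerational asymmetry described here is easy to see with textbook exponential discounting. The sketch below is generic illustration only: the 3% rate and 200-year horizon are assumed numbers, not figures from Rendall's paper.

```latex
% Generic discounting arithmetic for illustration; delta = 3% and
% t = 200 years are assumed values, not taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A harm of size $V$ occurring $t$ years from now has present value
\[
  \mathrm{PV} = \frac{V}{(1+\delta)^{t}}, \qquad
  \delta = 0.03,\ t = 200 \;\Rightarrow\; (1.03)^{200} \approx 369,
\]
so a catastrophe two centuries away receives under $0.3\%$ of the weight
it would receive today, while the benefits of deterrence accrue to the
present generation: a ``front-loaded good''.
\end{document}
```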
  33. Interplanetary Expansion and the Deep Future. Margarida Hermida - 2021 - In Jeffery L. Nicholas (ed.), The Expanse and Philosophy. Wiley. pp. 13–24.
    In The Expanse, the future of humanity is constantly at stake. Vestiges of an ancient alien civilization with incredibly advanced technology have been found, and this technology eventually permits human interstellar expansion through the gates. James Lenman argues that, even if we agree that biodiversity is a good thing, it only means that it's good that there should be natural diversity while life exists on Earth. While we might not be facing interplanetary war or the unpredictable consequences of ancient alien (...)
  34. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of Anthropocene (Anthropological Origin and Global Political Mechanisms). Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal (ed.), Bioethics: from theory to practice. pp. 29-44.
    The coronavirus pandemic, like its predecessors - AIDS, Ebola, etc., is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind, as the main factor in the evolutionary success of our biological species and the civilization created by it. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has less ontological as well as epistemological roots; its (...)
  35. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
    (3 citations)
  36. Toby Ord, The Precipice: Existential Risk and the Future of Humanity, Bloomsbury, 2020. [REVIEW] Benedikt Namdar & Thomas Pölzler - 2021 - Ethical Theory and Moral Practice 24 (3):855-857.
  37. Human Extinction from a Thomist Perspective. Stefan Riedener - 2021 - In Stefan Riedener, Dominic Roser & Markus Huppenbauer (eds.), Effective Altruism and Religion: Synergies, Tensions, Dialogue. Baden-Baden, Germany: Nomos. pp. 187-210.
    “Existential risks” are risks that threaten the destruction of humanity’s long-term potential: risks of nuclear wars, pandemics, supervolcano eruptions, and so on. On standard utilitarianism, it seems, the reduction of such risks should be a key global priority today. Many effective altruists agree with this verdict. But how should the importance of these risks be assessed on a Christian moral theory? In this paper, I begin to answer this question – taking Thomas Aquinas as a reference, and the risks of (...)
  38. The Precipice – Existential Risk and the Future of Humanity. Toby Ord, 2020, London, Bloomsbury Publishing. 480 pp, £22.50. [REVIEW] Martin Sand - 2021 - Journal of Applied Philosophy 38 (4):722-724.
  39. Human Extinction and Our Obligations to the Past. Patrick Kaczmarek & Simon Beard - 2020 - Utilitas 32 (2):199-208.
    On certain plausible views, if humanity were to unanimously decide to cause its own extinction, this would not be wrong, since there is no one whom this act would wrong. We argue this is incorrect. Causing human extinction would still wrong someone; namely, our forebears who sacrificed life, limb and livelihood for the good of posterity, and whose sacrifices would be made less morally worthwhile by this heinous act.
    (5 citations)
  40. If now isn't the most influential time ever, when is? [REVIEW] Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  41. The Doomsday Argument Reconsidered. Jon Mills - 2020 - Eidos. A Journal for Philosophy of Culture 4 (3):113-127.
    In our current unstable world, nuclear warfare, climate crises, and techno-nihilism are three perilous clouds hovering over an anxious humanity. In this article I examine our current state of affairs with regard to the imminent risk of nuclear holocaust, rapid climate emergencies destroying the planet, and the cultural and political consequences of emerging technologies on the fate of civilization. In the wake of innumerable existential threats to the future of our world, I revisit the plausibility of the Doomsday Argument, (...)
  42. The role of experts in the public perception of risk of artificial intelligence. Hugo Neri & Fabio Cozman - 2020 - AI and Society 35 (3):663-673.
    The goal of this paper is to describe the mechanism of the public perception of risk of artificial intelligence. For that we apply the social amplification of risk framework to the public perception of artificial intelligence using data collected from Twitter from 2007 to 2018. We analyzed when and how there appeared a significant representation of the association between risk and artificial intelligence in the public awareness of artificial intelligence. A significant finding is that the image of the risk of (...)
    (2 citations)
  43. The Precipice: Existential Risk and the Future of Humanity. Toby Ord - 2020 - London: Bloomsbury.
    Humanity stands at a precipice. Our species could survive for millions of generations — enough time to end disease, poverty, and injustice; to reach new heights of flourishing. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, gaining the power to destroy ourselves, without the wisdom to ensure we won’t. Since then, these dangers have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not (...)
    (62 citations)
  44. Toby Ord, The Precipice. Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020, 480 pp. [REVIEW] Markéta Poledníková - 2020 - Pro-Fil 21 (1):91.
    Review of the book: Toby Ord, The Precipice. Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020, 480 pp.
  45. Review of The Precipice: Existential Risk and the Future of Humanity. [REVIEW] Theron Pummer - 2020 - Notre Dame Philosophical Reviews 8.
  46. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm: SIPRI. pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding (...)
    (1 citation)
  47. The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Vincent Boulanin (ed.) - 2019 - Stockholm: SIPRI.
    This edited volume focuses on the impact of artificial intelligence (AI) on nuclear strategy. It is the first instalment of a trilogy that explores regional perspectives and trends related to the impact that recent advances in AI could have on nuclear weapons and doctrines, strategic stability and nuclear risk. It assembles the views of 14 experts from the Euro-Atlantic community on why and how machine learning and autonomy might become the focus of an arms race among nuclear-armed states; and how the (...)
    (1 citation)
  48. Space Colonization and Existential Risk. Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the first claim, my primary goal in this article is to challenge the second: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
    (5 citations)