The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Cognitive enhancement takes many and diverse forms. Various methods of cognitive enhancement have implications for the near future. At the same time, these technologies raise a range of ethical issues. For example, they interact with notions of authenticity, the good life, and the role of medicine in our lives. Present and anticipated methods for cognitive enhancement also create challenges for public policy and regulation.
To what extent should we use technological advances to try to make better human beings? Leading philosophers debate the possibility of enhancing human cognition, mood, personality, and physical performance, and controlling aging. Would this take us beyond the bounds of human nature? These are questions that need to be answered now.
I argue that at least one of the following propositions is true: the human species is very likely to become extinct before reaching a ‘posthuman’ stage; any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history; we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we shall one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. I discuss some consequences of this result.
_Anthropic Bias_ explores how to reason when you suspect that your evidence is biased by "observation selection effects"--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence. This conundrum--sometimes alluded to as "the anthropic principle," "self-locating belief," or "indexical information"--turns out to be a surprisingly perplexing and intellectually stimulating challenge, one abounding with important implications for many areas in science and philosophy. There are the philosophical thought experiments and paradoxes: the Doomsday Argument; Sleeping Beauty; the Presumptuous Philosopher; Adam & Eve; the Absent-Minded Driver; the Shooting Room. And there are the applications in contemporary science: cosmology; evolutionary theory; the problem of time's arrow; quantum physics; game-theory problems with imperfect recall; even traffic analysis. _Anthropic Bias_ argues that the same principles are at work across all these domains. And it offers a synthesis: a mathematically explicit theory of observation selection effects that attempts to meet scientific needs while steering clear of philosophical paradox.
This paper argues that at least one of the following propositions is true: the human species is very likely to go extinct before reaching a "posthuman" stage; any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history; we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
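To see why the trilemma is forced, consider the fraction at the heart of the argument: writing f_p for the fraction of human-level civilizations that reach a posthuman stage and run ancestor-simulations, and N for the average number of such simulations each runs (each hosting roughly as many observers as one real history), the fraction of observers who are simulated is f_sim = f_p·N / (f_p·N + 1). A minimal sketch, with parameter values that are illustrative assumptions rather than figures from the paper:

```python
# Illustrative sketch of the fraction behind the simulation argument.
# f_p: fraction of human-level civilizations that become posthuman AND run
#      ancestor-simulations; n_sims: average number each such civilization runs.
# All values below are made up for illustration.

def simulated_fraction(f_p: float, n_sims: float) -> float:
    """Fraction of human-like observers who live in simulations, assuming
    each simulation hosts about as many observers as one real history."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

if __name__ == "__main__":
    for f_p, n in [(0.0, 0), (1e-6, 1), (0.01, 1000), (0.5, 1_000_000)]:
        print(f"f_p={f_p:g}, N={n:g} -> f_sim={simulated_fraction(f_p, n):.6f}")
```

Unless f_p·N is very small (which requires something like the first or second proposition to hold), f_sim is close to one, which is the third proposition.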
Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.
The human desire to acquire new capacities is as ancient as our species itself. We have always sought to expand the boundaries of our existence, be it socially, geographically, or mentally. There is a tendency in at least some individuals always to search for a way around every obstacle and limitation to human life and happiness.
Suppose that we develop a medically safe and affordable means of enhancing human intelligence. For concreteness, we shall assume that the technology is genetic engineering (either somatic or germ line), although the argument we will present does not depend on the technological implementation. For simplicity, we shall speak of enhancing “intelligence” or “cognitive capacity,” but we do not presuppose that intelligence is best conceived of as a unitary attribute. Our considerations could be applied to specific cognitive abilities such as verbal fluency, memory, abstract reasoning, social intelligence, spatial cognition, numerical ability, or musical talent. It will emerge that the form of argument that we use can be applied much more generally to help assess other kinds of enhancement technologies as well as other kinds of reform. However, to give a detailed illustration of how the argument form works, we will focus on the prospect of cognitive enhancement.
Transhumanism is a loosely defined movement that has developed gradually over the past two decades. It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.
Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a "posthuman" society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.
Extreme human enhancement could result in “posthuman” modes of being. After offering some definitions and conceptual clarification, I argue for two theses. First, some posthuman modes of being would be very worthwhile. Second, it could be very good for human beings to become posthuman.
Human enhancement has emerged in recent years as a blossoming topic in applied ethics. With continuing advances in science and technology, people are beginning to realize that some of the basic parameters of the human condition might be changed in the future. One important way in which the human condition could be changed is through the enhancement of basic human capacities. If this becomes feasible within the lifespan of many people alive today, then it is important now to consider the normative questions raised by such prospects. The answers to these questions might not only help us be better prepared when technology catches up with imagination, but they may be relevant to many decisions we make today, such as decisions about how much funding to give to various kinds of research. Enhancement is typically contraposed to therapy. In broad terms, therapy aims to fix something that has gone wrong, by curing specific diseases or injuries, while enhancement interventions aim to improve the state of an organism beyond its normal healthy state. However, the distinction between therapy and enhancement is problematic, for several reasons. First, we may note that the therapy-enhancement dichotomy does not map onto any corresponding dichotomy between standard-contemporary-medicine and medicine-as-it-could-be-practised-in-the-future. Standard contemporary medicine includes many practices that do not aim to cure diseases or injuries. It includes, for example, preventive medicine, palliative care, obstetrics, sports medicine, plastic surgery, contraceptive devices, fertility treatments, cosmetic dental procedures, and much else. At the same time, many enhancement interventions occur outside of the medical framework. Office workers enhance their performance by drinking coffee. Make-up and grooming are used to enhance appearance. Exercise, meditation, fish oil, and St John’s Wort are used to enhance mood. Second, it is unclear how to classify interventions that reduce the probability of disease and death.
The Sleeping Beauty problem is a touchstone for theories about self-locating belief, i.e. theories about how we should reason when data or theories contain indexical information. Opinion on this problem is split between two camps, those who defend the "1/2 view" and those who advocate the "1/3 view". I argue that both these positions are mistaken. Instead, I propose a new "hybrid" model, which avoids the faults of the standard views while retaining their attractive properties. This model _appears_ to violate Bayesian conditionalization, but I argue that this is not the case. By paying close attention to the details of conditionalization in contexts where indexical information is relevant, we discover that the hybrid model is in fact consistent with Bayesian kinematics. If the proposed model is correct, there are important lessons for the study of self-location, observation selection theory, and anthropic reasoning.
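To make the two camps concrete: in the standard setup, Beauty is woken once if a fair coin lands heads and twice if it lands tails. The following Monte Carlo sketch (an illustration, not from the paper) shows that the long-run fraction of awakenings at which the coin shows heads is 1/3, the frequency the "1/3 view" keys on, while the per-toss chance of heads remains 1/2, the quantity the "1/2 view" keys on.

```python
# Monte Carlo sketch of the Sleeping Beauty setup: one awakening on heads,
# two on tails. Counts the fraction of awakenings that occur under heads.
import random

def awakening_frequency(trials: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(f"Fraction of awakenings under heads: {awakening_frequency():.3f}")
# ~0.333: the frequency behind the 1/3 view; the per-toss chance stays 0.5.
```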
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.
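As an illustration of the kind of brain-hardware estimate the paper surveys, here is a back-of-the-envelope sketch; the synapse count and signaling rate are rough textbook orders of magnitude assumed for illustration, not figures taken from the paper.

```python
# Rough order-of-magnitude estimate of the brain's "processing power":
# operations/sec ~ number of synapses x average signaling rate.
# All figures below are coarse assumptions for illustration only.

synapses = 1e14          # assumed ~100 trillion synapses
signal_rate_hz = 100     # assumed ~100 signaling events per synapse per second
ops_per_signal = 1       # assumed one "operation" per synaptic event

brain_ops_per_sec = synapses * signal_rate_hz * ops_per_signal
print(f"~{brain_ops_per_sec:.0e} ops/sec")  # ~1e16 under these assumptions
```

Comparing such an estimate with hardware growth curves is what drives the paper's timeline argument.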
With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur. This goal has such high utility that standard utilitarians ought to focus all their efforts on it. Utilitarians of a ‘person-affecting’ stripe should accept a modified version of this conclusion. Some mixed ethical views, which combine utilitarian considerations with other criteria, will also be committed to a similar bottom line.
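A sketch of the opportunity-cost arithmetic, with placeholder figures that are assumptions rather than the paper's own numbers: even a crude estimate shows why a small gain in the probability that colonization ever occurs dwarfs the cost of a year's delay.

```python
# Opportunity cost of delayed colonization vs. the cost of reduced safety.
# All figures below are placeholder assumptions for illustration.

reachable_stars = 1e22        # assumed stars in the accessible universe
lives_per_star_year = 1e10    # assumed population each star could sustain per year
future_duration_years = 1e9   # assumed duration of the colonized future

# Cost of one year's delay: one year of potential lives not realized.
delay_cost = reachable_stars * lives_per_star_year * 1
# Cost of a 1-percentage-point drop in the probability colonization ever occurs.
risk_cost = 0.01 * reachable_stars * lives_per_star_year * future_duration_years

print(f"delay cost: ~{delay_cost:.0e} life-years")
print(f"risk  cost: ~{risk_cost:.0e} life-years")
# Under these assumptions the risk term exceeds the delay term by ~1e7,
# which is the paper's point: maximize safety, not speed.
```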
Cognitive enhancement may be defined as the amplification or extension of core capacities of the mind through improvement or augmentation of internal or external information processing systems. Cognition refers to the processes an organism uses to organize information. These include acquiring information (perception), selecting (attention), representing (understanding) and retaining (memory) information, and using it to guide behavior (reasoning and coordination of motor outputs). Interventions to improve cognitive function may be directed at any of these core faculties.
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general, an Oracle AI might be safer than unrestricted AI, but it still remains potentially dangerous.
Does human enhancement threaten our dignity as some prominent commentators have asserted? Or could our dignity perhaps be technologically enhanced? After disentangling several different concepts of dignity, this essay focuses on the idea of dignity as a quality, a kind of excellence admitting of degrees and applicable to entities both within and without the human realm. I argue that dignity in this sense interacts with enhancement in complex ways which bring to light some fundamental issues in value theory, and that the effects of any given enhancement must be evaluated in its appropriate empirical context. Yet it is possible that through enhancement we could become better able to appreciate and secure many forms of dignity that are overlooked or missing under current conditions. I also suggest that in a posthuman world, dignity as a quality could grow in importance as an organizing moral/aesthetic idea.
In some dark alley...
Mugger: Hey, give me your wallet.
Pascal: Why on Earth would I want to do that?
Mugger: Otherwise I’ll shoot you.
Pascal: But you don’t have a gun.
Mugger: Oops! I knew I had forgotten something.
Pascal: No wallet for you then. Have a nice evening.
Mugger: Wait!
Pascal: Sigh.
Mugger: I’ve got a business proposition for you... How about you give me your wallet now? In return, I promise to come to your house tomorrow and give you double the value of what’s in the wallet. Not bad, eh? A 200% return on investment in 24 hours.
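The dialogue sets up an expected-utility puzzle: however small the probability that the mugger will honor the deal, a sufficiently large promised payoff makes the naive expected value of complying positive. A sketch of that naive arithmetic, with illustrative numbers that are not from the paper:

```python
# Naive expected-value comparison behind Pascal's mugging.
# Illustrative numbers only.

wallet = 100.0           # value in the wallet
p_honest = 1e-9          # assumed tiny probability the mugger pays up
promised_multiple = 2.0  # mugger promises double the wallet back

# Hand over: lose the wallet now, tiny chance of the promised payout tomorrow.
ev = p_honest * promised_multiple * wallet - wallet
print(f"EV(hand over) = {ev:.2f}")  # negative: decline

# But the mugger can always promise more; with an astronomical promise the
# naive EV flips sign, no matter how small p_honest is. That is the puzzle.
promised_multiple = 1e12
ev = p_honest * promised_multiple * wallet - wallet
print(f"EV(hand over, huge promise) = {ev:.2f}")  # positive
```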
A Global Catastrophic Risk is one that has the potential to inflict serious damage to human well-being on a global scale. This book focuses on such risks arising from natural catastrophes, nuclear war, terrorism, biological weapons, totalitarianism, advanced nanotechnology, artificial intelligence and social collapse.
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifests as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in “nature knows best” attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.
The Doomsday argument purports to show that the risk of the human species going extinct soon has been systematically underestimated. This argument has something in common with controversial forms of reasoning in other areas, including: game theoretic problems with imperfect recall, the methodology of cosmology, the epistemology of indexical belief, and the debate over so-called fine-tuning arguments for the design hypothesis. The common denominator is a certain premiss: the Self-Sampling Assumption. We present two strands of argument in favor of this assumption. Through a series of thought experiments we then investigate some bizarre _prima facie_ consequences: backward causation, psychic powers, and an apparent conflict with the Principal Principle.
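The Bayesian shift behind the argument can be made explicit. Under the Self-Sampling Assumption, you reason as if your birth rank r were a random draw from all N humans who will ever live, so P(r | N) = 1/N for r ≤ N, and hypotheses with smaller N gain posterior probability. A minimal sketch with an illustrative 50/50 prior and round figures:

```python
# Doomsday-style Bayesian update under the Self-Sampling Assumption (SSA).
# Two hypotheses about the total number N of humans ever to live,
# with an illustrative 50/50 prior. Figures are for illustration only.

prior = {2e11: 0.5, 2e14: 0.5}    # "doom soon" vs. "doom late" (assumed)
birth_rank = 1e11                 # roughly the number of humans born so far

# SSA likelihood: P(rank | N) = 1/N for rank <= N, else 0.
unnormalized = {N: p * (1.0 / N) for N, p in prior.items() if birth_rank <= N}
total = sum(unnormalized.values())
for N, p in unnormalized.items():
    print(f"N = {N:.0e}: posterior = {p / total:.4f}")
# The "doom soon" hypothesis rises from 0.5 to ~0.999 under these assumptions.
```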
Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.
Humans will not always be the most intelligent agents on Earth, the ones steering the future. What will happen to us when we no longer play that role, and how can we prepare for this transition?
Cognitive enhancements in the context of converging technologies (with Anders Sandberg). Annals of the New York Academy of Sciences, Vol. 1093, pp. 201-207.
The future of humanity is often viewed as a topic for idle speculation. Yet our beliefs and assumptions on this subject matter shape decisions in both our personal lives and public policy – decisions that have very real and sometimes unfortunate consequences. It is therefore practically important to try to develop a realistic mode of futuristic thought about big picture questions for humanity. This paper sketches an overview of some recent attempts in this direction, and it offers a brief discussion of four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity.
Current cosmological theories say that the world is so big that all possible observations are in fact made. But then, how can such theories be tested? What could count as negative evidence? To answer that, we need to consider observation selection effects.
In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.
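The effect is easy to exhibit numerically. In the following Monte Carlo sketch (an illustrative model, not the paper's own), each agent observes the initiative's true value plus independent noise and acts if her estimate is positive; since one unilateral actor suffices, the initiative is undertaken far more often than its true value warrants as the number of agents grows.

```python
# Monte Carlo sketch of the unilateralist's curse. Each of n agents receives
# a noisy, unbiased estimate of the initiative's true value and undertakes it
# if her estimate is positive. Illustrative parameters only.
import random

def undertaking_rate(n_agents: int, true_value: float, noise_sd: float,
                     trials: int = 20_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    undertaken = sum(
        any(true_value + rng.gauss(0, noise_sd) > 0 for _ in range(n_agents))
        for _ in range(trials)
    )
    return undertaken / trials

true_value = -1.0   # the initiative is actually mildly harmful
for n in (1, 5, 20):
    rate = undertaking_rate(n, true_value, noise_sd=1.0)
    print(f"{n:2d} agents -> undertaken in {rate:.0%} of trials")
# With 1 agent the harmful initiative is rarely undertaken (~16% here);
# with 20 independent unilateralists it is undertaken ~97% of the time.
```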
Anthony Brueckner, in a recent article, proffers ‘a new way of thinking about Bostrom's Simulation Argument’. His comments, however, misconstrue the argument, and some words of explanation are in order. The Simulation Argument purports to show, given some plausible assumptions, that at least one of three propositions is true. Roughly stated, these propositions are: almost all civilizations at our current level of development go extinct before reaching technological maturity; there is a strong convergence among technologically mature civilizations such that almost all of them lose interest in creating ancestor-simulations; almost all people with our sorts of experiences live in computer simulations. I also argue that, conditional on the third proposition, you should assign a very high credence to the proposition that you live in a computer simulation. However, pace Brueckner, I do not argue that we should believe that we are in a simulation. In fact, I believe that we are probably not simulated. The Simulation Argument purports to show only that at least one of the three propositions is true; it does not tell us which one. Brueckner also writes: "It is worth noting that one reason why Bostrom thinks that the number of Sims [computer-generated minds with experiences similar to those typical of normal, embodied humans living in a Sim-free early 21st century world] will vastly outstrip the number of humans is that Sims ‘will run their …".
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
I reply to some recent comments by Brian Weatherson on my 'simulation argument'. I clarify some interpretational matters, and address issues relating to epistemological externalism, the difference from traditional brain-in-a-vat arguments, and a challenge based on 'grue'-like predicates.