  1. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
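    A minimal sketch of the expected-value reasoning at issue (all numbers below are hypothetical placeholders, not figures from the paper): once a vast number of potential future lives enters the calculation, even a tiny reduction in extinction risk swamps any finite present-day cost, which is the overshoot the abstract describes.

    ```python
    # Hypothetical numbers, chosen only to illustrate the structure of the
    # longtermist expected-value argument discussed in the abstract.
    future_lives = 1e16      # assumed count of potential future lives
    risk_reduction = 1e-6    # assumed cut in extinction probability
    value_per_life = 1.0     # normalize: one unit of value per life

    expected_benefit = future_lives * risk_reduction * value_per_life
    print(expected_benefit)  # 1e10 units: dominates almost any present cost
    ```

    On these (invented) inputs, the same arithmetic that justifies modest prevention spending also licenses extreme sacrifices by the present generation, since no realistic present cost reaches 1e10 units.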
  2. The Evidentialist's Wager. William MacAskill, Aron Vallinder, Caspar Oesterheld, Carl Shulman & Johannes Treutlein - 2021 - Journal of Philosophy 118 (6):320-342.
    Suppose that an altruistic agent who is uncertain between evidential and causal decision theory finds herself in a situation where these theories give conflicting verdicts. We argue that even if she has significantly higher credence in CDT, she should nevertheless act in accordance with EDT. First, we claim that the appropriate response to normative uncertainty is to hedge one's bets. That is, if the stakes are much higher on one theory than another, and the credences you assign to each of (...)
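    A rough illustration of the hedging idea (the payoffs and credences below are invented for the example, not drawn from the paper): weight each act's value under a theory by one's credence in that theory; when the stakes under EDT dwarf those under CDT, the EDT-recommended act wins even at low credence in EDT.

    ```python
    # Hypothetical credences and payoffs illustrating credence-weighted
    # hedging across decision theories.
    credence = {"CDT": 0.9, "EDT": 0.1}   # much higher credence in CDT

    # Act A is what CDT recommends; act B is what EDT recommends.
    # The stakes are assumed far higher on EDT than on CDT.
    value = {
        "A": {"CDT": 1.0, "EDT": 0.0},
        "B": {"CDT": 0.0, "EDT": 1000.0},
    }

    def expected_choiceworthiness(act):
        # Sum over theories: credence(theory) * value the theory assigns.
        return sum(credence[t] * value[act][t] for t in credence)

    print(expected_choiceworthiness("A"))  # 0.9
    print(expected_choiceworthiness("B"))  # 100.0 -> act on EDT's verdict
    ```

    The sketch presupposes that values are comparable across the two theories, a substantive assumption the paper itself discusses under the heading of normative uncertainty.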
  3. Racing to the precipice: a model of artificial intelligence development. Stuart Armstrong, Nick Bostrom & Carl Shulman - 2016 - AI and Society 31 (2):201-206.
  4. How hard is artificial intelligence? Evolutionary arguments and selection effects. Carl Shulman & Nick Bostrom - 2012 - Journal of Consciousness Studies 19 (7-8).
    Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. This evolutionary argument, however, has ignored the observation selection effect that guarantees that observers will see intelligent life having arisen on their planet no matter how hard it is for intelligent life to evolve on any given Earth-like planet. We explore how the evolutionary argument might be salvaged (...)
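    A toy simulation of the selection effect (the setup and probabilities are assumptions for illustration, not from the paper): whatever the per-planet chance that intelligence evolves, every planet that hosts observers is, by construction, one where it succeeded, so the local observation alone cannot distinguish easy from hard evolution.

    ```python
    import random

    # Assumed toy model: each planet independently evolves intelligent
    # observers with probability p. We vary p over many orders of
    # magnitude and condition on being an observer.
    random.seed(0)
    planets = 1_000_000

    for p in (0.5, 1e-3, 1e-6):
        observer_planets = sum(random.random() < p for _ in range(planets))
        # Every observer, by definition, sees intelligence having arisen
        # on their own planet, whatever p is.
        print(f"p={p}: {observer_planets} observer planets; "
              f"each sees local success regardless of p")
    ```

    The point of the sketch is that the observed local success rate is 100% under every value of p, which is why the abstract argues the evolutionary argument needs more than the bare fact that intelligence arose on Earth.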
  5. How hard is artificial intelligence? The evolutionary argument and observation selection effects. Carl Shulman & Nick Bostrom - forthcoming - Journal of Consciousness Studies.