  • Risk management standards and the active management of malicious intent in artificial superintelligence. Patrick Bradley - 2020 - AI and Society 35 (2):319-328.
    The likely near future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, the current global standard for risk management, (...)
  • Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
  • Reframing Ethical Theory, Pedagogy, and Legislation to Bias Open Source AGI Towards Friendliness and Wisdom. John Gray Cox - 2015 - Journal of Evolution and Technology 25 (2):39-54.
    Hopes for biasing the odds towards the development of AGI that is human-friendly depend on finding and employing ethical theories and practices that can be incorporated successfully in the construction, programming and/or developmental growth, education and mature life world of future AGI. Mainstream ethical theories are ill-adapted for this purpose because of their mono-logical decision procedures, which aim at “Golden rule” style principles and judgments that are objective in the sense of being universal and absolute. A much more helpful framework (...)
  • Superintelligence: Fears, Promises and Potentials. Ben Goertzel - 2015 - Journal of Evolution and Technology 25 (2):55-87.
    Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute, and David (...)
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of comparable severity and probability to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • Don't Worry about Superintelligence. Nicholas Agar - 2016 - Journal of Evolution and Technology 26 (1):73-82.
    This paper responds to Nick Bostrom’s suggestion that the threat of a human-unfriendly superintelligence should lead us to delay or rethink progress in AI. I allow that progress in AI presents problems that we are currently unable to solve. However, we should distinguish between currently unsolved problems for which there are rational expectations of solutions and currently unsolved problems for which no such expectation is appropriate. The problem of a human-unfriendly superintelligence belongs to the first category. It is rational to proceed (...)
  • Infusing Advanced AGIs with Human-Like Value Systems: Two Theses. Ben Goertzel - 2016 - Journal of Evolution and Technology 26 (1):50-72.
    Two theses are proposed regarding the future evolution of the value systems of advanced AGI systems. The Value Learning Thesis is a semi-formalized version of the idea that, if an AGI system is taught human values in an interactive and experiential way as its intelligence increases toward human level, it will likely adopt these human values in a genuine way. The Value Evolution Thesis is a semi-formalized version of the idea that if an AGI system begins with human-like values, and (...)