  • Branching Is Not a Bug; It’s a Feature: Personal Identity and Legal (and Moral) Responsibility. Mark Walker - 2020 - Philosophy and Technology 33 (2): 173-190.
    Prospective developments in computer and nanotechnology suggest that there is some possibility—perhaps as early as this century—that we will have the technological means to attempt to duplicate people. For example, it has been speculated that the psychology of individuals might be emulated on a computer platform to create a personality duplicate—an “upload.” Physical duplicates might be created by advanced nanobots tasked with creating molecule-for-molecule copies of individuals. Such possibilities are discussed in the philosophical literature as (putative) cases of “fission”: one (...)
  • When Should Two Minds Be Considered Versions of One Another? Ben Goertzel - 2012 - International Journal of Machine Consciousness 4 (1): 177-185.
  • El volcado de la mente en la máquina y el problema de la identidad personal. Antonio Diéguez - 2022 - Revista de Filosofía (La Plata) 52 (2): e054.
    This paper analyzes whether mind uploading, should it ever become technologically possible, would preserve or destroy the personal identity of the person who underwent the upload. It examines how the question might be answered depending on which criteria for the persistence of personal identity are assumed. There is no single answer, since personal identity would or would not be preserved depending on the assumptions one accepts. (...)
  • Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)