Abstract
The potential long-term benefits and risks of technological progress in artificial intelligence and related fields are substantial. The risks include total human extinction as a result of unfriendly superintelligent AI, while the benefits include the liberation of human existence from death and suffering through mind uploading. One approach to mitigating the risk would be to engineer ethical principles into AI devices. However, this may not be possible, due to the nature of ethical agency. Even if it is possible, these principles, extrapolated to their logical conclusions, may not favour human survival.