In Russell Blackford & Damien Broderick (eds.), Intelligence Unbound. Wiley. pp. 61–89 (2014-08-11)
Abstract
This chapter discusses nine ways to bias open‐source artificial general intelligence (AGI) toward friendliness. There is no way to guarantee that advanced AGI, once created and released into the world, will behave according to human ethical standards; the chapter's primary objective is therefore to suggest some potential ways to bias its development in that direction. First, it discusses engineering the capability to acquire integrated ethical knowledge, and providing rich ethical interaction and instruction that respects developmental stages. It then considers creating stable, hierarchy‐dominated goal systems, and ensuring that the early stages of recursive self‐improvement occur relatively slowly and with rich human involvement. Further strategies include tightly linking AGI with the Global Brain, and fostering deep, consensus‐building interactions and commensurability between divergent viewpoints. The chapter also proposes creating a mutually supportive community of AGIs, and encouraging measured co‐advancement of AGI software and AGI ethics theory. Finally, it suggests developing advanced AGI sooner rather than later.