Moral Machines and the Threat of Ethical Nihilism
Abstract
In his famous 1950 paper, in which he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even though no machine has yet passed the Turing Test. In the wake of the computer revolution, research in artificial intelligence and cognitive science has pushed toward interpreting "thinking" as some sort of computational process. On this understanding, thinking is something both computers (in principle) and humans (in practice) can do. It is difficult to say precisely when in history the meaning of the term "thinking" headed in this direction. Signs are already present in the mechanistic and mathematical tendencies of the early modern period, and glimmers are perhaps apparent even among the ancient Greeks. But over the long haul, we have somehow come to regard "thinking" as separate from the categories of "thoughtfulness" (in the general sense of wondering about things), "insight," and "wisdom." Intelligent machines are all around us, and the world is populated with smart cars, smart phones, and even smart (robotic) appliances. But though my cell phone may be smart, I do not take that to mean it is thoughtful, insightful, or wise. So what has become of these latter categories? They seem to be bygones, left behind by scientific and computational conceptions of thinking and knowledge that no longer have much use for them. In 2000, Allen, Varner, and Zinser addressed the possibility of a Moral Turing Test (MTT) to judge the success of an artificial moral agent (AMA), a theme that is repeated in Wallach and Allen (2009).