Issues in robot ethics seen through the lens of a moral Turing test

Journal of Information, Communication and Ethics in Society 13 (2):98-109 (2015)

Abstract

Purpose – This paper explores artificial moral agency by reflecting on the possibility of a Moral Turing Test (MTT) and asking whether its lack of focus on interiority, i.e. its behaviouristic foundation, is an obstacle to establishing such a test for judging the performance of an Artificial Moral Agent (AMA). To investigate whether an MTT could serve as a useful framework for understanding, designing and engineering AMAs, we then address fundamental challenges within robot ethics regarding the formal representation of moral theories and standards. Three design approaches to AMAs are typically available: top-down, theory-driven models; bottom-up approaches, which model moral behaviour by means of adaptive learning, such as neural networks; and hybrid models, which combine components of both. With inspiration from Allen and Wallach as well as Prior, we elaborate on theory-driven approaches to machine ethics by introducing deontic tense logic. Finally, within this framework, we explore the character of human interaction with a robot that has successfully passed an MTT.

Design/methodology/approach – The ideas in this paper reflect preliminary theoretical considerations regarding the possibility of establishing an MTT based on the evaluation of moral behaviour, focusing on moral reasoning about possible actions. The discussion falls within the field of normative ethics and applies deontic tense logic to the possibilities and limitations of artificial moral agency.

Findings – The authors stipulate a formalisation of the logic of obligation, time and modality, which may serve as a candidate for implementing a system corresponding to an MTT in a restricted sense. They argue that establishing a present moral obligation requires a description of the actual situation and of the relevant general moral rules. Such a description can never be complete, as exhaustive knowledge of both situations and rules would amount to a God's eye view, enabling one to know everything relevant before making a perfect moral decision to act upon. Consequently, owing to this frame problem, from an engineering point of view one can only aim to design a robot that operates within a restricted domain and a limited space-time region. Given such a setup, the robot must be able to perform moral reasoning based on a formal description of the situation and its possible future developments. Although a system of this kind may be useful, it is clearly limited to a particular context, and it seems that special cases can always be found in which a given system fails the MTT. This calls for a new design of moral systems with trust-related components, enabling the system to learn from experience.

Originality/value – Advanced social robots with increasing autonomy will undoubtedly confront us in the near future, and our growing engagement with these robots calls for the exploration of ethical issues and underlines the importance of informing the process of engineering ethical robots. This contribution can be seen as an early step in that direction.
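The formalisation of obligation, time and modality described in the Findings can be loosely illustrated in code. The sketch below is an assumption-laden toy model, not the authors' formal system: it evaluates a Priorean tense operator F ("at some future moment") and a deontic operator O ("on every deontically ideal branch") over a branching-time tree of moments. All names, and the ideal-branch semantics for obligation, are invented for illustration.

```python
# Toy branching-time model for deontic tense logic (illustration only).
# Moments form a tree; each branch is marked deontically ideal or not.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Moment:
    facts: set
    # Each branch is a pair (next moment, is_deontically_ideal).
    branches: List[Tuple["Moment", bool]] = field(default_factory=list)

def holds(m: Moment, prop: str) -> bool:
    """Atomic truth: prop is a fact at moment m."""
    return prop in m.facts

def F(m: Moment, prop: str) -> bool:
    """Tense operator F: prop holds at some future moment on some branch."""
    return any(holds(n, prop) or F(n, prop) for n, _ in m.branches)

def O(m: Moment, prop: str) -> bool:
    """Deontic operator O: prop eventually holds on every ideal branch."""
    ideal = [n for n, ok in m.branches if ok]
    return bool(ideal) and all(holds(n, prop) or F(n, prop) for n in ideal)

# Toy situation: a promise can be kept (ideal branch) or broken (non-ideal).
keep = Moment({"promise_kept"})
brk = Moment({"promise_broken"})
now = Moment(set(), [(keep, True), (brk, False)])
```

Here O(now, "promise_kept") comes out true, since every ideal continuation keeps the promise, while F(now, "promise_broken") is also true: breaking the promise remains a possible, though non-ideal, future. The restriction the authors emphasise is visible even in this toy: the evaluation is only as good as the explicitly enumerated moments and branches, which is the frame problem in miniature.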


Similar books and articles

The Turing test. B. Jack Copeland - 2000 - Minds and Machines 10 (4):519-539.
When is a robot a moral agent. John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
A study of self-awareness in robots. Toshiyuki Takiguchi, Atsushi Mizunaga & Junichi Takeno - 2013 - International Journal of Machine Consciousness 5 (2):145-164.
The status and future of the Turing test. James H. Moor - 2001 - Minds and Machines 11 (1):77-93.
A simple comment regarding the Turing test. Benny Shanon - 1989 - Journal for the Theory of Social Behaviour 19 (June):249-56.
The Turing triage test. Robert Sparrow - 2004 - Ethics and Information Technology 6 (4):203-213.
Turing test: 50 years later. Ayse Pinar Saygin, Ilyas Cicekli & Varol Akman - 2000 - Minds and Machines 10 (4):463-518.


Author Profiles

Anne Gerdes
University of Southern Denmark
Peter Øhrstrøm
Aalborg University

References found in this work

Minds, brains, and programs. John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-57.
Computing machinery and intelligence. Alan M. Turing - 1950 - Mind 59 (October):433-60.
Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
