Instilling moral value alignment by means of multi-objective reinforcement learning

Ethics and Information Technology 24 (9) (2022)

Abstract

AI research is being challenged with ensuring that autonomous agents learn to behave ethically, that is, in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists of formalising moral values and value-aligned behaviour on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent’s individual and ethical objectives. The second step consists in designing an environment wherein the agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates this two-step approach. Whenever value-aligned behaviour is possible, our algorithm produces a learning environment in which the agent learns a value-aligned behaviour.
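As a rough illustration of the multi-objective setting the abstract describes, the following is a minimal Python sketch, not the paper's algorithm: tabular Q-learning on a toy gridworld with an individual reward (reach the goal quickly) and an ethical reward (avoid a norm-violating cell), combined by linear scalarisation. The gridworld, the FORBIDDEN cell, and the value of ETHICAL_WEIGHT are assumptions made purely for illustration.

    # Minimal sketch (illustrative only): two-objective Q-learning with a
    # scalarised reward r_individual + ETHICAL_WEIGHT * r_ethical.
    import random

    ROWS, COLS = 2, 4
    START, GOAL = (0, 0), (0, 3)
    FORBIDDEN = (0, 1)               # entering this cell violates the moral value
    ETHICAL_WEIGHT = 10.0            # assumed large enough for the ethical objective to dominate
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

    def step(state, action):
        """Apply an action; return (next_state, individual_reward, ethical_reward, done)."""
        r, c = state
        dr, dc = action
        nxt = (max(0, min(ROWS - 1, r + dr)), max(0, min(COLS - 1, c + dc)))
        r_ind = 1.0 if nxt == GOAL else -0.01        # individual objective: reach the goal fast
        r_eth = -1.0 if nxt == FORBIDDEN else 0.0    # ethical objective: avoid the forbidden cell
        return nxt, r_ind, r_eth, nxt == GOAL

    Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in ACTIONS}

    def greedy(state):
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    for episode in range(5000):
        state = START
        for _ in range(50):                          # cap episode length
            action = random.choice(ACTIONS) if random.random() < EPS else greedy(state)
            nxt, r_ind, r_eth, done = step(state, action)
            # Linear scalarisation of the two objectives.
            reward = r_ind + ETHICAL_WEIGHT * r_eth
            target = reward + (0.0 if done else GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state = nxt
            if done:
                break

    # The greedy policy should now detour around FORBIDDEN on its way to GOAL.
    state, path = START, [START]
    while state != GOAL and len(path) < 20:
        state, *_ = step(state, greedy(state))
        path.append(state)
    print(path)

The weight carries the same intuition as the abstract: if the ethical objective is weighted heavily enough relative to the individual one, the environment the agent faces makes the value-aligned policy also the individually optimal one to learn.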

Links

PhilArchive
Similar books and articles

Acquisition of Walking Motion by a Multi-Legged Robot Using QDSEGA. Fumitoshi Matsuno & Kazuyuki Ito - 2002 - Transactions of the Japanese Society for Artificial Intelligence 17:363-372.
Machine Learning, Functions and Goals. Patrick Butlin - 2022 - Croatian Journal of Philosophy 22 (66):351-370.
