When, What, and How Much to Reward in Reinforcement Learning-Based Models of Cognition

Cognitive Science 36 (2):333-358 (2012)

Abstract

Reinforcement learning approaches to cognitive modeling represent task acquisition as learning to choose the sequence of steps that accomplishes the task while maximizing a reward. However, an apparently unrecognized problem for modelers is choosing when, what, and how much to reward; that is, when (the moment: end of trial, end of subtask, or some other interval of task performance), what (the objective function: e.g., performance time or performance accuracy), and how much (the magnitude: binary, categorical, or continuous values). In this article, we explore the problem space of these three parameters in the context of a task whose completion entails some combination of 36 state–action pairs, where all intermediate states (i.e., after the initial state and prior to the end state) represent progressive but partial completion of the task. Different choices produce profoundly different learning paths and outcomes, with the strongest effect for moment. Unfortunately, there is little discussion in the literature of the effect of such choices. This absence is disappointing, as the choice of when, what, and how much needs to be made by a modeler for every learning model.
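
The abstract's three-parameter problem space can be made concrete with a short sketch. The paper reports simulations rather than published code, so everything below is an illustrative assumption, not the authors' implementation: the Moment enum, the reward_for helper, and the per-step bookkeeping fields (is_terminal, is_subtask_end, elapsed_time, correct) are all hypothetical names chosen to mirror the abstract's terminology.

```python
# Hypothetical sketch (not from the paper): one reward function exposing the
# three choices the abstract names -- when (moment), what (objective), and
# how much (magnitude) -- as explicit parameters a modeler must set.
from enum import Enum


class Moment(Enum):
    END_OF_TRIAL = "end_of_trial"      # reward only at the terminal state
    END_OF_SUBTASK = "end_of_subtask"  # reward at subtask boundaries
    EVERY_STEP = "every_step"          # reward after each state-action pair


def reward_for(step, *, moment, objective, magnitude):
    """Return the reward signal for one state-action step.

    step      -- dict with assumed keys: 'is_terminal', 'is_subtask_end',
                 'elapsed_time', 'correct'
    moment    -- a Moment value: WHEN reward is delivered
    objective -- 'time' or 'accuracy': WHAT is rewarded
    magnitude -- 'binary' or 'continuous': HOW MUCH is given
    """
    # WHEN: is reward due at this step under the chosen moment?
    due = (
        (moment is Moment.END_OF_TRIAL and step["is_terminal"])
        or (moment is Moment.END_OF_SUBTASK and step["is_subtask_end"])
        or moment is Moment.EVERY_STEP
    )
    if not due:
        return 0.0
    # WHAT: compute the raw objective value.
    if objective == "time":
        raw = 1.0 / (1.0 + step["elapsed_time"])  # faster earns more
    else:  # 'accuracy'
        raw = 1.0 if step["correct"] else 0.0
    # HOW MUCH: binary thresholding (cutoff of 0.5 is arbitrary) or continuous.
    if magnitude == "binary":
        return 1.0 if raw > 0.5 else 0.0
    return raw


# Example: a trial-end, accuracy-based, binary reward evaluates to 1.0.
reward_for({"is_terminal": True, "is_subtask_end": True,
            "elapsed_time": 4.0, "correct": True},
           moment=Moment.END_OF_TRIAL,
           objective="accuracy", magnitude="binary")
```

Note how varying only the moment parameter changes the sparsity of the reward signal (one reward per trial versus one per step), which matches the abstract's observation that the choice of moment has the strongest effect on learning paths and outcomes.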

Links

PhilArchive
Similar books and articles

The Evolution of Vagueness. Cailin O'Connor - 2013 - Erkenntnis (S4):1-21.
Reward: Wanted – a better definition. Irving Kupfermann - 2000 - Behavioral and Brain Sciences 23 (2):208-208.

Analytics

Added to PP: 2012-01-19
Downloads: 170 (#110,301)
Downloads (past 6 months): 4 (#797,377)
