The Outcome‐Representation Learning Model: A Novel Reinforcement Learning Model of the Iowa Gambling Task

Cognitive Science 42 (8):2534-2561 (2018)

Abstract

The Iowa Gambling Task (IGT) is widely used to study decision‐making within healthy and psychiatric populations. However, the complexity of the IGT makes it difficult to attribute variation in performance to specific cognitive processes. Several cognitive models have been proposed for the IGT in an effort to address this problem, but currently no single model shows optimal performance for both short‐ and long‐term prediction accuracy and parameter recovery. Here, we propose the Outcome‐Representation Learning (ORL) model, a novel model that provides the best compromise between competing models. We test the performance of the ORL model on 393 subjects' data collected across multiple research sites, and we show that the ORL reveals distinct patterns of decision‐making in substance‐using populations. Our work highlights the importance of using multiple model comparison metrics to make valid inference with cognitive models and sheds light on learning mechanisms that play a role in underweighting of rare events.
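The abstract does not spell out the ORL update equations, but a rough sketch of an ORL-style reinforcement learner for a four-deck task can illustrate the kind of mechanisms the model family combines: trial-by-trial value learning, sensitivity to win frequency, and perseveration. The parameter names (a_rew, a_pun, beta_f, beta_p, k) and the exact update rules below are illustrative assumptions, not the authors' published specification, and the sketch omits details such as fictive updating of unchosen decks.

```python
# Illustrative sketch only: an ORL-style learner for a 4-deck task.
# Parameter names and update rules are assumptions for illustration;
# the published ORL model may differ in its exact formulation.
import numpy as np

def softmax(v):
    v = v - v.max()            # subtract max for numerical stability
    p = np.exp(v)
    return p / p.sum()

def simulate_orl(payoffs, a_rew=0.1, a_pun=0.1,
                 beta_f=1.0, beta_p=1.0, k=1.0, seed=0):
    """Simulate choices given a trial-by-trial payoff table.

    payoffs: array of shape (n_trials, 4) holding the net outcome each
             deck would deliver on each trial (a simplifying assumption).
    """
    rng = np.random.default_rng(seed)
    n_trials, n_decks = payoffs.shape
    ev = np.zeros(n_decks)     # expected value per deck
    ef = np.zeros(n_decks)     # expected win frequency per deck
    ps = np.zeros(n_decks)     # perseveration trace per deck
    choices = np.empty(n_trials, dtype=int)

    for t in range(n_trials):
        # combine value, frequency, and perseveration into deck utilities
        value = ev + beta_f * ef + beta_p * ps
        choice = rng.choice(n_decks, p=softmax(value))
        outcome = payoffs[t, choice]

        # separate learning rates for gains and losses (chosen deck only)
        lr = a_rew if outcome >= 0 else a_pun
        ev[choice] += lr * (outcome - ev[choice])
        ef[choice] += lr * (np.sign(outcome) - ef[choice])

        # perseveration: decay all decks, then refresh the chosen one
        ps /= (1.0 + k)
        ps[choice] = 1.0

        choices[t] = choice
    return choices
```

In practice, parameters of a model like this would be estimated from participants' observed choices (e.g., by maximum likelihood or hierarchical Bayesian fitting) rather than chosen by hand; the simulation above is only meant to show how the assumed learning and perseveration terms jointly drive deck selection.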
