Preferential attachment and the search for successful theories

Philosophy of Science 80 (5):769-782 (2013)

Abstract

Multiarm bandit problems have been used to model the selection of competing scientific theories by boundedly rational agents. In this paper, I define a variable-arm bandit problem, which allows the set of scientific theories to vary over time. I show that Roth-Erev reinforcement learning, which solves multiarm bandit problems in the limit, cannot solve this problem in a reasonable time. However, social learning via preferential attachment, combined with individual reinforcement learning that discounts the past, does.
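The Roth-Erev rule the abstract refers to can be sketched as follows: each arm carries a propensity, arms are chosen with probability proportional to their propensities, and the chosen arm's propensity is reinforced by the reward received. A discount factor forgets part of the past each round. This is a minimal illustrative sketch, not the paper's implementation; the payoff functions, parameter values, and seeds below are assumptions for the example.

```python
import random

def roth_erev(payoffs, rounds=5000, discount=0.0, seed=0):
    """Roth-Erev reinforcement learning on a multiarm bandit.

    payoffs: list of zero-argument functions, one per arm, each
             returning a nonnegative reward when called.
    discount: fraction of past propensity forgotten each round
              (0.0 recovers the classic, undiscounted rule).
    """
    rng = random.Random(seed)
    propensities = [1.0] * len(payoffs)  # uniform initial attractions
    for _ in range(rounds):
        # choose an arm with probability proportional to its propensity
        r = rng.random() * sum(propensities)
        arm, acc = 0, propensities[0]
        while acc < r:
            arm += 1
            acc += propensities[arm]
        reward = payoffs[arm]()
        # discount the past, then reinforce the chosen arm
        propensities = [(1 - discount) * q for q in propensities]
        propensities[arm] += reward
    return propensities

# two "theories" as Bernoulli arms; arm 1 pays off more often
random.seed(1)
arms = [lambda: 1.0 if random.random() < 0.3 else 0.0,
        lambda: 1.0 if random.random() < 0.7 else 0.0]
q = roth_erev(arms)
```

After many rounds the learner's propensities concentrate on the better arm, which is the limit behavior the paper contrasts with performance in reasonable time.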


Author's Profile

Jason Alexander
London School of Economics
