Time to absorption in discounted reinforcement models

Abstract

Reinforcement schemes are a class of non-Markovian stochastic processes. Their non-Markovian nature allows them to model a form of memory of the past. One subclass of such models is those in which the past is exponentially discounted or forgotten. Often, models in this subclass have the property of becoming trapped with probability 1 in some degenerate state. While previous work has concentrated on such limit results, we focus here on a contrary effect, namely that the time to become trapped may increase exponentially in 1/x as the discount rate, 1 − x, approaches 1. As a result, the time to become trapped may easily exceed the lifetime of the simulation or of the physical data being modeled. In such a case, the quasi-stationary behavior is more germane. We apply our results to a model of social network formation based on ternary (three-person) interactions with uniform positive reinforcement.
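The abstract does not spell out the update rule, but the following is a minimal sketch of the kind of exponentially discounted reinforcement process it describes. It uses two actions rather than the paper's ternary network model, and the function name simulate_trapping_time, the weight-update form, and the trapping threshold eps are all illustrative assumptions, not the authors' construction: at each step an action is chosen with probability proportional to its weight, all weights are multiplied by the discount rate 1 − x, and the chosen action is reinforced.

```python
import random

def simulate_trapping_time(x, reinforcement=1.0, w0=(1.0, 1.0),
                           eps=1e-6, max_steps=10**5):
    """Run a two-action discounted reinforcement process until it is
    (approximately) trapped, and return the number of steps taken.

    Each step: pick an action with probability proportional to its weight,
    multiply all weights by the discount rate (1 - x), then add the
    reinforcement to the chosen action's weight.
    """
    w = list(w0)
    for t in range(1, max_steps + 1):
        p0 = w[0] / (w[0] + w[1])
        chosen = 0 if random.random() < p0 else 1
        w[0] *= 1.0 - x             # exponentially discount the past ...
        w[1] *= 1.0 - x
        w[chosen] += reinforcement  # ... and reinforce the chosen action
        # True absorption is only a limit state (a weight never reaches
        # exactly zero), so we declare the process trapped once one action
        # is chosen with probability at least 1 - eps; eps is arbitrary.
        if max(w) / (w[0] + w[1]) >= 1.0 - eps:
            return t
    return max_steps  # censored: not trapped within the step budget

if __name__ == "__main__":
    for x in (0.5, 0.3, 0.2, 0.1):
        times = [simulate_trapping_time(x) for _ in range(20)]
        print(f"x = {x:.2f}  mean steps to trap ~ {sum(times) / len(times):,.0f}")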

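Under these assumptions, runs with small x will often exhaust the step budget before trapping, which is precisely the effect the abstract highlights: when trapping times grow exponentially in 1/x, the quasi-stationary behavior dominates any feasible simulation length.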

Author's Profile

Brian Skyrms
University of California, Irvine
