The limits of probability modelling: A serendipitous tale of goldfish, transfinite numbers, and pieces of string [Book Review]

Mind and Society 1 (2):17-38 (2000)

Abstract

This paper is about the differences between probabilities and beliefs and why reasoning should not always conform to probability laws. Probability is defined in terms of urn models from which probability laws can be derived. This means that probabilities are expressed as rational numbers, that they presuppose the existence of veridical representations and, when viewed as parts of a probability model, that they are determined by a restricted set of variables. Moreover, probabilities are subjective, in that they apply to classes of events that have been deemed (by someone) to be equivalent, rather than to unique events. Beliefs, on the other hand, are multifaceted, interconnected with all other beliefs, and inexpressible in their entirety. It will be argued that there are not enough rational numbers to characterise beliefs by probabilities and that the idea of a veridical set of beliefs is questionable. The concept of a complete probability model, based on Fisher's notion of identifiable subsets, is outlined. It is argued that to be complete a model must be known to be true. This can never be the case, because whatever a person supposes to be true must be potentially modifiable in the light of new information. Thus, to infer that an individual's probability estimate is biased, it is necessary not only to show that the estimate differs from that given by a probability model, but also to assume that this model is complete, and completeness is not empirically verifiable. It follows that probability models and Bayes' theorem are not necessarily appropriate standards for people's probability judgements. The quality of a probability model depends on how reasonable it is to treat some existing uncertainty as if it were equivalent to that in a particular urn model, and this cannot be determined empirically. Bias can be demonstrated in estimates of proportions of finite populations, as in the false consensus effect. However, the modification of beliefs by ad hoc methods such as Tversky and Kahneman's heuristics can be justified, even though it results in biased judgements. This is because of pragmatic factors, such as the cost of obtaining and taking account of additional information, which are not included even in a complete probability model. Finally, an analogy is drawn between probability models and geometric figures. Both idealisations are useful but qualitatively inadequate characterisations of nature. A difference between the two is that the size of any error can be limited in the case of the geometric figure in a way that is not possible in a probability model.
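
To make the urn-model point concrete, the sketch below is a minimal illustration (not from the paper; the two urns, their contents, and the fair-coin prior are hypothetical assumptions) of two claims in the abstract: in an urn model every probability is a ratio of counts, hence a rational number, and Bayes' theorem combines such ratios into a posterior that is again a rational number. The paper's argument is that beliefs need not be representable by quantities of this kind, even when the calculation itself is unobjectionable.

```python
# Illustrative sketch only (hypothetical urns, not from the paper):
# urn probabilities as ratios of counts, and a Bayes' theorem update
# over which urn a drawn ball came from.
from fractions import Fraction

# Hypothetical urns: (red balls, black balls)
urns = {"A": (3, 7), "B": (6, 4)}

# Prior: the urn is chosen by a fair coin toss, so each gets 1/2.
prior = {name: Fraction(1, 2) for name in urns}

# Likelihood of drawing a red ball from each urn: a ratio of counts,
# hence always a rational number.
likelihood_red = {
    name: Fraction(red, red + black) for name, (red, black) in urns.items()
}

# Bayes' theorem: P(urn | red) = P(red | urn) * P(urn) / P(red).
evidence = sum(likelihood_red[n] * prior[n] for n in urns)
posterior = {n: likelihood_red[n] * prior[n] / evidence for n in urns}

print(posterior)  # {'A': Fraction(1, 3), 'B': Fraction(2, 3)}
```

Every quantity in the sketch is exactly rational because it is built from counts; the abstract's contention is that treating a person's uncertainty as if it were equivalent to such an urn, and judging their estimates against the resulting model, requires an assumption of completeness that cannot be verified empirically.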

