This chapter is a philosophical survey of some leading approaches in formal epistemology in the so-called ‘Bayesian’ tradition. On these approaches, a rational agent’s degrees of belief—credences—at a time are representable by probability functions. We also canvass various further putative ‘synchronic’ rationality norms on credences. We then consider ‘diachronic’ norms that are thought to constrain how credences should respond to evidence. We discuss some of the main lines of recent debate, and conclude with some prospects for future research.
This book offers a concise survey of basic probability theory from a thoroughly subjective point of view whereby probability is a mode of judgment. Written by one of the greatest figures in the field of probability theory, the book is both a summation and synthesis of a lifetime of wrestling with these problems and issues. After an introduction to basic probability theory, there are chapters on scientific hypothesis-testing, on changing your mind in response to generally uncertain observations, on expectations of the values of random variables, on de Finetti's dissolution of the so-called problem of induction, and on decision theory.
The aim of this paper is to present and analyse Bruno de Finetti's view that the axiom of countable additivity of the probability calculus cannot be justified in terms of the subjective interpretation of probability. After presenting the core of the subjective theory of probability and de Finetti's main argument against the axiom of countable additivity (the so-called de Finetti infinite lottery), I argue against de Finetti's view. In particular, I claim that de Finetti does not prove the impossibility of using a Dutch Book argument for the axiom of countable additivity. Consequently, we can either use a Dutch Book argument to justify the axiom of countable additivity and regard de Finetti's lottery as a special case in which the axiom does not hold, or justify countable additivity by a Dutch Book argument and reject de Finetti's lottery as irrational. The second strategy, represented especially by Jon Williamson, is much more compatible with the idea of the subjective interpretation of probability.
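The sure-loss arithmetic behind a (finite) Dutch Book can be sketched as follows. The events and credence values below are purely illustrative, not drawn from the paper:

```python
# Illustrative Dutch Book against credences that violate finite additivity.
# An agent assigns these credences to three mutually exclusive,
# jointly exhaustive events; they sum to 1.2 rather than 1.
credences = {"rain": 0.5, "snow": 0.4, "clear": 0.3}

stake = 1.0  # each bet pays `stake` if its event occurs, 0 otherwise

# The agent values a bet on E at credence(E) * stake, so a bookie can
# sell her all three bets for that total price.
price_paid = sum(c * stake for c in credences.values())  # 1.2

# Whichever state obtains, exactly one event occurs, so exactly one
# bet pays off: the agent collects `stake` and loses 0.2 for certain.
for actual_state in credences:
    winnings = stake
    net = winnings - price_paid
    print(f"state={actual_state}: net = {net:+.2f}")  # always -0.20
```

The same construction run in reverse (buying the bets back) books an agent whose credences sum to less than 1, which is why coherence pins the sum at exactly 1; whether the argument extends to countably many tickets is precisely what de Finetti's infinite lottery contests.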
Many have claimed that unspecific evidence sometimes demands unsharp, indeterminate, imprecise, vague, or interval-valued probabilities. Against this, a variant of the diachronic Dutch Book argument shows that perfectly rational agents always have perfectly sharp probabilities.
Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the best-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
There has been much recent interest in imprecise probabilities, models of belief that allow unsharp or fuzzy credence. There have also been some influential criticisms of this position. Here we argue, chiefly against Elga, that subjective probabilities need not be sharp. The key question is whether the imprecise probabilist can make reasonable sequences of decisions. We argue that she can. We outline Elga's argument and clarify the assumptions he makes and the principles of rationality he is implicitly committed to. We argue that these assumptions are too strong and that rational imprecise choice is possible in the absence of these overly strong conditions.
Numerous studies have convincingly shown that prospect theory can better describe risky choice behavior than the classical expected utility model because it makes the plausible assumption that risk aversion is driven not only by the degree of sensitivity toward outcomes, but also by the degree of sensitivity toward probabilities. This article presents the results of an experiment aimed at testing whether agents become more sensitive toward probabilities over time when they repeatedly face similar decisions, receive feedback on the consequences of their decisions, and are given ample incentives to reflect on their decisions, as predicted by Plott’s Discovered Preference Hypothesis (DPH). The results of a laboratory experiment with N = 62 participants support this hypothesis. The elicited subjective probability weighting function converges significantly toward linearity when respondents are asked to make repeated choices and are given direct feedback after each choice. Such convergence to linearity is absent in an experimental treatment where respondents are asked to make repeated choices but do not experience the resolution of risk directly after each choice, as predicted by the DPH.
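The probability weighting at issue can be made concrete with the standard one-parameter Tversky–Kahneman weighting function; the parameter values below are illustrative and are not the paper's elicited estimates:

```python
# Tversky-Kahneman one-parameter probability weighting function:
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
# g < 1 gives the inverse-S shape (small probabilities overweighted,
# large ones underweighted); g = 1 is linear weighting, w(p) = p.
def weight(p: float, g: float) -> float:
    if p in (0.0, 1.0):
        return p
    return p ** g / (p ** g + (1.0 - p) ** g) ** (1.0 / g)

# Illustrative values (g = 0.61 is a commonly cited estimate).
print(weight(0.01, g=0.61))  # > 0.01: small probability overweighted
print(weight(0.99, g=0.61))  # < 0.99: large probability underweighted
print(weight(0.30, g=1.0))   # exactly 0.30: linear weighting
```

Convergence toward linearity, as the experiment reports, corresponds to the fitted g drifting toward 1 across repeated, feedback-rich choices.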
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
On an attractive, naturalistically respectable theory of intentionality, mental contents are a form of measurement system for representing behavioral and psychological dispositions. This chapter argues that a consequence of this view is that the content/attitude distinction is measurement system relative. As a result, there is substantial arbitrariness in the content/attitude distinction. Whether some measurement of mental states counts as characterizing the content of mental states or the attitude is not a question of empirical discovery but of theoretical utility. If correct, this observation has ramifications in the theory of rationality. Some epistemologists and decision theorists have argued that imprecise credences are rationally impermissible, while others have argued that precise credences are rationally impermissible. If the measure theory of mental content is correct, however, then neither imprecise credences nor precise credences can be rationally impermissible.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation right—and thus scientific reasoning in general. Contents: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
In the Bayesian approach to quantum mechanics, probabilities—and thus quantum states—represent an agent’s degrees of belief, rather than corresponding to objective properties of physical systems. In this paper we investigate the concept of certainty in quantum mechanics. Particularly, we show how the probability-1 predictions derived from pure quantum states highlight a fundamental difference between our Bayesian approach, on the one hand, and Copenhagen and similar interpretations on the other. We first review the main arguments for the general claim that probabilities always represent degrees of belief. We then argue that a quantum state prepared by some physical device always depends on an agent’s prior beliefs, implying that the probability-1 predictions derived from that state also depend on the agent’s prior beliefs. Quantum certainty is therefore always some agent’s certainty. Conversely, if facts about an experimental setup could imply agent-independent certainty for a measurement outcome, as in many Copenhagen-like interpretations, that outcome would effectively correspond to a preexisting system property. The idea that measurement outcomes occurring with certainty correspond to preexisting system properties is, however, in conflict with locality. We emphasize this by giving a version of an argument of Stairs (1983, ‘Quantum logic, realism, and value-definiteness’, Philosophy of Science, 50, 578), which applies the Kochen–Specker theorem to an entangled bipartite system.
This paper addresses the problem of why the conditions under which standard proofs of the Dutch Book argument proceed should ever be met. In particular, the condition that there should be odds at which you would be willing to bet indifferently for or against is hardly plausible in practice, and relaxing it and applying Dutch Book considerations gives only the theory of upper and lower probabilities. It is argued that there are nevertheless circumstances, admittedly rather idealised, in which the classic form of the Dutch Book argument is valid.
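Why relaxing the two-sided betting condition yields only upper and lower probabilities can be illustrated with a toy bid-ask spread; the agent and the numbers below are hypothetical:

```python
# A hypothetical agent with lower probability 0.3 and upper probability
# 0.7 for an event E: she buys a $1 bet on E only at prices <= 0.3 and
# sells one only at prices >= 0.7, rather than betting "indifferently
# for or against" at a single fair price.
lower, upper = 0.3, 0.7

# The most aggressive trades the agent will accept: she buys at her
# maximum buying price and sells at her minimum selling price.
buy_price, sell_price = lower, upper

for e_occurs in (True, False):
    payoff = 1.0 if e_occurs else 0.0
    # Agent's net: gains on the bet she bought, owes on the bet she sold.
    net = (payoff - buy_price) + (sell_price - payoff)
    print(f"E occurs: {e_occurs}, agent's net = {net:+.2f}")  # +0.40 either way
```

Because every package of bets the agent accepts leaves her net of at least `upper - lower` in each state, no bookie can guarantee her a loss; coherence then constrains only that lower probabilities not exceed upper ones, rather than forcing a single point value.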
Largely due to the difficulty of observing behavior, empirical business ethics research relies heavily on the scenario methodology. While not disputing the usefulness of the technique, this paper highlights the importance of a careful assessment of the fit between the context of the situation described in the scenario and the knowledge and experience of the respondents. Based on a study of online auctions, we provide evidence that even respondents who have direct knowledge of the situation portrayed in the scenario may develop significantly different assessments of the level of unethical behavior. Further, those assessments may be conditioned in different ways by the same moderating variables. We conclude that care should be exercised when recruiting respondents to choose only those who can be expected to understand the scenario in its true context and that separate analyses should be conducted for groups of respondents who have different perspectives within that context.
Truth and probability; Foresight: its logical laws, its subjective sources; The bases of probability; Subjective probability as the measure of a non-measurable set; The elicitation of personal probabilities; Probability: beware of falsifications; Probable knowledge.
This study uses the television show Cash Cab as a natural experiment to investigate gender differences in decision making under uncertainty. As expected, men are much more likely to accept the end-of-game gamble than are women, but men and women appear to weigh performance variables differently when relying on subjective probabilities. At best men base their risky decisions on general aspects of their previous “good” play (not all of which is relevant at the time the decision is made) and at worst fail to condition their risky decisions on any of the relevant information available to them. In sharp contrast, women appear to consider all of the information available to them, including previous “poor” play as well as their most recent confident “good” play, which, by design, is likely the most relevant information to consider.
In this collection the authors have attempted to bring together a number of the essential papers in the subjective interpretation of probability theory; several of them—Borel's "Apropos of a theory on probability" and de Finetti's "Foresight: its logical laws, its subjective sources"—have never appeared before in English. Other articles include Venn's pioneering study as well as the more recent work of Ramsey, Koopman, and Savage. The editors provide an introduction which presents the three basic elements of any subjectivistic theory: probability as degree of belief, the coherence of beliefs of an individual, and the notion of exchangeable events. A bibliography includes references to virtually all the more important works in subjective probability, with special emphasis on the development of the mathematical side of the theory. This anthology belongs on the shelf of any philosopher concerned with inductive logic, statistical inference, or the foundations of probability theory.—P. J. M.
In this paper I address the question of whether the probabilities that appear in models of stochastic gene expression are objective or subjective. I argue that while our best models of the phenomena in question are stochastic models, this fact should not lead us to automatically assume that the processes are inherently stochastic. After distinguishing between models and reality, I give a brief introduction to the philosophical problem of the interpretation of probability statements. I argue that the objective vs. subjective distinction is a false dichotomy and is an unhelpful distinction in this case. Instead, the probabilities in our models of gene expression exhibit standard features of both objectivity and subjectivity.
Subjective probability considered as a logic of partial belief succumbs to three fundamental fallacies. These concern the representation of preference via expectation, the measurability of partial belief, and the normalization of belief.
According to reliabilist conceptions of knowledge, knowledge implies reliable true belief. Since reliability is an irreducibly probabilistic notion, one's view of knowledge also depends on one's view of probability. If one believes that all probability is subjective probability, knowledge becomes a relativized concept: knowledge is relative to a given body of beliefs of a given person at a given time. Since such a relativized conception of knowledge is extremely implausible and since reliabilism seems to capture at least part of the truth, one should rather give up a purely subjective view of probability.