There are various ways to reach a group decision on a factual yes–no question. One way is to vote and adopt whatever answer the majority supports; this procedure receives some epistemological support from the Condorcet Jury Theorem. Alternatively, the group members may deliberate until they reach a decision that everybody endorses: a consensus. While the latter procedure has the advantage of making everybody happy, it has the disadvantage of being difficult to implement, especially for larger groups. Besides, the resulting consensus may be far from the truth. And so we ask: is deliberation truth-conducive in the sense that majority voting is? To address this question, we construct a highly idealized model of a particular deliberation process, inspired by the movie Twelve Angry Men, and show that the answer is 'yes': deliberation procedures can be truth-conducive just as the voting procedure is. We then explore, again on the basis of our model and using agent-based simulations, under which conditions it is epistemically better to deliberate than to vote. Our analysis shows that there are contexts in which deliberation is epistemically preferable, and we provide reasons for why this is so.
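The Condorcet Jury Theorem invoked above can be made concrete with a short calculation (a minimal illustration of the theorem itself, not of the paper's deliberation model; the group sizes and the competence level 0.6 are arbitrary choices):

```python
import math

def majority_correct_prob(n, p):
    """Exact probability that a majority of n independent voters,
    each correct with probability p, votes for the true answer
    (n is assumed odd, so ties cannot occur)."""
    k_min = n // 2 + 1
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Condorcet Jury Theorem: if each voter is better than chance (p > 1/2),
# the majority's accuracy grows with group size and approaches 1.
for n in (1, 11, 101):
    print(n, round(majority_correct_prob(n, 0.6), 4))
```

For a single voter the majority just is that voter, so the accuracy equals the individual competence; for larger odd groups it climbs strictly toward certainty.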
In this paper we investigate some mathematical consequences of the Equivocation Principle, and of the Maximum Entropy models arising from it, for first-order languages. We study the existence of Maximum Entropy models for these theories in terms of the quantifier complexity of the theory, and we investigate some invariance and structural properties of such models.
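A finite analogue may help fix ideas (this is the classic propositional-style toy case, not the first-order setting the paper studies): among all distributions on a six-sided die with a prescribed mean, the entropy maximizer is a Gibbs distribution p_i ∝ exp(λ·i), where λ is pinned down by the mean constraint; here we solve for λ by bisection.

```python
import math

# Toy finite Maximum Entropy problem: maximize entropy over distributions
# on outcomes 1..6 subject to a mean constraint. The maximizer has the
# exponential form p_i proportional to exp(lam * i); the chosen target
# mean 4.5 is an arbitrary illustrative value.
outcomes = [1, 2, 3, 4, 5, 6]
target_mean = 4.5

def mean_for(lam):
    """Mean of the Gibbs distribution with parameter lam."""
    weights = [math.exp(lam * i) for i in outcomes]
    z = sum(weights)
    return sum(i * w for i, w in zip(outcomes, weights)) / z

# mean_for is increasing in lam, so bisection finds the right parameter.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(lam * i) for i in outcomes]
z = sum(weights)
p = [w / z for w in weights]
print([round(x, 3) for x in p])
```

Since the target mean exceeds the uniform mean 3.5, λ comes out positive and the distribution tilts toward the high outcomes while remaining as "equivocal" as the constraint allows.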
We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: learning observable evidence obtained by repeated sampling from the unknown distribution, and learning higher-order information about the distribution. The first changes only the plausibility map, but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities, without changing their plausibility ordering. We look at the stability of beliefs under either of these types of learning, defining two related notions, as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.
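The two update types can be contrasted in a toy sketch (this is not the paper's formal model: a finite set of candidate coin biases stands in for the possible distributions, and a log-likelihood score stands in for the plausibility ranking):

```python
import math

# Hypothetical set of possible distributions: candidate biases of a coin.
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
plausibility = {p: 0.0 for p in candidates}  # higher score = more plausible

# First update type: repeated sampling reorders the plausibility ranking
# but leaves the set of possibilities intact. Here the agent observes a
# fixed sequence of 14 heads and 6 tails (an arbitrary illustrative sample).
observations = [True] * 14 + [False] * 6
for heads in observations:
    for p in candidates:
        plausibility[p] += math.log(p if heads else 1 - p)

# Belief, as in Belief Revision Theory: what holds at the most plausible
# candidate(s) — here, simply the top-ranked bias.
belief = max(candidates, key=lambda p: plausibility[p])
print(belief)  # -> 0.7, the candidate closest to the sample frequency

# Second update type: higher-order information ("the bias exceeds 1/2")
# shrinks the set of possibilities without reordering the survivors.
candidates = [p for p in candidates if p > 0.5]
```

The sketch mirrors the abstract's contrast: sampling acts on the plausibility map alone, while higher-order information acts on the space of candidates alone.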