Abstract
In this paper, I develop a new kind of conciliatory answer to the problem of peer disagreement. Instead of trying to guide an agent’s updating behaviour in any particular disagreement, I establish constraints on her expected behaviour and argue that, in the long run, she should tend to be conciliatory toward her peers. I first claim that this macro-approach affords new conceptual insight into the problem of peer disagreement and provides an angle complementary to the standard micro-approaches in the literature. I then detail the import of two novel results based on accuracy considerations, which establish the following: an agent should, on average, give her peers equal weight. However, if she takes both herself and her advisor to be reliable, she should usually give the party with the stronger opinion more weight. In other words, an agent’s response to peer disagreement should, over the course of many disagreements, average out to equal weight, but in any particular disagreement her response should tend to deviate from equal weight in a way that systematically depends on the actual credences she and her advisor report.