It is tempting to think that, if a person's beliefs are coherent, they are also likely to be true. This truth conduciveness claim is the cornerstone of the popular coherence theory of knowledge and justification. Erik Olsson's new book is the most extensive and detailed study of coherence and probable truth to date. Setting new standards of precision and clarity, Olsson argues that the value of coherence has been widely overestimated. Provocative and readable, Against Coherence will make stimulating reading for epistemologists and anyone with a serious interest in truth.
This paper aims to contribute to our understanding of the notion of coherence by explicating in probabilistic terms, step by step, what seem to be our most basic intuitions about that notion, to wit, that coherence is a matter of hanging or fitting together, and that coherence is a matter of degree. A qualitative theory of coherence will serve as a stepping stone to formulate a set of quantitative measures of coherence, each of which seems to capture well the aforementioned intuitions. Subsequently it will be argued that one of those measures does better than the others in light of some more specific intuitions about coherence. This measure will be defended against two seemingly obvious objections.
This target article presents a new computational theory of explanatory coherence that applies to the acceptance and rejection of scientific hypotheses as well as to reasoning in everyday life. The theory consists of seven principles that establish relations of local coherence between a hypothesis and other propositions. A hypothesis coheres with propositions that it explains, or that explain it, or that participate with it in explaining other propositions, or that offer analogous explanations. Propositions are incoherent with each other if they are contradictory. Propositions that describe the results of observation have a degree of acceptability on their own. An explanatory hypothesis is accepted if it coheres better overall than its competitors. The power of the seven principles is shown by their implementation in a connectionist program called ECHO, which treats hypothesis evaluation as a constraint satisfaction problem. Inputs about the explanatory relations are used to create a network of units representing propositions, while coherence and incoherence relations are encoded by excitatory and inhibitory links. ECHO provides an algorithm for smoothly integrating theory evaluation based on considerations of explanatory breadth, simplicity, and analogy. It has been applied to such important scientific cases as Lavoisier's argument for oxygen against the phlogiston theory and Darwin's argument for evolution against creationism, and also to cases of legal reasoning. The theory of explanatory coherence has implications for artificial intelligence, psychology, and philosophy.
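The connectionist dynamics the abstract describes can be illustrated with a minimal constraint-satisfaction sketch. The propositions, link weights, and update parameters below are illustrative assumptions in the spirit of ECHO, not Thagard's actual implementation: evidence E1 is pumped by a clamped "special" unit (data priority), hypothesis H1 explains E1 via an excitatory link, and rival hypothesis H2 contradicts H1 via an inhibitory link; the network is relaxed until the explanatory hypothesis wins.

```python
# Minimal ECHO-style constraint-satisfaction sketch (illustrative parameters).
DECAY, MIN_A, MAX_A = 0.05, -1.0, 1.0

# Units: E1 is evidence; H1 explains E1; H2 contradicts H1 and explains nothing.
activation = {"SPECIAL": 1.0, "E1": 0.01, "H1": 0.01, "H2": 0.01}

# Symmetric links: positive weight = excitatory (coherence),
# negative weight = inhibitory (incoherence).
links = [
    ("SPECIAL", "E1", 0.05),   # data priority: observations acceptable on their own
    ("E1", "H1", 0.04),        # H1 explains E1
    ("H1", "H2", -0.06),       # H1 and H2 contradict each other
]

def settle(steps=200):
    for _ in range(steps):
        net = {u: 0.0 for u in activation}
        for a, b, w in links:           # symmetric spread of activation
            net[a] += w * activation[b]
            net[b] += w * activation[a]
        for u in activation:
            if u == "SPECIAL":          # the special unit stays clamped at 1.0
                continue
            n = net[u]
            delta = n * (MAX_A - activation[u]) if n > 0 else n * (activation[u] - MIN_A)
            activation[u] = max(MIN_A, min(MAX_A, activation[u] * (1 - DECAY) + delta))

settle()
```

After settling, the explaining hypothesis H1 has positive activation (accepted) while its contradictory rival H2 is driven negative (rejected), mirroring the acceptance-by-coherence behavior described above.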
Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent work in Bayesian epistemology. This work shows that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These theorems are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these results bear on different conceptions of coherence. I investigate this situation using Thagard’s ECHO model of explanatory coherence. Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports.
In 2012, the Geological Time Scale, which sets the temporal framework for studying the timing and tempo of all major geological, biological, and climatic events in Earth’s history, had one-quarter of its boundaries moved in a widespread revision of radiometric dates. The philosophy of metrology helps us understand this episode, and it, in turn, elucidates the notions of calibration, coherence, and consilience. I argue that coherence testing is a distinct activity preceding calibration and consilience, and I highlight the value of discordant evidence and trade-offs scientists face in calibration. The iterative nature of calibration, moreover, raises the problem of legacy data.
Probabilistic coherence is not an absolute requirement of rationality; nevertheless, it is an ideal of rationality with substantive normative import. An idealized rational agent who avoided making implicit logical errors in forming his preferences would be coherent. In response to the challenge, recently made by epistemologists such as Foley and Plantinga, that appeals to ideal rationality render probabilism either irrelevant or implausible, I argue that idealized requirements can be normatively relevant even when the ideals are unattainable, so long as they define a structure that links imperfect and perfect rationality in a way that enables us to make sense of the notion of better approximations to the ideal. I then analyze the notion of approximation to the ideal of coherence by developing a generalized theory of belief functions that allows for incoherence, and showing how such belief functions can be ordered with regard to greater or lesser coherence.
There are at least two different aspects of our rational evaluation of agents’ doxastic attitudes. First, we evaluate these attitudes according to whether they are supported by one’s evidence (substantive rationality). Second, we evaluate these attitudes according to how well they cohere with one another (structural rationality). In previous work, I’ve argued that substantive and structural rationality really are distinct, sui generis, kinds of rationality – call this view ‘dualism’, as opposed to ‘monism’, about rationality – by arguing that the requirements of substantive and structural rationality can come into conflict. In this paper, I push the dialectic on this issue forward in two main ways. First, I argue that the most promising ways of resisting the diagnosis of my cases as conflicts still end up undermining monism in different ways. Second, supposing for the sake of argument that we should understand the cases as conflicts, I address the question of what we should do when such conflicts arise. I argue that, at least in a prominent kind of conflict case, the coherence requirements take precedence over the evidential requirements.
It is obvious that we would not want to demand that an agent's beliefs at different times exhibit the same sort of consistency that we demand from an agent's simultaneous beliefs; there's nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is “transmitted” to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
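The transmission phenomenon can be seen in a toy joint distribution over evidence E and hypotheses H1, H2. The numbers below are invented for illustration and do not come from the paper: E confirms H1, H1 and H2 are coherent by the Shogenji measure (ratio above 1), and E thereby also confirms H2.

```python
# Toy joint distribution over (E, H1, H2) illustrating confirmation transmission.
# Probabilities are invented for illustration; each triple is 1 = true, 0 = false.
joint = {
    (1, 1, 1): 0.30, (1, 1, 0): 0.05, (1, 0, 1): 0.05, (1, 0, 0): 0.10,
    (0, 1, 1): 0.10, (0, 1, 0): 0.05, (0, 0, 1): 0.05, (0, 0, 0): 0.30,
}

def p(pred):
    # probability of the event picked out by the predicate on worlds
    return sum(q for w, q in joint.items() if pred(w))

p_e  = p(lambda w: w[0] == 1)
p_h1 = p(lambda w: w[1] == 1)
p_h2 = p(lambda w: w[2] == 1)
p_h1_given_e = p(lambda w: w[0] == 1 and w[1] == 1) / p_e
p_h2_given_e = p(lambda w: w[0] == 1 and w[2] == 1) / p_e
# Shogenji-style coherence of H1 and H2: joint over product of marginals
shogenji_h1_h2 = p(lambda w: w[1] == 1 and w[2] == 1) / (p_h1 * p_h2)
```

Here P(H1|E) = 0.7 > P(H1) = 0.5, the coherence ratio of H1 and H2 is 1.6 > 1, and P(H2|E) = 0.7 > P(H2) = 0.5: confirmation of H1 by E is transmitted to the coherent H2.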
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) results in a higher likelihood that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in a restricted scenario (Olsson and Schubert 2007, Synthese 157:297–308). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where it is any finite number of witnesses that give equivalent reports. It is shown that only the so-called Shogenji measure is reliability conducive in this scenario. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.
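The Shogenji measure itself is simple: it is the ratio of the joint probability of a set of propositions to the product of their individual probabilities, so values above 1 indicate positive coherence, 1 indicates independence, and values below 1 indicate incoherence. A minimal sketch, modelling propositions as sets of equiprobable possible worlds:

```python
# Shogenji coherence: P(A1 & ... & An) / (P(A1) * ... * P(An)).
# Propositions are modelled as sets of equiprobable possible worlds.
from functools import reduce

def shogenji(n_worlds, *props):
    joint = len(reduce(set.intersection, props)) / n_worlds
    product = 1.0
    for prop in props:
        product *= len(prop) / n_worlds
    return joint / product

# Four equiprobable worlds {0, 1, 2, 3}:
equivalent  = shogenji(4, {0, 1}, {0, 1})   # maximally agreeing reports -> 2.0
independent = shogenji(4, {0, 1}, {0, 2})   # probabilistically independent -> 1.0
```

Equivalent reports (the case of witnesses giving the same testimony, as in the scenario above) maximize the measure for fixed marginals, while independent propositions score exactly 1.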
Putnam (1975) infers from the success of a scientific theory to its approximate truth and the reference of its key term. Laudan (1981) objects that some past theories were successful, and yet their key terms did not refer, so they were not even approximately true. Kitcher (1993) replies that the past theories are approximately true because their working posits are true, although their idle posits are false. In contrast, I argue that successful theories which cohere with each other are approximately true, and that their key terms refer. My position is immune to Laudan’s counterexamples to Putnam’s inference and yields a solution to a problem with Kitcher’s position.
The question of coherence of rules for changing degrees of belief in the light of new evidence is studied, with special attention being given to cases in which evidence is uncertain. Belief change by the rule of conditionalization on an appropriate proposition and belief change by "probability kinematics" on an appropriate partition are shown to have like status.
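The two belief-change rules compared above have standard formulations: strict conditionalization sets the posterior to the prior restricted to the evidence proposition and renormalized, while Jeffrey conditionalization ("probability kinematics") rescales the prior within each cell of a partition to match newly assigned cell probabilities. A minimal sketch over a toy world space, showing that the two rules agree when the evidence cell receives probability 1:

```python
# Strict conditionalization vs. Jeffrey conditionalization on a partition.

def conditionalize(prior, e):
    # posterior(w) = prior(w) / P(e) for w in e, else 0
    pe = sum(p for w, p in prior.items() if w in e)
    return {w: (p / pe if w in e else 0.0) for w, p in prior.items()}

def jeffrey(prior, partition):
    # partition: list of (cell, new_probability) pairs covering all worlds;
    # within each cell, the prior is rescaled to the cell's new probability
    post = {}
    for cell, q in partition:
        p_cell = sum(prior[w] for w in cell)
        for w in cell:
            post[w] = prior[w] * q / p_cell
    return post

prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
e, not_e = {"w1", "w2"}, {"w3", "w4"}

strict    = conditionalize(prior, e)                    # certain evidence
kinematic = jeffrey(prior, [(e, 0.8), (not_e, 0.2)])    # uncertain evidence
certain   = jeffrey(prior, [(e, 1.0), (not_e, 0.0)])    # limiting case
```

When the new probability of the evidence cell is 1, Jeffrey conditionalization reduces to strict conditionalization, which is one way of seeing why the two rules have like status as coherent belief-change rules.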
This paper examines how coherence of the contents of evidence affects the transmission of probabilistic support from the evidence to the hypothesis. It is argued that coherence of the contents in the sense of the ratio of the positive intersection reduces the transmission of probabilistic support, though this negative impact of coherence may be offset by other aspects of the relations among the contents. It is argued further that there is no broader conception of coherence whose impact on the transmission of probabilistic support is never offset by other aspects of the relations among the contents. The paper also examines reasons for the contrary impression that coherence of the contents increases the transmission of probabilistic support, especially in the special case where the hypothesis to evaluate is the conjunction of the contents of evidence.
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) of a set of testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that the Shogenji measure of coherence is reliability conducive in restricted scenarios (e.g., Olsson and Schubert, Synthese, 157:297–308, 2007). In this article, I investigate whether the Shogenji measure, or any other coherence measure, is reliability conducive in general. An impossibility theorem is proved to the effect that this is not the case. I conclude that coherence is not reliability conducive.
We discuss several features of coherent choice functions—where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal form decision problems under uncertainty—where only the probability component of S is indeterminate and utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities regardless of the “shape” or “connectedness” of the sets of probabilities. We axiomatize the theory of choice functions and show these axioms are necessary for coherence. The axioms are sufficient for coherence using a set of probability/almost-state-independent utility pairs. We give sufficient conditions when a choice function satisfying our axioms is represented by a set of probability/state-independent utility pairs with a common utility.
Seismic coherence is a routine measure of seismic reflection similarity for interpreters seeking structural boundary and discontinuity features that may not be properly highlighted on original amplitude volumes. One mostly wishes to use the broadest band seismic data for interpretation. However, because of thickness tuning effects, spectral components of specific frequencies can highlight features of certain thicknesses with higher signal-to-noise ratio than others. Seismic stratigraphic features may be buried in the full-bandwidth data, but can be “lit up” at certain spectral components. For the same reason, coherence attributes computed from spectral voice components also often provide sharper images, with the “best” component being a function of the tuning thickness and the reflector alignment across faults. Although one can corender three coherence images using red-green-blue blending, a display of the information contained in more than three volumes in a single image is difficult. We address this problem by computing covariance matrices for each spectral component and adding them together, resulting in a “multispectral” coherence algorithm. The multispectral coherence images provide better images of channel incisement, and they are less noisy than those computed from the full-bandwidth data. In addition, multispectral coherence also provides a significant advantage over RGB blended volumes. The information content from unlimited spectral voices can be combined into one volume, which is useful for a posteriori/further processing, such as color corendering display with other related attributes, such as petrophysics parameters plotted against a polychromatic color bar. We demonstrate the value of multispectral coherence by comparing it with the RGB blended volumes and coherence computed from a spectrally balanced, full-bandwidth seismic amplitude volume from a megamerge survey acquired over the Red Fork Formation of the Anadarko Basin, Oklahoma.
We provide a self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifies the connection of coherence and proper scoring rules to Bregman divergence.
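A numerical instance of the coherence/non-domination link is easy to exhibit with the Brier score, a standard proper scoring rule: an incoherent forecast for the partition {A, not-A} that assigns 0.6 to each cell (probabilities summing to 1.2) incurs a strictly worse penalty in every state of the world than the coherent forecast (0.5, 0.5). The example is mine, not from the paper.

```python
# Brier-score domination of an incoherent forecast by a coherent one.

def brier(forecast, state):
    # forecast: (p_A, p_notA); state: 1 if A is true, else 0.
    # Penalty is the sum of squared distances to the truth values.
    truth = (state, 1 - state)
    return sum((f - t) ** 2 for f, t in zip(forecast, truth))

incoherent = (0.6, 0.6)   # probabilities sum to 1.2: incoherent
coherent   = (0.5, 0.5)   # a coherent rival forecast

# Incoherent penalty is 0.52 in both states; coherent penalty is 0.50.
dominated = all(brier(coherent, s) < brier(incoherent, s) for s in (0, 1))
```

This is the directly checkable half of the theorem: incoherent forecasts are dominated under a proper scoring rule; the converse (coherent forecasts are never dominated) is what requires the Bregman-divergence machinery.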
Coherentism maintains that coherent beliefs are more likely to be true than incoherent beliefs, and that coherent evidence provides more confirmation of a hypothesis when the evidence is made coherent by the explanation provided by that hypothesis. Although probabilistic models of credence ought to be well-suited to justifying such claims, negative results from Bayesian epistemology have suggested otherwise. In this essay we argue that the connection between coherence and confirmation should be understood as a relation mediated by the causal relationships among the evidence and a hypothesis, and we offer a framework for doing so by fitting together probabilistic models of coherence, confirmation, and causation. We show that the causal structure among the evidence and hypothesis is sometimes enough to determine whether the coherence of the evidence boosts confirmation of the hypothesis, makes no difference to it, or even reduces it. We also show that, ceteris paribus, it is not the coherence of the evidence that boosts confirmation, but rather the ratio of the coherence of the evidence to the coherence of the evidence conditional on a hypothesis.
It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer (1997, “Probability and Coherence Justification,” Southern Journal of Philosophy 35: 463–72), Olsson (2002, “What is the Problem of Coherence and Truth?” Journal of Philosophy XCIX: 246–72), Olsson (2005, Against Coherence: Truth, Probability, and Justification, Oxford University Press), and Bovens and Hartmann (2003, Bayesian Epistemology, Oxford University Press) prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation.
Evidence on the coherence between emotion and facial expression in adults from laboratory experiments is reviewed. High coherence has been found in several studies between amusement and smiling; low to moderate coherence between other positive emotions and smiling. The available evidence for surprise and disgust suggests that these emotions are accompanied by their “traditional” facial expressions, and even components of these expressions, only in a minority of cases. Evidence concerning sadness, anger, and fear is very limited. For sadness, one study suggests that high emotion–expression coherence may exist in specific situations, whereas for anger and fear, the evidence points to low coherence. Insufficient emotion intensity and inhibition of facial expressions seem unable to account for the observed dissociations between emotion and facial expression.
What is the relation between coherence and truth? This paper rejects numerous answers to this question, including the following: truth is coherence; coherence is irrelevant to truth; coherence always leads to truth; coherence leads to probability, which leads to truth. I will argue that coherence of the right kind leads to at least approximate truth. The right kind is explanatory coherence, where explanation consists in describing mechanisms. We can judge that a scientific theory is progressively approximating the truth if it is increasing its explanatory coherence in two key respects: broadening by explaining more phenomena and deepening by investigating layers of mechanisms. I sketch an explanation of why deepening is a good epistemic strategy and discuss the prospect of deepening knowledge in the social sciences and everyday life.
Striving for a probabilistic explication of coherence, scholars proposed a distinction between agreement and striking agreement. In this paper I argue that only the former should be considered a genuine concept of coherence. In a second step the relation between coherence and reliability is assessed. I show that it is possible to concur with common intuitions regarding the impact of coherence on reliability in various types of witness scenarios by means of an agreement measure of coherence. Highlighting the need to separate the impact of coherence and specificity on reliability it is finally shown that a recently proposed vindication of the Shogenji measure qua measure of coherence vanishes.
Seismic coherence is commonly used to delineate structural and stratigraphic discontinuities. We generally use full-bandwidth seismic data to calculate coherence. However, some seismic stratigraphic features may be buried in this full-bandwidth data but can be highlighted by certain spectral components. Due to thin-bed tuning phenomena, discontinuities in a thicker stratigraphic feature may be tuned and thus better delineated at a lower frequency, whereas discontinuities in the thinner units may be tuned and thus better delineated at a higher frequency. Additionally, whether due to the seismic data quality or underlying geology, certain spectral components exhibit higher quality over other components, resulting in correspondingly higher quality coherence images. Multispectral coherence provides an effective tool to exploit these observations. We evaluate the performance of multispectral coherence using different spectral decomposition methods: the continuous wavelet transform, maximum entropy, amplitude volume technique, and spectral probe. Applications to a 3D seismic data volume indicate that multispectral coherence images are superior to full-bandwidth coherence, providing better delineation of incised channels with less noise. From the CWT experiments, we find that providing exponentially spaced CWT components yields better coherence images than equally spaced components for the same computation cost. The multispectral coherence image computed using maximum entropy spectral voices further improves the resolution of the thinner channels and small-scale features. The coherence from the AVT data set provides continuous images of thicker channel boundaries but poor images of the small-scale features inside the thicker channels. Additionally, multispectral coherence computed using the nonlinear spectral probes exhibits a more balanced response and reveals clear small-scale geologic features inside the thicker channel. However, because amplitudes are not preserved in the nonlinear spectral probe decomposition, noise in the noisier shorter period components has an equal weight when building the covariance matrix, resulting in increased noise in the generated multispectral coherence images.
Recent work on rationality has been increasingly attentive to “coherence requirements”, with heated debates about both the content of such requirements and their normative status (e.g., whether there is necessarily reason to comply with them). Yet there is little to no work on the metanormative status of coherence requirements. Metaphysically: what is it for two or more mental states to be jointly incoherent, such that they are banned by a coherence requirement? In virtue of what are some putative requirements genuine and others not? Epistemologically: how are we to know which of the requirements are genuine and which aren’t? This paper tries to offer an account that answers these questions. On my account, the incoherence of a set of attitudinal mental states is a matter of its being (partially) constitutive of the mental states in question that, for any agent that holds these attitudes jointly, the agent is disposed, when conditions of full transparency are met, to give up at least one of the attitudes.
The impossibility results of Bovens and Hartmann (2003) and Olsson (2005) call into question the strength of the connection between coherence and truth. As part of the inquiry into this alleged link, I define a notion of degree of truth-conduciveness, relevant for measuring the usefulness of coherence measures as rules-of-thumb for assigning probabilities in situations of partial knowledge. I use the concept to compare the viability of some of the measures of coherence that have been suggested so far under different circumstances. It turns out that all of these, including the prior, are just about equally good in cases of very little knowledge. Nevertheless, there are differences in when they are applicable, and they also depart more from each other when more knowledge is added.
This paper presents a conception of the self partially in terms of a particular notion of preference. It develops a coherentist account of when one's preferences are "authorized", or sanctioned as one's own, and presents a coherence theory of autonomous action. The view presented solves certain problems with hierarchical accounts of freedom, such as Harry Frankfurt's.
Over the years several non-equivalent probabilistic measures of coherence have been discussed in the philosophical literature. In this paper we examine these measures with respect to their empirical adequacy. Using test cases from the coherence literature as vignettes for psychological experiments we investigate whether the measures can predict the subjective coherence assessments of the participants. It turns out that the participants’ coherence assessments are best described by Roche’s coherence measure based on Douven and Meijs’ average mutual support approach and the conditional probability.
Evolutionary theory coheres with its neighboring theories, such as the theory of plate tectonics, molecular biology, electromagnetic theory, and the germ theory of disease. These neighboring theories were previously unconceived, but they were later conceived, and then they cohered with evolutionary theory. Since evolutionary theory has been strengthened by its several neighboring theories that were previously unconceived, it will be strengthened by infinitely many hitherto unconceived neighboring theories. This argument for evolutionary theory echoes the problem of unconceived alternatives. Ironically, however, the former recommends that we take the realist attitude toward evolutionary theory, while the latter recommends that we take the antirealist attitude toward it.
In this paper I discuss the foundations of a formal theory of coherent and conservative belief change that is suitable to be used as a method for constructing iterated changes of belief, sensitive to the history of earlier belief changes, and independent of any form of dispositional coherence. I review various ways to conceive the relationship between the beliefs actually held by an agent and her belief change strategies, show the problems they suffer from, and suggest that belief states should be represented by unary revision functions that take sequences of inputs. Three concepts of coherence implicit in current theories of belief change are distinguished: synchronic, diachronic and dispositional coherence. Diachronic coherence is essentially identified with what is known as conservatism in epistemology. The present paper elaborates on the philosophical motivation of the general framework; formal details and results are provided in a companion paper.
For more than three decades, research into the psycholinguistics of pronoun interpretation has argued that hearers use various interpretation ‘preferences’ or ‘strategies’ that are associated with specific linguistic properties of antecedent expressions. This focus is a departure from the type of approach outlined in Hobbs, who argues that the mechanisms supporting pronoun interpretation are driven predominantly by semantics, world knowledge and inference, with particular attention to how these are used to establish the coherence of a discourse. On the basis of three new experimental studies, we evaluate a coherence-driven analysis with respect to four previously proposed interpretation biases—based on grammatical role parallelism, thematic roles, implicit causality, and subjecthood—and argue that the coherence-driven analysis can explain the underlying source of the biases and predict in what contexts evidence for each will surface. The results further suggest that pronoun interpretation is incrementally influenced by probabilistic expectations that hearers have regarding what coherence relations are likely to ensue, together with their expectations about what entities will be mentioned next, which, crucially, are conditioned on those coherence relations.
Reliabilism is an intuitive and attractive view about epistemic justification. However, it has many well-known problems. I offer a novel condition on reliabilist theories of justification. This method coherence condition requires that a method be appropriately tested by appeal to a subject’s other belief-forming methods. Adding this condition to reliabilism provides a solution to epistemic circularity worries, including the bootstrapping problem.
This paper explores cosmopsychism’s explanatory aspirations from a programmatic perspective. The bulk of the text consists of an argument in favor of the conclusion that cosmopsychism suffers from no insurmountable individuation problem. I argue that the widespread tendency to view the individuation problem as a mirror-image of micropsychism’s combination problem is mistaken. In particular, what renders the combination problem insolvable, namely, the commitment to the coupling of phenomenal constitution with phenomenal inclusion, is, from the standpoint of cosmopsychism, an entirely nonmandatory assumption. I proceed to show that severing this unhappy coupling is the key for defending cosmopsychism against the charge of theoretical incoherence. Moreover, I argue that a successful defense against such an accusation could be mounted regardless of whether or not we assume cosmic consciousness to be perspectival in nature. In addition, the paper touches upon another foundational issue: cautioning against the popular tendency to identify cosmopsychism’s monism with a mereological unity-in-diversity, and motivating an alternative conception which I call generative monism. Finally, as befitting this volume, I pause to reflect upon the question of which schools of Hindu philosophy might tally best, and connect most fruitfully, with cosmopsychism as I understand it.
This paper considers an application of work on probabilistic measures of coherence to inference to the best explanation. Rather than considering information reported from different sources, as is usually the case when discussing coherence measures, the approach adopted here is to use a coherence measure to rank competing explanations in terms of their coherence with a piece of evidence. By adopting such an approach, IBE can be made more precise, and so a major objection to this mode of reasoning can be addressed. Advantages of the coherence-based approach are pointed out by comparing it with several other ways to characterize ‘best explanation’ and showing that it takes into account their insights while overcoming some of their problems. The consequences of adopting this approach for IBE are discussed in the context of recent discussions about the relationship between IBE and Bayesianism.
Locke has been accused of failing to have a coherent understanding of consciousness, since it can be identical neither to reflection nor to ordinary perception without contradicting other important commitments. I argue that the account of consciousness is coherent once we see that, for Locke, perceptions of ideas are complex mental acts and that consciousness can be seen as a special kind of self-referential mental state internal to any perception of an idea.
Nonmonotonic reasoning is often claimed to mimic human common sense reasoning. Only a few studies, though, have investigated this claim empirically. We report four experiments which investigate three rules of SYSTEM P, namely the AND, the LEFT LOGICAL EQUIVALENCE, and the OR rule. The actual inferences of the subjects are compared with the coherent normative upper and lower probability bounds derived from a non-infinitesimal probability semantics of SYSTEM P. We found a relatively good agreement of human reasoning and principles of nonmonotonic reasoning. Contrary to the results reported in the ‘heuristics and biases’ tradition, the subjects committed relatively few upper bound violations (conjunction fallacies).
A coherent story is a story that fits together well. This notion plays a central role in the coherence theory of justification and has been proposed as a criterion for scientific theory choice. Many attempts have been made to give a probabilistic account of this notion. A proper account of coherence must not start from some partial intuitions, but should pay attention to the role that this notion is supposed to play within a particular context. Coherence is a property of an information set that boosts our confidence that its content is true, ceteris paribus, when we receive information from independent and partially reliable sources. We construct a measure c_r that relies on hypothetical sources with certain idealized characteristics. A maximally coherent information set, i.e. a set of equivalent propositions, affords a maximal confidence boost. c_r is the ratio of the actual confidence boost over the confidence boost that we would have received, had the information been presented in the form of maximally coherent information, ceteris paribus. This measure is functionally dependent on the degree of reliability r of the sources. We use c_r to construct a coherence quasi-ordering over information sets S and S′: S is no less coherent than S′ just in case c_r(S) is not smaller than c_r(S′) for any value of the reliability parameter r. We show that, on our account, the coherence of the story about the world gives us a reason to believe that the story is true and that the coherence of a scientific theory, construed as a set of models, is a proper criterion for theory choice.
This paper argues for a coherentist theory of the justification of evidentiary judgments in law, according to which a hypothesis about the events being litigated is justified if and only if it is such that an epistemically responsible fact-finder might have accepted it as justified by virtue of its coherence in like circumstances. It claims that this version of coherentism has the resources to address a main problem facing coherence theories of evidence and legal proof, namely, the problem of the coherence bias. The paper then develops an aretaic approach to the standards of epistemic responsibility which govern legal fact-finding. It concludes by exploring some implications of the proposed account of the justification of evidentiary judgments in law for the epistemology of legal proof.
The aim of this essay is to develop a coherence theory for the justification of evidentiary judgments in law. The main claim of the coherence theory proposed in this article is that a belief about the events being litigated is justified if and only if it is a belief that an epistemically responsible fact-finder might hold by virtue of its coherence in like circumstances. The article argues that this coherentist approach to evidence and legal proof has the resources to meet some of the main objections that may be addressed against attempts to analyze the justification of evidentiary judgments in law in coherentist terms. It concludes by exploring some implications of the proposed version of legal coherentism for a jurisprudence of evidence.
Why should we avoid incoherence? An influential view tells us that incoherent combinations of attitudes are such that it is impossible for all of those attitudes to be simultaneously vindicated by the evidence. But it is not clear whether this view successfully explains what is wrong with certain akratic doxastic states. In this paper I flesh out an alternative response to that question, one according to which the problem with incoherent combinations of attitudes is that it is impossible for all of those attitudes to be simultaneously knowledgeable. This alternative response explains what is wrong with akratic combinations of attitudes using commonly accepted epistemological theses. The paper also shows how this proposal is able to explain the badness of incoherent combinations involving the absence of attitudes, suspended judgment, and credence. These explanations deploy the notions of knowledge and being in a position to know, instead of the notion of responding properly to the evidence. Finally, I suggest that this picture can be generalized to the realm of practical rationality as well.
Being incoherent is often viewed as a paradigm kind of irrationality. Numerous authors attempt to explain the distinct-seeming failure of incoherence by positing a set of requirements of structural rationality. I argue that the notion of coherence that structural requirements are meant to capture is very slippery, and that intuitive judgments – in particular, a charge of a distinct, blatant kind of irrationality – are very imperfectly correlated with respecting the canon of structural requirements. I outline an alternative strategy for explaining our patterns of normative disapproval, one appealing to feasible dispositions to conform to substantive, non-structural norms. A wide range of paradigmatic cases of incoherence, I will argue, involve manifesting problematic dispositions: dispositions that show up across a range of cases as blatant-seeming normative failures.