Suppose one hundred prisoners are in a yard under the supervision of a guard, and at some point, ninety-nine of them collectively kill the guard. If, after the fact, a prisoner is picked at random and tried, the probability of his guilt is 99%. But despite the high probability, the statistical chances, by themselves, seem insufficient to justify a conviction. The question is why. Two arguments are offered. The first, decision-theoretic argument shows that a conviction based solely on the statistics in the prisoner scenario is unacceptable so long as the goal of expected utility maximization is combined with fairness constraints. The second, risk-based argument shows that a conviction based solely on the statistics in the prisoner scenario allows the risk of mistaken conviction to rise unacceptably high. The same, by contrast, cannot be said of convictions based solely on DNA evidence or eyewitness testimony. A noteworthy feature of the two arguments is that they are not confined to criminal trials and can in fact be extended to civil trials.
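The arithmetic driving the prisoner scenario above can be checked with a short simulation. This is an illustrative sketch only; the function name, sample size, and seed are my own choices, not from the paper.

```python
import random

def guilt_probability(n_prisoners=100, n_guilty=99, n_samples=100_000, seed=0):
    """Estimate the chance that a randomly picked prisoner is guilty."""
    rng = random.Random(seed)
    # Model the guilty prisoners as indices 0 .. n_guilty - 1.
    hits = sum(rng.randrange(n_prisoners) < n_guilty for _ in range(n_samples))
    return hits / n_samples

# With 99 of 100 prisoners guilty, the estimate sits near 0.99 —
# the "naked statistical evidence" figure the abstract discusses.
```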
Many oppose the use of profile evidence against defendants at trial, even when the statistical correlations are reliable and the jury is free from prejudice. The literature has struggled to justify this opposition. We argue that admitting profile evidence is objectionable because it violates what we call “equal protection”—that is, a right of innocent defendants not to be exposed to higher ex ante risks of mistaken conviction compared to other innocent defendants facing similar charges. We also show why admitting other forms of evidence, such as eyewitness, trace, and motive evidence, does not violate equal protection.
Smith argues that, unlike other forms of evidence, naked statistical evidence fails to satisfy normic support. This is his solution to the puzzles of statistical evidence in legal proof. This paper focuses on Smith’s claim that DNA evidence in cold-hit cases does not satisfy normic support. I argue that if this claim is correct, virtually no other form of evidence used at trial can satisfy normic support. This is troublesome. I discuss a few ways in which Smith can respond.
Many philosophers have pointed out that statistical evidence, or at least some forms of it, lacks desirable epistemic or non-epistemic properties, and that this should make us wary of litigation in which the case against the defendant rests in whole or in part on statistical evidence. Others have responded that such broad reservations about statistical evidence are overly restrictive since appellate courts have expressed nuanced views about statistical evidence. In an effort to clarify and reconcile, I put forward an interpretive analysis of why statistical evidence should raise concerns in some cases but not others. I argue that when there is a mismatch between the specificity of the evidence and the expected specificity of the accusation, statistical evidence—as any other kind of evidence—should be considered insufficient to sustain a conviction. I rely on several stylized court cases to illustrate the explanatory power of this analysis.
According to the principle of epistemic closure, knowledge is closed under known implication. The principle is intuitive, but it is problematic in some cases. Suppose you know you have hands and you know that ‘I have hands’ implies ‘I am not a brain-in-a-vat’. Does it follow that you know you are not a brain-in-a-vat? It seems not; it should not be so easy to refute skepticism. In this and similar cases, we are confronted with a puzzle: epistemic closure is an intuitive principle, but at times, it does not seem that we know by implication. In response to this puzzle, the literature has been mostly polarized between those who are willing to do away with epistemic closure and those who think we cannot live without it. But there is a third way. Here I formulate a restricted version of the principle of epistemic closure. In the standard version, the principle can range over any proposition; in the restricted version, it can only range over those propositions that are within the limits of a given epistemic inquiry and that do not constitute the underlying assumptions of the inquiry. If we adopt the restricted version, I argue, we can preserve the advantages associated with closure while at the same time avoiding the puzzle I have described. My discussion also yields an insight into the nature of knowledge. I argue that knowledge is best understood as a topic-restricted notion, and that such a conception is a natural one given our limited cognitive resources.
This note discusses three issues that Allen and Pardo believe to be especially problematic for a probabilistic interpretation of standards of proof: (1) the subjectivity of probability assignments; (2) the conjunction paradox; and (3) the non-comparative nature of probabilistic standards. I offer a reading of probabilistic standards that avoids these criticisms.
The primary aim of this chapter is to explain the nature of evidential reasoning, the characteristic difficulties it raises, and the tools available to address those difficulties. Our focus is on evidential reasoning in criminal cases. There is an extensive scholarly literature on these topics, and it is a secondary aim of the chapter to provide readers with the means to find their way in historical and ongoing debates.
The literature on algorithmic fairness has examined exogenous sources of bias, such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias, as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we show that informational richness is the engine that drives improvements in the performance of a predictive algorithm, in terms of both accuracy and fairness. The centrality of informational richness suggests that classification parity, a popular criterion of algorithmic fairness, should be given relatively little weight. But we caution that the centrality of informational richness should be taken with a grain of salt in light of practical limitations, in particular the so-called bias-variance trade-off.
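The tension behind the impossibility theorems mentioned above can be illustrated with a toy calculation (the numbers and the function are invented for illustration and are not drawn from the paper's simulation): when two groups have different base rates, even a perfectly calibrated score yields unequal false-positive rates, so calibration and classification parity cannot both hold.

```python
def expected_fpr(strata, threshold=0.5):
    """Expected false-positive rate for a group under calibrated scoring.

    strata: list of (score, count) pairs, where each score is calibrated,
    i.e. score == P(positive) within that stratum.
    """
    negatives = sum(count * (1 - score) for score, count in strata)
    false_pos = sum(count * (1 - score) for score, count in strata
                    if score >= threshold)  # predicted positive, actually negative
    return false_pos / negatives

group_a = [(0.2, 70), (0.8, 30)]  # hypothetical group with a low base rate
group_b = [(0.2, 30), (0.8, 70)]  # hypothetical group with a high base rate
# Identical calibrated scores and threshold, yet the groups' expected
# false-positive rates diverge — classification parity fails.
fpr_a, fpr_b = expected_fpr(group_a), expected_fpr(group_b)
```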
The legal scholar John Henry Wigmore asserted that cross-examination is ‘the greatest legal engine ever invented for the discovery of truth.’ Was Wigmore right? Instead of addressing this question upfront, this paper offers a conceptual ground-clearing. It is difficult to say whether Wigmore was right or wrong without becoming clear about what we mean by cross-examination, how it operates at trial, and what it is intended to accomplish. Despite the growing importance of legal epistemology, there is virtually no philosophical work that discusses cross-examination and its scope and function at trial. This paper makes a first attempt at clearing the ground by articulating an analysis of cross-examination using probability theory and Bayesian networks. This analysis relies on the distinction between undercutting and rebutting evidence. A preliminary assessment of the truth-seeking function of cross-examination is offered at the end of the paper.
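The distinction between rebutting and undercutting evidence invoked above can be sketched in odds-form Bayesian terms (the numbers are invented and this is not the paper's model): rebutting evidence bears on guilt directly, while undercutting evidence discounts the likelihood ratio the original testimony would otherwise carry.

```python
def update(prior, likelihood_ratio):
    """Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.5
after_testimony = update(prior, 8.0)           # testimony strongly favors guilt
after_rebuttal = update(after_testimony, 1/8)  # rebutting: counter-evidence on guilt itself
after_undercut = update(prior, 1.5)            # undercutting: testimony's force is discounted
```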
I comment on two analyses of the Simonshaven case: one by Prakken (2019), based on arguments, and the other by van Koppen and Mackor (2019), based on scenarios (or stories, narratives). I argue that both analyses lack a clear account of proof beyond a reasonable doubt because they lack a clear account of the notion of plausibility. To illustrate this point, I focus on the defense argument during the appeal trial and show that both analyses face difficulties in modeling key features of this argument.
Epistemic closure under known implication is the principle that knowledge of "p" and knowledge of "p implies q", together, imply knowledge of "q". This principle is intuitive, yet several putative counterexamples have been formulated against it. This paper addresses the question of why epistemic closure is both intuitive and prone to counterexamples. In particular, the paper examines whether probability theory can offer an answer to this question, considering four strategies. The first probability-based strategy rests on the accumulation of risks. The problem with this strategy is that risk accumulation cannot accommodate certain counterexamples to epistemic closure. The second strategy is based on the idea of evidential support, that is, a piece of evidence supports a proposition whenever it increases the probability of the proposition. This strategy makes progress and can accommodate certain putative counterexamples to closure. However, it also gives rise to a number of counterintuitive results. Finally, there are two broadly probabilistic strategies, one based on the idea of resilient probability and the other on the idea of assumptions that are taken for granted. These strategies are promising but are prone to some of the shortcomings of the second strategy. All in all, I conclude that each strategy fails. Probability theory, then, is unlikely to offer the account we need.
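The evidential-support relation invoked above (e supports p whenever it raises the probability of p) can be made concrete with a short Bayes calculation; the numbers here are invented for illustration only.

```python
def posterior(prior, p_e_given_p, p_e_given_not_p):
    """P(p | e) via Bayes' theorem for a binary hypothesis p."""
    numerator = p_e_given_p * prior
    return numerator / (numerator + p_e_given_not_p * (1 - prior))

prior = 0.3
post = posterior(prior, 0.9, 0.2)  # e is far likelier if p is true
# post exceeds prior, so e supports p on the evidential-support reading.
```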
This handbook offers a deep analysis of the main forms of legal reasoning and argumentation from both a logical-philosophical and a legal perspective. These forms are covered in an exhaustive and critical fashion, and the handbook is accordingly divided into three parts: the first introduces and discusses the basic concepts of practical reasoning; the second discusses the main general forms of reasoning and argumentation relevant to legal discourse; the third looks at their application in law as well as at the different areas of legal reasoning. The handbook's division into three parts reflects its conceptual architecture, since legal reasoning and argumentation are considered in relation to the more general types of reasoning.