The paper explores the handling of singular analogy in quantitative inductive logics. It concentrates on two analogical patterns coextensive with the traditional argument from analogy: perfect and imperfect analogy. Each is examined within Carnap’s λ-continuum, Carnap’s and Stegmüller’s λ-η continuum, Carnap’s Basic System, Hintikka’s α-λ continuum, and Hintikka’s and Niiniluoto’s K-dimensional system. It is argued that these logics handle perfect analogies with ease, and that imperfect analogies, while unmanageable in some logics, are quite manageable in others. The paper concludes with a modification of the K-dimensional system that synthesizes independent proposals by Kuipers and Niiniluoto.
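For orientation, the predictive rule at the heart of Carnap’s λ-continuum, the first of the systems listed above, can be stated as follows; the notation ($n$, $n_i$, $K$, $\lambda$) is supplied here for illustration and is not drawn from the paper. If $n_i$ of $n$ observed individuals belong to kind $i$, one of $K$ mutually exclusive kinds, the probability that the next individual belongs to kind $i$ is

$$p_\lambda \;=\; \frac{n_i + \lambda/K}{n + \lambda}.$$

As $\lambda \to \infty$ the rule ignores the evidence and returns the purely logical value $1/K$; at $\lambda = 0$ it reduces to the straight rule $n_i/n$. Analogy effects concern how such predictive values should respond to evidence about similar kinds.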
Some decisions result in cognitive consequences such as information gained and information lost. The focus of this study, however, is decisions with consequences that are partly or completely noncognitive. These decisions are typically referred to as ‘real-life decisions’. According to a common complaint, the challenges of real-life decision making cannot be met by decision theory. This complaint has at least two principal motives. One is the maximizing objection that to require agents to determine the optimal act under real-world constraints is unrealistic. The other is the precision objection that the numeric requirements for applying decision theory are overly demanding for real-life decisions. Responses to both objections are aired in the History section of this chapter. The maximizing objection is addressed with reference to work by Weirich and Pollock, while the precision objection is countered via a proposal by Kyburg and another by Gärdenfors and Sahlin. However, the Current Research section urges a different response to the precision objection by introducing a comparative version of decision theory. Drawing on Chu and Halpern’s notion of generalized expected utility, this version of decision theory permits many choices to be based on merely comparative plausibilities and utilities. Finally, the Further Research section undertakes an open-ended exploration of three of the assumptions upon which this form of decision theory (and many others) is based: transitivity, independence, and plausibilistic decision rules.
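Since the Current Research section invokes Chu and Halpern’s generalized expected utility, a schematic statement may help; the symbols used here ($S$, $\mathrm{Pl}$, $u$, $\oplus$, $\otimes$) follow the general shape of their framework rather than the chapter’s own notation. For an act $a$ defined over a set of states $S$,

$$\mathrm{GEU}(a) \;=\; \bigoplus_{s \in S} \mathrm{Pl}(s) \otimes u(a(s)),$$

where $\mathrm{Pl}$ is a plausibility measure, $u$ a utility function on outcomes, and $\oplus$, $\otimes$ generalize addition and multiplication. Standard expected utility is the special case in which $\mathrm{Pl}$ is a probability measure, $\oplus$ is $+$, and $\otimes$ is $\times$; merely comparative plausibilities and utilities correspond to weaker choices of the underlying domains.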
This volume recreates the received notion of reflective equilibrium. It reconfigures reflective equilibrium as both a cognitive ideal and a method for approximating this ideal. The ideal of reflective equilibrium is restructured using the concept of discursive strata, which are formed by sentences and differentiated by function. Sentences that perform the same kind of linguistic function constitute a stratum. The book shows how moral discourse can be analyzed into phenomenal, instrumental, and teleological strata, and the ideal of reflective equilibrium reworked in these terms. In addition, the work strengthens the method of reflective equilibrium by harnessing the resources of decision theory and inductive logic. It launches a comparative version of decision theory and employs this framework as a guide to moral theory choice. It also recruits quantitative inductive logic to inform a standard of inductive cogency. When used in tandem with comparative decision theory, this standard can aid in the effort to turn the undesirable condition of reflective disequilibrium into reflective equilibrium.
Most ethical decisions are conditioned by formidable uncertainty. Decision makers may lack reliable information about relevant facts, the consequences of actions, and the reactions of other people. Resources for dealing with uncertainty are available from standard forms of decision theory, but successful application to decisions under risk requires a great deal of quantitative information: point-valued probabilities of states and point-valued utilities of outcomes. When this information is not available, this paper recommends the use of a form of decision theory that operates on a bare minimum of information inputs: comparative plausibilities of states and comparative utilities of outcomes. In addition, it proposes a comparative strategy for dealing with second-order uncertainty. The paper illustrates its proposal with reference to a well-known ethical dilemma: Kant’s life-saving lie.
The focus of this study is cognitive choice: the selection of one cognitive option (a hypothesis, a theory, or an axiom, for instance) rather than another. The study proposes that cognitive choice should be based on the plausibilities of states posited by rival cognitive options and the utilities of these options' information outcomes. The proposal introduces a form of decision theory that is novel because comparative; it permits many choices among cognitive options to be based on merely comparative plausibilities and utilities. This form of decision theory intersects with recommendations by advocates of decision theory for cognitive choice, on the one hand, and defenders of comparative evaluation of scientific hypotheses and theories, on the other. But it differs from prior decision-theoretic proposals because it requires no more than minimal precision in specifying plausibilities and utilities. And it differs from comparative proposals because none has shown how comparative evaluations can be carried out within a decision-theoretic framework.
Theory choice can be approached in at least four ways. One of these calls for the application of decision theory, and this article endorses this approach. But applying standard forms of decision theory imposes an overly demanding standard of numeric information, supposedly satisfied by point-valued utility and probability functions. To ameliorate this difficulty, a version of decision theory that requires merely comparative utilities and plausibilities is proposed. After a brief summary of this alternative, the article illustrates how comparative decision theory affords a rational reconstruction of decisions made by exemplary scientists in two cases of theory choice: Buffon’s law and the luminiferous ether. It also offers a rational reconstruction of two cases of theory diagnosis: Mendeleev’s anomalies and the Pioneer anomaly.
This article tackles a number of puzzles related to Aristotle’s practical syllogism, notably the relationship between deliberation and the practical syllogism, the distinction between deliberative and reconstructive practical syllogisms, and the nature of the conclusion of the practical syllogism.
This chapter explores arguments from analogy containing ethical predicates like 'just', 'courageous', and 'honest'. The approach is Wittgensteinian in a double sense. The role of paradigm cases in ethical discourse is emphasized, first of all, and the inductive logics to be employed spring from Wittgenstein's remarks on probability (1922). Although these logics rely on a semantic concept of range, they yield results for the ethical problems treated here only if grounded in certain kinds of pragmatic consensus.
Llull and Leibniz both subscribed to conceptual atomism: the belief that the majority of concepts are compounds constructed from a relatively small number of primitive concepts. Llull worked out techniques for finding the logically possible combinations of his primitives, but Leibniz criticized Llull’s execution of these techniques. This article argues that Leibniz was right about things being more complicated than Llull thought but that he was wrong about the details. The paper attempts to correct these details.
This chapter identifies two types of moral dilemma. The first type is described as ethical clash: whether affirmative action is just or unjust, for example, or whether withholding information from an inquisitive relative is honest or dishonest. In these cases the dilemma takes the form of conflict between an ethical predicate and its complement. The second type of moral dilemma is ethical overlap. Instead of a clash between a single predicate and its complement, here two or more predicates apply. Dilemmas associated with white lies, for example, often stem from the recognition that such acts both are dishonest and avoid inflicting pain. Similarly, social dilemmas over progressive taxation may arise despite agreement that progressive systems both decrease liberty and increase equality. Which predicate should take precedence? Strategies for dealing with both types of dilemma are proposed.
This chapter has two objectives. The first is to clarify Aristotle’s view of the first principles of the sciences. The second is to stake out a critical position with respect to this view. The paper sketches an alternative to Aristotle’s intuitionism based in part on the use of quantitative inductive logics.
This article sketches descriptive and normative components of a theory of ethical value. The normative component, which receives the lion’s share of attention, is developed by adapting Laudan’s levels of scientific discourse. The resulting levels of ethical discourse can be critically addressed through the use of inductive inference, falsification, and causal inference. These techniques are likewise appropriate to the corresponding levels of scientific discourse.
The new evidence scholarship encompasses three distinct approaches: legal probabilism, Bayesian decision theory and relative plausibility theory. Each has major insights to offer, but none seems satisfactory as it stands. This paper proposes that relative plausibility theory be modified in two substantial ways. The first is by defining its key concept of plausibility, hitherto treated as primitive, by generalising the standard axioms of probability. The second is by complementing the descriptive component of the theory with a normative decision theory adapted to legal process. Because this version of decision theory is based on plausibilities rather than probabilities, it generates plausibilistic expectations as outputs. Because these outputs are comparable, they function as relative plausibilities. Hence the resulting framework is an extension of relative plausibility theory, but it retains deep ties to legal probabilism, through the proposed definition of plausibility, and to Bayesian decision theory, through the normative use of decision theory.
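For context, one standard way of generalising the probability axioms, due to Friedman and Halpern, yields what are called plausibility measures; whether the paper adopts exactly this definition is not stated in the abstract. A plausibility measure $\mathrm{Pl}$ maps events in a space $W$ to a partially ordered set with minimum $\bot$ and maximum $\top$ such that

$$\mathrm{Pl}(\emptyset) = \bot, \qquad \mathrm{Pl}(W) = \top, \qquad A \subseteq B \;\Rightarrow\; \mathrm{Pl}(A) \le \mathrm{Pl}(B).$$

Probability is the special case in which the range is $[0,1]$ with its usual order and additivity is restored.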
Some arguments are good; others are not. How can we tell the difference? This article advances three proposals as a partial answer to this question. The proposals are keyed to arguments conditioned by different degrees of uncertainty: mild, where the argument’s premises are hedged with point-valued probabilities; moderate, where the premises are hedged with interval probabilities; and severe, where the premises are hedged with non-numeric plausibilities such as ‘very likely’ or ‘unconfirmed’. For mild uncertainty, the article proposes to apply a principle referred to as ‘Jeffrey’s rule’, for the principle is a generalization of Jeffrey conditionalization. For moderate uncertainty, the proposal is to extend Jeffrey’s rule for use with probability intervals. For severe uncertainty, the article proposes that even when lack of probabilistic information prevents the application of Jeffrey’s rule, the rule can be adapted to these conditions with the aid of a suitable plausibility measure. Together, the three proposals introduce an approach to argument evaluation that complements established frameworks for evaluating arguments: deductive soundness, informal logic, argumentation schemes, pragma-dialectics, and Bayesian inference. Nevertheless, this approach can be looked at as a generalization of the truth and validity conditions of the classical criterion for sound argumentation.
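For reference, the principle that the article’s ‘Jeffrey’s rule’ is said to generalize, Jeffrey conditionalization, has the following textbook form; the partition notation is supplied here and is not taken from the article. If experience shifts the probabilities of a partition $\{E_1, \ldots, E_k\}$ from $P(E_i)$ to $P'(E_i)$, the new probability of any proposition $A$ is

$$P'(A) \;=\; \sum_{i=1}^{k} P(A \mid E_i)\, P'(E_i).$$

Ordinary conditionalization is the special case in which some $P'(E_i) = 1$.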
Why should coherence be an epistemic desideratum? One response is that coherence is truth-conducive: mutually coherent propositions are more likely to be true, ceteris paribus, than mutually incoherent ones. But some sets of propositions are more coherent, while others are less so. How could coherence be measured? Probabilistic measures of coherence exist; some are identical to probabilistic measures of confirmation, while others are extensions of such measures. Probabilistic measures of coherence are fine when applicable, but many situations are so information-poor that the requisite probabilities cannot be determined. To measure coherence in these cognitively impoverished situations, this article proposes that the discussion be broadened to include plausibilistic measures of coherence. It shows how plausibilistic measures of coherence can be defined using plausibilistic measures of confirmation. It then illustrates how plausibilistic coherence can be measured in situations where probabilistic coherence cannot be determined. The coherence values obtained through the use of plausibilistic measures are often, though not always, comparable. The article also shows that coherence can be instantiated on different levels, one of which permits connections to inductive strength and deductive validity.
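As an illustration of the probabilistic measures mentioned above, one widely discussed example is Shogenji’s coherence measure; the abstract does not say whether the article relies on this particular measure.

$$C(A_1, \ldots, A_n) \;=\; \frac{P(A_1 \wedge \cdots \wedge A_n)}{P(A_1)\,P(A_2)\cdots P(A_n)}.$$

For two propositions this reduces to $P(A_1 \mid A_2)/P(A_1)$, the same ratio used in a familiar measure of confirmation, which is one sense in which coherence measures can be identical to, or extensions of, confirmation measures.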
How can the objectivity of an argument’s conclusion be determined? To propose an answer, this paper builds on Betz’s view of premises as hedged hypotheses. If an argument’s premises are hedged, its conclusion must be hedged as well. But how? The paper first introduces a two-dimensional critical grid. The grid’s vertical dimension is inductive, reflecting the argument’s downward flow from premises to conclusion. It specifies the inductive probability of the conclusion given the premises. The grid’s horizontal dimension is epistemic, focusing on the premises without dropping down to the conclusion. It evaluates the epistemic probability of the premises when conjoined. This two-dimensional grid is then applied to three kinds of cases: vertical and horizontal evaluations rely on point-valued probabilities; vertical and horizontal evaluations rely on interval probabilities; vertical and horizontal evaluations rely on non-numeric plausibilities. The result is that, in each case, the argument’s conclusion can be assigned a credence tag, as it were, that reflects a critical appraisal of its objectivity. Reference: Betz, Gregor. 2013. “In defence of the value free ideal.” European Journal for Philosophy of Science 3: 207–220.
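The abstract does not spell out how the two dimensions combine, but in the point-valued case one standard combination is available from the probability calculus alone; the symbol $\Pi$ for the conjunction of the premises is introduced here for illustration. Writing $C$ for the conclusion,

$$P(C) \;=\; P(C \mid \Pi)\,P(\Pi) + P(C \mid \neg\Pi)\,P(\neg\Pi) \;\ge\; P(C \mid \Pi)\,P(\Pi),$$

so a vertical value of, say, 0.9 and a horizontal value of 0.8 guarantee a credence floor of 0.72 for the conclusion.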
Human decisions are conditioned by formidable uncertainty. The standard resource for dealing rationally with uncertainty is the mathematical concept of probability. The probability calculus is well-known, but since the numerical demands for applying it cannot usually be met, it is not widely applicable. By contrast, the concept of plausibility is widely applicable, but it is little known. This book relies on a generalized concept of plausibility whose strength is its adaptability. The adaptability is due to a novel form of decision theory that takes plausibilities as inputs. This form of decision theory can be adapted to decisions informed by sharp probabilities and utilities as well as to decisions that must be made without them. The book illustrates its application to problems in argumentation theory, scientific theory choice, risk management, ethics, law, economics, and epistemology.
Reyes Mate's Memory of the West looks back in order to look forward. It is a sustained reflection on the great disillusion Europe experienced after World War I. Europeans understood that bombs had buried the Enlightenment. They knew that, to avoid catastrophe, they had to think anew. The catastrophe came, but Cohen, Benjamin, Kafka, and Rosenzweig had sounded the warning.
Other Voices: Readings in Spanish Philosophy represents high points of nearly two millennia of Spanish philosophy, from first-century thinkers in Roman Hispania to those of the twentieth century. John R. Welch has selected, and in several cases translated, excerpts from the works of thirteen philosophers: Seneca, Quintilian, Isidore of Seville, Ibn Rushd (Averroës), Moses Maimonides, Ramón Llull, Juan Luis Vives, Francisco de Vitoria, Bartolomé de Las Casas, Francisco Suárez, Benito Jerónimo Feijóo, Miguel de Unamuno, and José Ortega y Gasset. Welch provides a brief introduction to each historical period or philosophical movement represented and a biographical introduction to each philosopher. Of special interest are the selection from Feijóo’s “A Defense of Women” (an attack on misogyny), which has not been translated into English since the eighteenth century; the arguments on the justification of war by Vitoria and Las Casas (in the context of the Spanish conquest); and Ortega's defense of a specific form of reason: historical reason.
According to Quine, terms of divided reference like 'rabbit' have two sorts of problems: problems of direct and deferred ostension. Hence the reference of these terms is inscrutable. This article holds that the problems of deferred ostension can be handled by Goodman's theory of projection, and that the problems of direct ostension turn out to be pedestrian problems of signs.
Ramon Llull was acutely aware of Islamic and Jewish divergences from Christian belief. He undertook a quest for "necessary reasons" to show that, where these belief systems diverged, Christian belief is true. Though largely self-taught, Llull managed three stays at the University of Paris. Encounters between the incandescent Mallorcan and academic orthodoxy contributed hugely to Llull's changing conception of necessary reasons. These changes are abundantly documented in Anthony Bonner's The Art and Logic of Ramon Llull. Llull's understanding of necessary reasons is different in the Ars magna's quaternary phase, its ternary phase, and a post-Art logical phase. In the quaternary phase, necessary reasons are often presented as demonstrative; fully ten works use the term in their title. Although Llull meant 'demonstration' in a sense broader than Aristotle's, Parisian academics objected, mistakenly thinking he was trying to prove dogmas like the Trinity and Incarnation through Aristotelian demonstrations propter quid and quia. As Bonner conjectures, this may be why no work from the ternary phase mentions necessary reasons in its title.
This article defends the philosophy of the Renaissance against a critique by Ortega y Gasset. Renaissance philosophy, it is argued, was a rebirth of the Hellenistic and Roman conviction that theory should not be pursued for its own sake; rather, it should be kept on a short leash controlled by practical ends. This Renaissance view is a precursor to the contemporary anti-theory of thinkers like Aranguren, Toulmin, and Williams.
How do we distinguish good and bad analogies? Luis A. Camacho proposed that false analogies be construed as false material conditionals. This article offers a counter-proposal: analogies of all sorts can be understood as singular inductive inferences. For the sake of simplicity, this proposal is illustrated with reference to Carnap's favorite inductive method c*.
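For readers unfamiliar with c*, its standard rule for singular predictive inference can be stated as follows; the notation ($s$, $s_M$, $w$, $\kappa$) is supplied here for illustration and is not the article’s. If the evidence reports that $s_M$ of $s$ observed individuals have a property $M$ of logical width $w$ in a language with $\kappa$ Q-predicates, then the degree of confirmation that the next individual has $M$ is

$$c^*(h_M, e) \;=\; \frac{s_M + w}{s + \kappa}.$$

On the construal proposed above, evaluating an analogy then becomes a matter of asking how strongly the observed cases confirm the claim that the new, analogous case shares the property in question.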
Henrike Jansen’s “The strategic formulation of abductive arguments in everyday reasoning” insightfully explores the terrain of abductive argumentation. The purpose of this note is to continue the exploration along lines marked out by her paper. This further exploration proceeds in two stages. Section 2 of the paper addresses the nature of abductive inference by distinguishing two types of abduction, identifying some of abduction’s formal and nonformal properties, and relating abduction to enthymematic inference. Section 3 focuses on some of Jansen’s examples, paying particular attention to the distinction between abduction and argument from sign. Whereas Jansen maintains that some arguments from sign are not abductive, the paper suggests an alternative perspective from which arguments from sign can generally be viewed as one sort of abductive inference.
Javier Muguerza’s Ethics and Perplexity makes a highly original contribution to the debate over dialogical reason. The work opens with a letter that establishes a parallel between Ethics and Perplexity and Maimonides’s classic Guide of the Perplexed. It concludes with an interview that repeatedly strikes sparks on Spanish philosophy’s emergence from its “long quarantine,” as Muguerza puts it. These informal pieces—witty, informative, conversational—orbit the nucleus of the work: a formidable critique of dialogical reason. The result is a volume by turns vivid and profound.
Individual people are morally responsible. But can groups of people - corporations and nations, for example - be morally responsible as well? An affirmative answer has been defended by appealing to two criteria, here identified as the turnover test and the distribution test. The article argues for a Scotch verdict: neither criterion proves the point.