Existing research suggests that people's judgments of actual causation can be influenced by the degree to which they regard certain events as normal. We develop an explanation for this phenomenon that draws on standard tools from the literature on graphical causal models and, in particular, on the idea of probabilistic sampling. Using these tools, we propose a new measure of actual causal strength. This measure accurately captures three effects of normality on causal judgment that have been observed in existing studies. More importantly, the measure predicts a new effect ("abnormal deflation"). Two studies show that people's judgments do, in fact, show this new effect. Taken together, the patterns of people's causal judgments thereby provide support for the proposed explanation.
In Pietroski (2018) a simple representation language called SMPL is introduced, construed as a hypothesis about core conceptual structure. The present work is a study of this system from a logical perspective. In addition to establishing a completeness result and a complexity characterization for reasoning in the system, we also pinpoint its expressive limits, in particular showing that the fourth corner in the square of opposition ("Some_not") eludes expression. We then study a seemingly small extension, called SMPL+, which allows for a minimal predicate-binding operator. Perhaps surprisingly, the resulting system is shown to encode precisely the concepts expressible in first-order logic. However, unlike the latter class, the class of SMPL+ expressions admits a simple procedural (context-free) characterization. Our contribution brings together research strands in logic—including natural logic, modal logic, description logic, and hybrid logic—with recent advances in semantics and philosophy of language.
While Bayesian models have been applied to an impressive range of cognitive phenomena, methodological challenges have been leveled concerning their role in the program of rational analysis. The focus of the current article is on computational impediments to probabilistic inference and related puzzles about empirical confirmation of these models. The proposal is to rethink the role of Bayesian methods in rational analysis, to adopt an independently motivated notion of rationality appropriate for computationally bounded agents, and to explore broad conditions under which Bayesian agents would be rational. The proposal is illustrated with a characterization of costs inspired by thermodynamics.
A well-known open problem in epistemic logic is to give a syntactic characterization of the successful formulas. Semantically, a formula is successful if and only if for any pointed model where it is true, it remains true after deleting all points where the formula was false. The classic example of a formula that is not successful in this sense is the "Moore sentence" p ∧ ¬□p, read as "p is true but you do not know p." Not only is the Moore sentence unsuccessful, it is self-refuting, for it never remains true as described. We show that in logics of knowledge and belief for a single agent (extending S5), Moorean phenomena are the source of all self-refutation; moreover, in logics for an introspective agent (extending KD45), Moorean phenomena are the source of all unsuccessfulness as well. This is a distinctive feature of such logics, for with a non-introspective agent or multiple agents, non-Moorean unsuccessful formulas appear. We also consider how successful and self-refuting formulas relate to the Cartesian and learnable formulas, which have been discussed in connection with Fitch's "paradox of knowability." We show that the Cartesian formulas are exactly the formulas that are not eventually self-refuting and that not all learnable formulas are successful. In an appendix, we give syntactic characterizations of the successful and the self-refuting formulas.
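The self-refuting behavior of the Moore sentence described above can be checked concretely. The following is a minimal sketch (our own illustration, not the paper's formalism): a two-world single-agent S5 model in which knowledge is evaluated over all surviving worlds, and announcement deletes the worlds where the announced formula is false.

```python
# Minimal single-agent S5 sketch (illustrative names, not the paper's):
# the agent's epistemic alternatives are all surviving worlds, so K(phi)
# holds iff phi holds at every world in the model.

def K(phi, worlds):
    """Knowledge under the universal (S5) relation."""
    return all(phi(w) for w in worlds)

def announce(phi, worlds):
    """Public announcement of phi: delete the worlds where phi is false."""
    return [w for w in worlds if phi(w)]

# Two worlds the agent cannot distinguish; p is true only at w1.
worlds = ["w1", "w2"]
p = lambda w: w == "w1"

# Moore sentence at a world: p ∧ ¬Kp.
moore = lambda w: p(w) and not K(p, worlds)

print(moore("w1"))          # True: p holds but is unknown

# Announcing the Moore sentence leaves only w1, where the agent now
# knows p, so the sentence is false afterwards: it is self-refuting.
updated = announce(moore, worlds)
moore_after = lambda w: p(w) and not K(p, updated)
print(moore_after("w1"))    # False
```

The sentence was true at w1 before the announcement and false at w1 afterwards, which is exactly the failure of success (and, here, self-refutation) that the abstract describes.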
In this paper, we explore semantics for comparative epistemic modals that avoid the entailment problems shown to result from Kratzer’s (1991) semantics by Yalcin (2006, 2009, 2010). In contrast to the alternative semantics presented by Yalcin and Lassiter (2010, 2011), based on finitely additive probability measures, we introduce semantics based on qualitatively additive measures, as well as semantics based on purely qualitative orderings, including orderings on propositions derived from orderings on worlds in the tradition of Kratzer (1991). All of these semantics avoid the entailment problems that result from Kratzer’s semantics. Our discussion focuses on methodological issues concerning the choice between different semantics.
This paper studies connections between two alternatives to the standard probability calculus for representing and reasoning about uncertainty: imprecise probability and comparative probability. The goal is to identify complete logics for reasoning about uncertainty in a comparative probabilistic language whose semantics is given in terms of imprecise probability. Comparative probability operators are interpreted as quantifying over a set of probability measures. Modal and dynamic operators are added for reasoning about epistemic possibility and updating sets of probability measures.
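The quantification over a set of measures can be sketched in a few lines. This is our own toy illustration (invented state and measure names, not the paper's semantics): "A is at least as likely as B" holds iff every measure in the representing set agrees, which naturally yields an incomplete comparative relation.

```python
# Toy imprecise-probability semantics for a comparative operator
# (illustrative names; not the paper's formal system).

states = ["s1", "s2", "s3"]

# A representing set of two probability measures (each sums to 1).
mu1 = {"s1": 0.6, "s2": 0.2, "s3": 0.2}
mu2 = {"s1": 0.2, "s2": 0.6, "s3": 0.2}
measures = [mu1, mu2]

def prob(mu, event):
    """Probability of an event (a set of states) under measure mu."""
    return sum(mu[s] for s in event)

def at_least_as_likely(A, B):
    """A ≽ B: mu(A) >= mu(B) for every measure in the set."""
    return all(prob(mu, A) >= prob(mu, B) for mu in measures)

A, B = {"s1"}, {"s2"}
print(at_least_as_likely(A, B))                  # False: mu2 disagrees
print(at_least_as_likely(B, A))                  # False: mu1 disagrees
print(at_least_as_likely({"s1", "s2"}, {"s3"}))  # True: all measures agree
```

Because the two measures rank A and B oppositely, neither comparison holds: the relation induced by a set of measures need not be complete, unlike one induced by a single measure.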
We prove that the generalized cancellation axiom for incomplete comparative probability relations, introduced by Rios Insua and by Alon and Lehrer, is stronger than the standard cancellation axiom for complete comparative probability relations introduced by Scott, relative to their other axioms for comparative probability in both the finite and infinite cases. This result has been suggested but not proved in the previous literature.
While pragmatic arguments for numerical probability axioms have received much attention, justifications for axioms of qualitative probability have been less discussed. We offer an argument for the requirement that an agent’s qualitative judgments be probabilistically representable, inspired by, but importantly different from, the Money Pump argument for transitivity of preference and Dutch book arguments for quantitative coherence. The argument is supported by a theorem, to the effect that a subject is systematically susceptible to dominance given her preferred acts if and only if the subject’s comparative judgments preclude representation by a standard probability measure.
Unlike standard modal logics, many dynamic epistemic logics are not closed under uniform substitution. A distinction therefore arises between the logic and its substitution core, the set of formulas all of whose substitution instances are valid. The classic example of a non-uniform dynamic epistemic logic is Public Announcement Logic (PAL), and a well-known open problem is to axiomatize the substitution core of PAL. In this paper we solve this problem for PAL over the class of all relational models with infinitely many agents, PAL-Kω, as well as standard extensions thereof, e.g., PAL-Tω, PAL-S4ω, and PAL-S5ω. We introduce a new Uniform Public Announcement Logic (UPAL), prove completeness of a deductive system with respect to UPAL semantics, and show that this system axiomatizes the substitution core of PAL.
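The failure of uniform substitution can be seen in a toy model. The following is our own sketch (not from the paper): the PAL schema [p]Kp is valid for an atomic p, but substituting the Moore sentence p ∧ ¬Kp for p yields an invalid formula.

```python
# Toy illustration of PAL's failure of uniform substitution
# (our own sketch; invented names).

def K(phi, worlds):
    """Single-agent S5 knowledge: phi holds at every alternative."""
    return all(phi(w) for w in worlds)

def announce(phi, worlds):
    """Public announcement: restrict to the worlds satisfying phi."""
    return [w for w in worlds if phi(w)]

worlds = ["w1", "w2"]              # agent cannot distinguish w1 from w2
p = lambda w: w == "w1"            # p true only at w1

# [p]Kp at w1: after announcing p, the agent knows p.
print(K(p, announce(p, worlds)))   # True

# Now substitute the Moore sentence p ∧ ¬Kp for p.
moore = lambda w: p(w) and not K(p, worlds)
updated = announce(moore, worlds)
moore_updated = lambda w: p(w) and not K(p, updated)

# [moore]K(moore) at w1 fails: after the announcement, moore is false
# everywhere in the updated model, so it certainly is not known.
print(K(moore_updated, updated))   # False
```

So the schema holds for the atom but not for this substitution instance, which is why the substitution core of PAL is a proper subset of PAL and requires its own axiomatization.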
Existing research has shown that norm violations influence causal judgments, and a number of different models have been developed to explain these effects. One such model, the necessity/sufficiency model, predicts an interaction pattern in people’s judgments. Specifically, it predicts that when people are judging the degree to which a particular factor is a cause, there should be an interaction between (a) the degree to which that factor violates a norm and (b) the degree to which another factor in the situation violates norms. A study of moral norms (N = 1000) and norms of proper functioning (N = 3000) revealed robust evidence for the predicted interaction effect. The implications of these patterns for existing theories of causal judgments are discussed.
The semantic automata framework, developed originally in the 1980s, provides computational interpretations of generalized quantifiers. While recent experimental results have associated structural features of these automata with neuroanatomical demands in processing sentences with quantifiers, the theoretical framework has remained largely unexplored. In this paper, after presenting some classic results on semantic automata in a modern style, we present the first application of semantic automata to polyadic quantification, exhibiting automata for iterated quantifiers. We also discuss the role of semantic automata in linguistic theory and offer new empirical predictions for sentence processing with embedded quantifiers.
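The basic idea behind semantic automata can be sketched briefly. In the standard construction, a sentence "Q As are Bs" is encoded as a string over {0, 1}, one symbol per A, with 1 meaning that the A is also a B; the quantifier is the acceptance condition of an automaton reading that string. The examples below are our own minimal renderings (the first two quantifiers need only finite acceptors; the parity example shows why some quantifiers genuinely require automaton state).

```python
# Quantifiers as acceptors over {0,1}-strings, one symbol per element of
# the restrictor set A; "1" means that element is also in B.
# (Our own minimal sketch of the standard construction.)

def every(string):
    """'every A is a B': accept iff no 0 occurs."""
    return all(ch == "1" for ch in string)

def some(string):
    """'some A is a B': accept iff at least one 1 occurs."""
    return any(ch == "1" for ch in string)

def an_even_number_of(string):
    """'an even number of As are Bs': a two-state DFA tracking parity."""
    state = 0                      # 0 = even number of 1s seen so far
    for ch in string:
        if ch == "1":
            state = 1 - state      # flip parity on each 1
    return state == 0

print(every("111"))                # True: all As are Bs
print(some("000"))                 # False: no A is a B
print(an_even_number_of("1101"))   # False: three 1s
```

The first two acceptance conditions can be computed by two-state DFAs that never need to revisit input, whereas quantifiers like "most" push the construction up the automata hierarchy; this is the kind of structural difference the neuroanatomical results mentioned above track.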
The provability logic of a theory T is the set of modal formulas which, under any arithmetical realization, are provable in T. We slightly modify this notion by requiring the arithmetical realizations to come from a specified set Γ. We make an analogous modification for interpretability logics. We first study provability logics with restricted realizations and show that for various natural candidates of T and restriction set Γ, the result is the logic of linear frames. However, for the theory Primitive Recursive Arithmetic (PRA), we define a fragment that gives rise to a more interesting provability logic by capitalizing on the well-studied relationship between PRA and IΣ₁. We then study interpretability logics, obtaining upper bounds for IL(PRA), whose characterization remains a major open question in interpretability logic. Again this upper bound is closely related to linear frames. The technique is also applied to yield the nontrivial result that IL(PRA) ⊂ ILM.
We present a formal system for reasoning about inclusion and exclusion in natural language, following work by MacCartney and Manning. In particular, we show that an extension of the Monotonicity Calculus, augmented by six new type markings, is sufficient to derive novel inferences beyond monotonicity reasoning, and moreover gives rise to an interesting logic of its own. We prove soundness of the resulting calculus and discuss further logical and linguistic issues, including a new connection to the classes of weak, strong, and superstrong negative polarity items.
Theories of rational decision making often abstract away from computational and other resource limitations faced by real agents. An alternative approach known as resource rationality puts such matters front and center, grounding choice and decision in the rational use of finite resources. Anticipated by earlier work in economics and in computer science, this approach has recently seen rapid development and application in the cognitive sciences. Here, the theory of rationality plays a dual role, both as a framework for normative assessment and as a source of scientific hypotheses about how mental processes in fact work. The latter project, often called rational analysis, depends for its success on a fine-grained characterization of the computational problem facing a decision maker, which may in turn depend on realistic assumptions about what the relevant agent is like. As a consequence, resource rationality involves a delicate but often fruitful interplay between the normative and the descriptive.