Existing research suggests that people's judgments of actual causation can be influenced by the degree to which they regard certain events as normal. We develop an explanation for this phenomenon that draws on standard tools from the literature on graphical causal models and, in particular, on the idea of probabilistic sampling. Using these tools, we propose a new measure of actual causal strength. This measure accurately captures three effects of normality on causal judgment that have been observed in existing studies. More importantly, the measure predicts a new effect ("abnormal deflation"). Two studies show that people's judgments do, in fact, show this new effect. Taken together, the patterns of people's causal judgments thereby provide support for the proposed explanation.
When does it make sense to act randomly? A persuasive argument from Bayesian decision theory legitimizes randomization essentially only in tie-breaking situations. Rational behaviour in humans, non-human animals, and artificial agents, however, often seems indeterminate, even random. Moreover, rationales for randomized acts have been offered in a number of disciplines, including game theory, experimental design, and machine learning. A common way of accommodating some of these observations is by appeal to a decision-maker’s bounded computational resources. Making this suggestion both precise and compelling is surprisingly difficult. Toward this end, I propose two fundamental rationales for randomization, drawing upon diverse ideas and results from the wider theory of computation. The first unifies common intuitions in favour of randomization from the aforementioned disciplines. The second introduces a deep connection between randomization and memory: access to a randomizing device is provably helpful for an agent burdened with a finite memory. Aside from fit with ordinary intuitions about rational action, the two rationales also make sense of empirical observations in the biological world. Indeed, random behaviour emerges more or less where it should, according to the proposal.
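The memory rationale can be illustrated with a toy simulation (not from the paper; all names here are hypothetical): in repeated matching pennies against an opponent that best-responds to the agent's empirical move frequencies, a memoryless deterministic agent is fully exploited, while a coin-flipping agent secures about half the rounds.

```python
import random

def play(agent, rounds=1000, seed=0):
    """Repeated matching pennies; the agent wins a round when the moves match.
    The opponent predicts the agent's historically most frequent move and
    plays to mismatch it."""
    rng = random.Random(seed)
    counts = {'H': 0, 'T': 0}
    wins = 0
    for _ in range(rounds):
        move = agent(rng)
        predicted = 'H' if counts['H'] >= counts['T'] else 'T'
        opponent = 'T' if predicted == 'H' else 'H'
        wins += (move == opponent)
        counts[move] += 1
    return wins / rounds

deterministic = play(lambda rng: 'H')            # always exploited: wins nothing
randomized = play(lambda rng: rng.choice('HT'))  # wins roughly half the rounds
print(deterministic, randomized)
```

The deterministic agent's play is predictable from its (empty) memory, so the adaptive opponent mismatches it every round; access to a randomizing device breaks this exploitability.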
Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers to some of the most well known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
A well-known open problem in epistemic logic is to give a syntactic characterization of the successful formulas. Semantically, a formula is successful if and only if for any pointed model where it is true, it remains true after deleting all points where the formula was false. The classic example of a formula that is not successful in this sense is the “Moore sentence” p ∧ ¬□p, read as “p is true but you do not know p.” Not only is the Moore sentence unsuccessful, it is self-refuting, for it never remains true as described. We show that in logics of knowledge and belief for a single agent (extending S5), Moorean phenomena are the source of all self-refutation; moreover, in logics for an introspective agent (extending KD45), Moorean phenomena are the source of all unsuccessfulness as well. This is a distinctive feature of such logics, for with a non-introspective agent or multiple agents, non-Moorean unsuccessful formulas appear. We also consider how successful and self-refuting formulas relate to the Cartesian and learnable formulas, which have been discussed in connection with Fitch’s “paradox of knowability.” We show that the Cartesian formulas are exactly the formulas that are not eventually self-refuting and that not all learnable formulas are successful. In an appendix, we give syntactic characterizations of the successful and the self-refuting formulas.
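The self-refutation of the Moore sentence can be checked concretely. Here is a minimal sketch (assumed representation, not from the paper): a single-agent S5 model with a universal accessibility relation, so that “K φ” holds just in case φ holds at every surviving world.

```python
def K(prop, worlds):
    """Knowledge under a universal S5 relation: the extension of K(prop) is
    all worlds if prop holds everywhere, and empty otherwise."""
    return worlds if worlds <= prop else set()

def moore(worlds, p_worlds):
    """Extension of the Moore sentence p & not-K p in the given model."""
    p = p_worlds & worlds
    return p - K(p, worlds)

worlds = {1, 2}    # world 1: p true; world 2: p false
p_worlds = {1}

before = moore(worlds, p_worlds)   # true at world 1: p holds but is not known
updated = worlds & before          # delete the worlds where the sentence was false
after = moore(updated, p_worlds)   # now p is known, so the sentence fails: empty

print(before, updated, after)  # {1} {1} set()
```

After the update the sentence is false at every world where it was true, which is exactly the self-refutation property described in the abstract.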
While Bayesian models have been applied to an impressive range of cognitive phenomena, methodological challenges have been leveled concerning their role in the program of rational analysis. The focus of the current article is on computational impediments to probabilistic inference and related puzzles about empirical confirmation of these models. The proposal is to rethink the role of Bayesian methods in rational analysis, to adopt an independently motivated notion of rationality appropriate for computationally bounded agents, and to explore broad conditions under which Bayesian agents would be rational. The proposal is illustrated with a characterization of costs inspired by thermodynamics.
In this paper, we explore semantics for comparative epistemic modals that avoid the entailment problems shown to result from Kratzer’s (1991) semantics by Yalcin (2006, 2009, 2010). In contrast to the alternative semantics presented by Yalcin and Lassiter (2010, 2011), based on finitely additive probability measures, we introduce semantics based on qualitatively additive measures, as well as semantics based on purely qualitative orderings, including orderings on propositions derived from orderings on worlds in the tradition of Kratzer (1991). All of these semantics avoid the entailment problems that result from Kratzer’s semantics. Our discussion focuses on methodological issues concerning the choice between different semantics.
We prove that the generalized cancellation axiom for incomplete comparative probability relations introduced by Rios Insua and by Alon and Lehrer is stronger than the standard cancellation axiom for complete comparative probability relations introduced by Scott, relative to their other axioms for comparative probability in both the finite and infinite cases. This result has been suggested but not proved in the previous literature.
While pragmatic arguments for numerical probability axioms have received much attention, justifications for axioms of qualitative probability have been less discussed. We offer an argument for the requirement that an agent’s qualitative judgments be probabilistically representable, inspired by, but importantly different from, the Money Pump argument for transitivity of preference and Dutch book arguments for quantitative coherence. The argument is supported by a theorem, to the effect that a subject is systematically susceptible to dominance given her preferred acts if and only if the subject’s comparative judgments preclude representation by a standard probability measure.
Recent ideas about epistemic modals and indicative conditionals in formal semantics have significant overlap with ideas in modal logic and dynamic epistemic logic. The purpose of this paper is to show how greater interaction between formal semantics and dynamic epistemic logic in this area can be of mutual benefit. In one direction, we show how concepts and tools from modal logic and dynamic epistemic logic can be used to give a simple, complete axiomatization of Yalcin's [16] semantic consequence relation for a language with epistemic modals and indicative conditionals. In the other direction, the formal semantics for indicative conditionals due to Kolodny and MacFarlane [9] gives rise to a new dynamic operator that is very natural from the point of view of dynamic epistemic logic, allowing succinct expression of dependence (as in dependence logic) or supervenience statements. We prove decidability for the logic with epistemic modals and Kolodny and MacFarlane's indicative conditional via a full and faithful computable translation from their logic to the modal logic K45.
Unlike standard modal logics, many dynamic epistemic logics are not closed under uniform substitution. A distinction therefore arises between the logic and its substitution core, the set of formulas all of whose substitution instances are valid. The classic example of a non-uniform dynamic epistemic logic is Public Announcement Logic (PAL), and a well-known open problem is to axiomatize the substitution core of PAL. In this paper we solve this problem for PAL over the class of all relational models with infinitely many agents, PAL-Kω, as well as standard extensions thereof, e.g., PAL-Tω, PAL-S4ω, and PAL-S5ω. We introduce a new Uniform Public Announcement Logic (UPAL), prove completeness of a deductive system with respect to UPAL semantics, and show that this system axiomatizes the substitution core of PAL.
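The failure of uniform substitution in PAL can be witnessed on a two-world model. Below is a minimal sketch of a single-agent PAL model checker (the tuple encoding of formulas is an assumption of this illustration): the scheme [!p]Kp holds throughout the model, but the instance obtained by substituting the Moore sentence p ∧ ¬Kp for p does not.

```python
def holds(worlds, rel, val, w, f):
    """Evaluate a formula f (a nested tuple) at world w of a Kripke model.
    'ann' encodes public announcement: ('ann', f1, f2) means [!f1]f2."""
    op = f[0]
    if op == 'atom':
        return w in val[f[1]]
    if op == 'not':
        return not holds(worlds, rel, val, w, f[1])
    if op == 'and':
        return holds(worlds, rel, val, w, f[1]) and holds(worlds, rel, val, w, f[2])
    if op == 'K':
        return all(holds(worlds, rel, val, v, f[1])
                   for v in worlds if (w, v) in rel)
    if op == 'ann':
        if not holds(worlds, rel, val, w, f[1]):
            return True  # announcement cannot be made: vacuously true
        kept = {v for v in worlds if holds(worlds, rel, val, v, f[1])}
        kept_rel = {e for e in rel if e[0] in kept and e[1] in kept}
        return holds(kept, kept_rel, val, w, f[2])
    raise ValueError(op)

worlds = {1, 2}
rel = {(a, b) for a in worlds for b in worlds}  # universal S5 relation
val = {'p': {1}}

p = ('atom', 'p')
moore = ('and', p, ('not', ('K', p)))

# [!p]Kp holds at every world, but substituting the Moore sentence breaks it.
valid_scheme = all(holds(worlds, rel, val, w, ('ann', p, ('K', p))) for w in worlds)
instance = all(holds(worlds, rel, val, w, ('ann', moore, ('K', moore))) for w in worlds)
print(valid_scheme, instance)  # True False
```

This is exactly the sense in which PAL's substitution core is a proper subset of PAL itself.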
This paper studies connections between two alternatives to the standard probability calculus for representing and reasoning about uncertainty: imprecise probability and comparative probability. The goal is to identify complete logics for reasoning about uncertainty in a comparative probabilistic language whose semantics is given in terms of imprecise probability. Comparative probability operators are interpreted as quantifying over a set of probability measures. Modal and dynamic operators are added for reasoning about epistemic possibility and updating sets of probability measures.
The semantic automata framework, developed originally in the 1980s, provides computational interpretations of generalized quantifiers. While recent experimental results have associated structural features of these automata with neuroanatomical demands in processing sentences with quantifiers, the theoretical framework has remained largely unexplored. In this paper, after presenting some classic results on semantic automata in a modern style, we present the first application of semantic automata to polyadic quantification, exhibiting automata for iterated quantifiers. We also discuss the role of semantic automata in linguistic theory and offer new empirical predictions for sentence processing with embedded quantifiers.
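The basic idea of semantic automata can be sketched in a few lines (an illustrative sketch, not the paper's formalism): a sentence "Q A are B" is evaluated by running a finite automaton over a string that records, for each A-individual, whether it is a B (1) or not (0).

```python
def run_dfa(states, start, accept, delta, word):
    """Run a deterministic finite automaton on a word over {'0', '1'}."""
    q = start
    for ch in word:
        q = delta[(q, ch)]
    return q in accept

# "every": accept iff no 0 occurs (every A-individual is a B).
EVERY = dict(states={'ok', 'fail'}, start='ok', accept={'ok'},
             delta={('ok', '1'): 'ok', ('ok', '0'): 'fail',
                    ('fail', '1'): 'fail', ('fail', '0'): 'fail'})

# "an even number of": a two-state parity automaton over the 1s.
EVEN = dict(states={'even', 'odd'}, start='even', accept={'even'},
            delta={('even', '1'): 'odd', ('odd', '1'): 'even',
                   ('even', '0'): 'even', ('odd', '0'): 'odd'})

print(run_dfa(word='111', **EVERY))   # True: every A is B
print(run_dfa(word='101', **EVERY))   # False: one A is not a B
print(run_dfa(word='1010', **EVEN))   # True: two of the A's are B's
```

Structural features of such automata, e.g., the genuine two-state cycle needed for parity quantifiers, are what the cited experimental work relates to processing demands.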
The problem of inferring probability comparisons between events from an initial set of comparisons arises in several contexts, ranging from decision theory to artificial intelligence to formal semantics. In this paper, we treat the problem as follows: beginning with a binary relation ≥ on events that does not preclude a probabilistic interpretation, in the sense that ≥ has extensions that are probabilistically representable, we characterize the extension ≥+ of ≥ that is exactly the intersection of all probabilistically representable extensions of ≥. This extension ≥+ gives us all the additional comparisons that we are entitled to infer from ≥, based on the assumption that there is some probability measure of which ≥ gives us partial qualitative information. We pay special attention to the problem of extending an order on states to an order on events. In addition to the probabilistic interpretation, this problem has a more general interpretation involving measurement of any additive quantity: e.g., given comparisons between the weights of individual objects, what comparisons between the weights of groups of objects can we infer? (shrink)
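A brute-force sketch can convey the intended notion (this is a grid search for counterexample measures, merely suggestive, and not the paper's exact characterization): starting from the single comparison a > b on states {a, b, c}, we ask which event comparisons hold under every probability measure respecting it.

```python
from itertools import product
from fractions import Fraction

def measures(n=20):
    """All probability measures on {a, b, c} with values on a grid of step 1/n."""
    for i, j in product(range(n + 1), repeat=2):
        if i + j <= n:
            yield (Fraction(i, n), Fraction(j, n), Fraction(n - i - j, n))

def entailed(event1, event2, n=20):
    """True if no grid measure with P(a) > P(b) makes P(event1) <= P(event2).
    (Failure to find a counterexample on a finite grid is only suggestive.)"""
    for m in measures(n):
        P = dict(zip('abc', m))
        if P['a'] > P['b'] and sum(P[s] for s in event1) <= sum(P[s] for s in event2):
            return False
    return True

print(entailed('ac', 'bc'))  # True: {a,c} > {b,c} follows by additivity
print(entailed('a', 'c'))    # False: e.g. P = (0.3, 0.2, 0.5) respects a > b
```

The first comparison lands in the extension ≥+ because every representing measure validates it; the second does not, since some representing measures reverse it.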
While much of semantic theorizing is based on intuitions about logical phenomena associated with linguistic constructions—phenomena such as consistency and entailment—it is rare to see axiomatic treatments of linguistic fragments. Given a fragment interpreted in some class of formally specified models, it is often possible to ask for a characterization of the reasoning patterns validated by the class of models. Axiomatizations provide such a characterization, often in a perspicuous and efficient manner. In this paper, we highlight some of the benefits of providing axiomatizations for the purpose of semantic theorizing. We illustrate some of these benefits using three examples from the study of modality.
The provability logic of a theory T is the set of modal formulas that are provable in T under any arithmetical realization. We slightly modify this notion by requiring the arithmetical realizations to come from a specified set Γ. We make an analogous modification for interpretability logics. We first study provability logics with restricted realizations and show that for various natural candidates of T and restriction set Γ, the result is the logic of linear frames. However, for the theory Primitive Recursive Arithmetic (PRA), we define a fragment that gives rise to a more interesting provability logic by capitalizing on the well-studied relationship between PRA and IΣ₁. We then study interpretability logics, obtaining upper bounds for IL(PRA), whose characterization remains a major open question in interpretability logic. Again this upper bound is closely related to linear frames. The technique is also applied to yield the nontrivial result that IL(PRA) ⊂ ILM.
A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case, where the "semi-linear" languages all collapse into the regular languages, using analytic tools adapted from the classical setting we show there is no collapse in the probabilistic hierarchy: more distributions become definable at each level. We also address related issues such as closure under probabilistic conditioning.
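A small worked example conveys the flavor of probabilistic context-freeness on a unary alphabet (an illustration in the spirit of the abstract, not a result quoted from the paper). Consider the PCFG with rules S → S S (probability p) and S → a (probability 1 − p). The induced distribution over string lengths obeys the convolution recurrence P(1) = 1 − p, P(n) = p · Σ P(k)P(n − k), whose closed-form solution involves Catalan numbers:

```python
from fractions import Fraction
from math import comb

def length_dist(p, max_n):
    """Length distribution of the PCFG S -> S S (prob p) | a (prob 1-p),
    computed exactly by the convolution recurrence."""
    P = {1: 1 - p}
    for n in range(2, max_n + 1):
        P[n] = p * sum(P[k] * P[n - k] for k in range(1, n))
    return P

def closed_form(p, n):
    """Catalan closed form: P(n) = C_{n-1} * p^(n-1) * (1-p)^n."""
    catalan = Fraction(comb(2 * (n - 1), n - 1), n)
    return catalan * p ** (n - 1) * (1 - p) ** n

p = Fraction(1, 3)
P = length_dist(p, 6)
print(all(P[n] == closed_form(p, n) for n in range(1, 7)))  # True
```

The Catalan factor is the signature of context-free branching; a probabilistic regular grammar over a unary alphabet cannot produce such a distribution, illustrating the non-collapse of the probabilistic hierarchy.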
People often engage in “offline simulation”, considering what would happen if they performed certain actions in the future, or had performed different actions in the past. Prior research shows that these simulations are biased towards actions a person considers to be good—i.e., likely to pay off. We ask whether, and why, this bias might be adaptive. Through computational experiments we compare five agents who differ only in the way they engage in offline simulation, across a variety of different environment types. Broadly speaking, our experiments reveal that simulating actions one already regards as good does in fact confer an advantage in downstream decision making, although this general pattern interacts with features of the environment in important ways. We contrast this bias with alternatives such as simulating actions whose outcomes are instead uncertain.
We present a formal system for reasoning about inclusion and exclusion in natural language, following work by MacCartney and Manning. In particular, we show that an extension of the Monotonicity Calculus, augmented by six new type markings, is sufficient to derive novel inferences beyond monotonicity reasoning, and moreover gives rise to an interesting logic of its own. We prove soundness of the resulting calculus and discuss further logical and linguistic issues, including a new connection to the classes of weak, strong, and superstrong negative polarity items.
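The monotonicity reasoning that the calculus extends can be sketched in a few lines (a simplified illustration, not the paper's system): determiners carry monotonicity profiles, e.g., "every" is downward-entailing in its restrictor and upward-entailing in its scope, and polarity marks propagate by composition down the parse tree.

```python
FLIP = {'+': '-', '-': '+'}

# Monotonicity profiles: (restrictor mark, scope mark).
PROFILES = {'every': ('-', '+'), 'some': ('+', '+'), 'no': ('-', '-')}

def mark(det, restrictor, scope, polarity='+'):
    """Polarity marks for a determiner's two arguments, composed with the
    polarity of the embedding context."""
    r_mark, s_mark = PROFILES[det]
    compose = lambda m: m if polarity == '+' else FLIP[m]
    return {restrictor: compose(r_mark), scope: compose(s_mark)}

print(mark('every', 'dog', 'barks'))  # {'dog': '-', 'barks': '+'}
print(mark('no', 'dog', 'barks'))     # {'dog': '-', 'barks': '-'}
```

In a downward-entailing (−) position, a term may be replaced by a more specific one salva veritate: "every dog barks" entails "every puppy barks". The downward-entailing positions computed here are also the classic licensing environments for negative polarity items, the connection the abstract refines.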
This LNCS book is part of the FOLLI book series and constitutes the proceedings of the 8th International Workshop on Logic, Rationality, and Interaction, LORI 2021, held in Xi'an, China, in October 2021. The 15 full papers presented together with 7 short papers in this book were carefully reviewed and selected from 40 submissions. The workshop covers a wide range of topics, including doxastic and epistemic logics, deontic logic, intuitionistic and substructural logics, voting theory, and causal inference.