Jonathan Vandenburgh
Stanford University
  1. Conditional Learning Through Causal Models. Jonathan Vandenburgh - 2020 - Synthese (1-2):2415-2437.
    Conditional learning, where agents learn a conditional sentence ‘If A, then B,’ is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a causal model of conditional learning. (...)
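    A minimal numerical sketch of the non-uniformity point (illustrative only; the paper's proposal is a causal model, and the update rule below is just the simplest Bayesian rule one might try):

        # Illustrative only: conditionalizing on the material conditional
        # (not-A or B) *always* lowers P(A) whenever P(A and not-B) > 0,
        # so it cannot cover the cases where the antecedent probability
        # stays constant or rises.
        prior = {("A", "B"): 0.3, ("A", "not-B"): 0.2,
                 ("not-A", "B"): 0.2, ("not-A", "not-B"): 0.3}

        # Conditionalize on 'not-A or B': discard the A-and-not-B worlds.
        kept = {w: p for w, p in prior.items() if w != ("A", "not-B")}
        total = sum(kept.values())
        posterior = {w: p / total for w, p in kept.items()}

        p_a_before = prior[("A", "B")] + prior[("A", "not-B")]   # 0.5
        p_a_after = posterior[("A", "B")]                        # 0.3 / 0.8 = 0.375
        print(p_a_before, p_a_after)                             # P(A) drops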
  2. Causal Models and the Logic of Counterfactuals. Jonathan Vandenburgh - manuscript
    Causal models show promise as a foundation for the semantics of counterfactual sentences. However, current approaches face limitations compared to the alternative similarity theory: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This paper addresses these difficulties using exogenous interventions, where causal interventions change the values of exogenous variables rather than structural equations. This model accommodates judgments about backtracking counterfactuals, extends to logically complex counterfactuals, and validates familiar principles of counterfactual (...)
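    A toy sketch of the contrast, assuming a simple chain U → X → Y; the variable names and equations are illustrative assumptions, not taken from the paper:

        # Toy structural causal model: a chain U -> X -> Y, U exogenous.
        def solve(u):
            # Compute the endogenous variables from the exogenous setting.
            x = u        # structural equation: X := U
            y = x        # structural equation: Y := X
            return {"U": u, "X": x, "Y": y}

        def standard_intervention(u, x_new):
            # do(X = x): replace X's equation; U keeps its value (no backtracking).
            x = x_new
            y = x
            return {"U": u, "X": x, "Y": y}

        def exogenous_intervention(x_new):
            # Change U so the unaltered equations yield X = x_new (backtracking).
            u = x_new    # invert X := U to find the required exogenous value
            return solve(u)

        print(standard_intervention(u=0, x_new=1))   # {'U': 0, 'X': 1, 'Y': 1}
        print(exogenous_intervention(x_new=1))       # {'U': 1, 'X': 1, 'Y': 1}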
  3. Triviality Results, Conditional Probability, and Restrictor Conditionals. Jonathan Vandenburgh - manuscript
    Conditional probability is often used to represent the probability of the conditional. However, triviality results suggest that the thesis that the probability of the conditional always equals conditional probability leads to untenable conclusions. In this paper, I offer an interpretation of this thesis in a possible worlds framework, arguing that the triviality results make assumptions at odds with the use of conditional probability. I argue that these assumptions come from a theory called the operator theory and that the rival restrictor (...)
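    For orientation, the best-known such result (Lewis 1976) takes only a few lines. Suppose P(A → B) = P(B | A) for every probability function in a class closed under conditionalization. Then, by total probability:

        P(A → B) = P(A → B | B)·P(B) + P(A → B | ¬B)·P(¬B)
                 = P(B | A ∧ B)·P(B) + P(B | A ∧ ¬B)·P(¬B)
                 = 1·P(B) + 0·P(¬B)
                 = P(B)

    Hence P(B | A) = P(B) whenever both sides are defined, making every A and B probabilistically independent, which no non-trivial probability function satisfies.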
  4. Backtracking through interventions: An exogenous intervention model for counterfactual semantics. Jonathan Vandenburgh - 2022 - Mind and Language 38 (4):981-999.
    Causal models show promise as a foundation for the semantics of counterfactual sentences. However, current approaches face limitations compared to the alternative similarity theory: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This article addresses these difficulties using exogenous interventions, where causal interventions change the values of exogenous variables rather than structural equations. This model accommodates judgments about backtracking counterfactuals, extends to logically complex counterfactuals, and validates familiar principles of counterfactual (...)
  5. A Causal Safety Criterion for Knowledge. Jonathan Vandenburgh - forthcoming - Erkenntnis:1-21.
    Safety purports to explain why cases of accidentally true belief are not knowledge, addressing Gettier cases and cases of belief based on statistical evidence. However, problems arise for using safety as a condition on knowledge: safety is not necessary for knowledge and cannot always explain the Gettier cases and cases of statistical evidence it is meant to address. In this paper, I argue for a new modal condition designed to capture the non-accidental relationship between facts and evidence required for knowledge: (...)
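    A toy possible-worlds check of the standard safety condition the abstract discusses (a belief is safe iff in all nearby worlds where the agent believes p, p is true); the worlds are invented for illustration, and the paper argues for a causal replacement rather than this condition:

        # Each world: (p is true?, agent believes p?)
        nearby_worlds = [
            (True, True),    # actual world: true belief
            (False, True),   # nearby world: same belief, but p is false
            (True, False),
        ]

        def safe(worlds):
            # Safety fails if some nearby world has belief in p without p.
            return all(p_true for p_true, believes in worlds if believes)

        print(safe(nearby_worlds))  # False: the true belief is merely accidental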
  6. Learning as Hypothesis Testing: Learning Conditional and Probabilistic Information. Jonathan Vandenburgh - manuscript
    Complex constraints like conditionals ('If A, then B') and probabilistic constraints ('The probability that A is p') pose problems for Bayesian theories of learning. Since these propositions do not express constraints on outcomes, agents cannot simply conditionalize on the new information. Furthermore, a natural extension of conditionalization, relative information minimization, leads to many counterintuitive predictions, evidenced by the sundowners problem and the Judy Benjamin problem. Building on the notion of a 'paradigm shift' and empirical research in psychology and economics, I (...)
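    A numerical sketch of one standard version of the Judy Benjamin problem the abstract cites (the prior and constraint values below are the usual ones from the literature; the code simply exhibits the counterintuitive prediction of relative information minimization):

        # Prior: 1/2 on Blue territory, 1/4 each on Red HQ and Red 2nd company.
        # New constraint: P(HQ | Red) = 3/4. Minimizing KL divergence to the
        # prior subject to that constraint raises P(Blue), even though the
        # report concerned only the Red region.
        import numpy as np

        prior = np.array([0.5, 0.25, 0.25])          # [Blue, Red HQ, Red 2nd]

        def kl(q, p):
            mask = q > 0
            return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

        # Every distribution satisfying the constraint has the form
        # (1 - r, 3r/4, r/4), where r is the total Red mass.
        rs = np.linspace(1e-6, 1 - 1e-6, 100_000)
        candidates = np.stack([1 - rs, 0.75 * rs, 0.25 * rs], axis=1)
        best = candidates[np.argmin([kl(q, prior) for q in candidates])]

        print("P(Blue) after update: %.4f" % best[0])  # ~0.533, up from 0.500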