Determining Maximal Entropy Functions for Objective Bayesian Inductive Logic

Journal of Philosophical Logic 52 (2):555-608 (2022)

Abstract

According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from among those that satisfy the premisses, satisfies the conclusion. When premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, however, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. First, we introduce the concept of a limit in entropy and show that, if the set of probability functions satisfying the premisses contains a limit in entropy, then this limit point is unique and is the maximal entropy probability function. Next, we turn to the special case in which the premisses are categorical sentences of the logical language. We show that if the uniform probability function gives the premisses positive probability, then the maximal entropy function can be found simply by conditionalising this uniform prior on the premisses. We generalise our results to demonstrate agreement between the maximal entropy approach and Jeffrey conditionalisation in the case in which there is a single premiss that specifies the probability of a sentence of the language. We show that, after learning such a premiss, certain inferences are preserved, namely inferences to inductive tautologies. Finally, we consider potential pathologies of the approach: we explore the extent to which the maximal entropy approach is invariant under permutations of the constants of the language, and we discuss some cases in which there is no maximal entropy probability function.
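
The central constructions in the abstract can be made concrete on a toy finite example. The following is a minimal sketch, not from the paper: it assumes a finite sublanguage with two atomic sentences, so that probability functions reduce to distributions over four state descriptions, and all names in the code are illustrative. It checks two of the claims above on this finite space: conditionalising the uniform prior on a categorical premiss of positive uniform probability yields the maximal entropy function satisfying that premiss, and Jeffrey conditionalisation on a single probabilistic premiss (here P(A) = 0.8) agrees with maximising entropy under that constraint.

```python
# Minimal sketch on a finite state space; illustrative only, not the paper's code.
import itertools
import math

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i, with 0 log 0 = 0."""
    return -sum(x * math.log(x) for x in p.values() if x > 0)

# State descriptions for two atomic sentences A and B: four truth-value pairs.
states = list(itertools.product([True, False], repeat=2))
uniform = {s: 1 / len(states) for s in states}

def conditionalise(prior, event):
    """Bayesian conditionalisation of `prior` on `event` (a set of states)."""
    z = sum(p for s, p in prior.items() if s in event)
    assert z > 0, "the premiss must receive positive prior probability"
    return {s: (p / z if s in event else 0.0) for s, p in prior.items()}

def jeffrey(prior, event, q):
    """Jeffrey conditionalisation: fix P(event) = q and rescale each cell."""
    z = sum(p for s, p in prior.items() if s in event)
    assert 0 < z < 1
    return {s: (p * q / z if s in event else p * (1 - q) / (1 - z))
            for s, p in prior.items()}

# The categorical premiss "A": the states in which A is true.
A = {s for s in states if s[0]}

post = conditionalise(uniform, A)   # maximal entropy function given the premiss "A"
jpost = jeffrey(uniform, A, 0.8)    # maximal entropy function given P(A) = 0.8

print(round(entropy(uniform), 3))   # 1.386 = log 4
print(round(entropy(post), 3))      # 0.693 = log 2
print(round(entropy(jpost), 3))     # 1.194
```

In each case the resulting distribution is uniform within the cells that the constraint distinguishes, which is the familiar finite-space form of entropy maximisation; the paper's contribution is to handle the infinite, first-order case, where such functions may have to be obtained as limits in entropy.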

Links

PhilArchive

Similar books and articles

Inductive influence. Jon Williamson - 2007 - British Journal for the Philosophy of Science 58 (4):689-708.
Bayesianism and language change. Jon Williamson - 2003 - Journal of Logic, Language and Information 12 (1):53-97.
The Maximal Closed Classes of Unary Functions in p‐Valued Logic. Liu Renren & Lo Czukai - 1996 - Mathematical Logic Quarterly 42 (1):234-240.
Bayesian model learning based on predictive entropy. Jukka Corander & Pekka Marttinen - 2006 - Journal of Logic, Language and Information 15 (1-2):5-20.

Analytics

Added to PP: 2022-10-13
Downloads: 28 (#565,245)
Last 6 months: 19 (#133,545)


Author Profiles

Soroush Rafiee Rad
Tilburg University
Jürgen Landes
Università degli Studi di Milano
Jon Williamson
University of Kent

Citations of this work

Formal Epistemology Meets Mechanism Design. Jürgen Landes - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2):215-231.

