  • Probability and Randomness. Antony Eagle - 2016 - In Alan Hájek & Christopher Hitchcock (eds.), The Oxford Handbook of Probability and Philosophy. Oxford, U.K.: Oxford University Press. pp. 440-459.
    Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a (...)
  • An Evolutionary Argument for a Self-Explanatory, Benevolent Metaphysics. Ward Blondé - 2015 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 2 (2):143-166.
    In this paper, a metaphysics is proposed that includes everything that can be represented by a well-founded multiset. It is shown that this metaphysics, apart from being self-explanatory, is also benevolent. Paradoxically, it turns out that the probability that we were born in another life than our own is zero. More insights are gained by inducing properties from a metaphysics that is not self-explanatory. In particular, digital metaphysics is analyzed, which claims that only computable things exist. First of all, it (...)
  • Philosophy of Science and Information. Ioannis Votsis - 2016 - In Luciano Floridi (ed.), The Routledge Handbook of Philosophy of Information. Routledge.
    Of all the sub-disciplines of philosophy, the philosophy of science has perhaps the most privileged relationship to information theory. This relationship has been forged through a common interest in themes like induction, probability, confirmation, simplicity, non-ad hocness, unification and, more generally, ontology. It also has historical roots. One of the founders of algorithmic information theory, Ray Solomonoff, produced his seminal work on inductive inference as a direct result of grappling with problems first encountered as a student of the influential philosopher (...)
  • The Structure of Epistemic Probabilities. Nevin Climenhaga - 2020 - Philosophical Studies 177 (11):3213-3242.
    The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object’s weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are (...)
  • Replacing Causal Faithfulness with Algorithmic Independence of Conditionals. Jan Lemeire & Dominik Janzing - 2013 - Minds and Machines 23 (2):227-249.
    Independence of Conditionals (IC) has recently been proposed as a basic rule for causal structure learning. If a Bayesian network represents the causal structure, its Conditional Probability Distributions (CPDs) should be algorithmically independent. In this paper we compare IC with causal faithfulness (FF), stating that only those conditional independences that are implied by the causal Markov condition hold true. The latter is a basic postulate in common approaches to causal structure learning. The common spirit of FF and IC is to (...)
  • Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine. Tom Sterkenburg - 2019 - Erkenntnis 84 (3):633-656.
    Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is (...)
  • Does Science Presuppose Naturalism? Yonatan I. Fishman & Maarten Boudry - 2013 - Science & Education 22 (5):921-949.
  • The Dualist’s Dilemma: The High Cost of Reconciling Neuroscience with a Soul. Keith Augustine & Yonatan I. Fishman - 2015 - In Keith Augustine & Michael Martin (eds.), The Myth of an Afterlife: The Case against Life After Death. Rowman & Littlefield. pp. 203-292.
    Tight correlations between mental states and brain states have been observed time and again within the ethology of biologically ingrained animal behaviors, the comparative psychology of animal minds, the evolutionary psychology of mental adaptations, the behavioral genetics of inherited mental traits, the developmental psychology of the maturing mind, the psychopharmacology of mind-altering substances, and cognitive neuroscience more generally. They imply that our mental lives are only made possible because of brain activity—that having a functioning brain is a necessary condition for (...)
  • Information. Pieter Adriaans - 2012 - Stanford Encyclopedia of Philosophy.
  • Probabilities on Sentences in an Expressive Logic. Marcus Hutter, John W. Lloyd, Kee Siong Ng & William T. B. Uther - 2013 - Journal of Applied Logic 11 (4):386-420.
    Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values. The main technical problem studied in this paper is the following: Given a set of sentences, each having some probability of being (...)