Citations of:

The Reference Class - 1983 - Philosophy of Science 50 (3):374-397

  • On Uncertainty. Brian Weatherson - 1998 - Dissertation, Monash University.
    This dissertation looks at a set of interconnected questions concerning the foundations of probability, and gives a series of interconnected answers. At its core is a piece of old-fashioned philosophical analysis, working out what probability is. Or equivalently, investigating the semantic question of what is the meaning of ‘probability’? Like Keynes and Carnap, I say that probability is degree of reasonable belief. This immediately raises an epistemological question, which degrees count as reasonable? To solve that in its full generality would (...)
  • Uncertainty, Rationality, and Agency. Wiebe van der Hoek - 2006 - Dordrecht, Netherlands: Springer.
    This volume concerns Rational Agents - humans, players in a game, software or institutions - which must decide the proper next action in an atmosphere of partial information and uncertainty. The book collects formal accounts of Uncertainty, Rationality and Agency, and also of their interaction. It will benefit researchers in artificial systems which must gather information, reason about it and then make a rational decision on which action to take.
  • Kyburg, Levi, and Petersen. Mark Stone - 1987 - Philosophy of Science 54 (2):244-255.
    In this paper I attempt to tie together a longstanding dispute between Henry Kyburg and Isaac Levi concerning statistical inferences. The debate, which centers around the example of Petersen the Swede, concerns Kyburg's and Levi's accounts of randomness and choosing reference classes. I argue that both Kyburg and Levi have missed the real significance of their dispute, that Levi's claim that Kyburg violates Confirmational Conditionalization is insufficient, and that Kyburg has failed to show that Levi's criteria for choosing reference class (...)
  • Default reasoning in semantic networks: A formalization of recognition and inheritance. Lokendra Shastri - 1989 - Artificial Intelligence 39 (3):283-355.
  • A Connectionist Approach to Knowledge Representation and Limited Inference. Lokendra Shastri - 1988 - Cognitive Science 12 (3):331-392.
    Although the connectionist approach has led to elegant solutions to a number of problems in cognitive science and artificial intelligence, its suitability for dealing with problems in knowledge representation and inference has often been questioned. This paper partly answers this criticism by demonstrating that effective solutions to certain problems in knowledge representation and limited inference can be found by adopting a connectionist approach. The paper presents a connectionist realization of semantic networks; that is, it describes how knowledge about concepts, their (...)
  • Model-preference default theories. Bart Selman & Henry A. Kautz - 1990 - Artificial Intelligence 45 (3):287-322.
  • The theory of nomic probability. John L. Pollock - 1992 - Synthese 90 (2):263-299.
    This article sketches a theory of objective probability focusing on nomic probability, which is supposed to be the kind of probability figuring in statistical laws of nature. The theory is based upon a strengthened probability calculus and some epistemological principles that formulate a precise version of the statistical syllogism. It is shown that from this rather minimal basis it is possible to derive theorems comprising (1) a theory of direct inference, and (2) a theory of induction. The theory of induction (...)
  • Oscar. John L. Pollock - 1996 - Journal of Applied Non-Classical Logics 6 (1):89-113.
    In its present incarnation, OSCAR is a fully implemented programmable architecture for a rational agent. If we just focus upon the epistemic reasoning in OSCAR, we have a powerful general-purpose defeasible reasoner. The purpose of this paper is to describe that reasoner. OSCAR's defeasible reasoner is based upon seven fundamental ideas. These are (1) an argument-based account of defeasible reasoning, (2) an analysis of defeat-status given a set of interrelated arguments, (3) a general adequacy criterion for automated defeasible reasoners, called (...)
  • How to reason defeasibly. John L. Pollock - 1992 - Artificial Intelligence 57 (1):1-42.
  • Notes on “a clash of intuitions”. Eric Neufeld - 1991 - Artificial Intelligence 48 (2):225-240.
  • Imitation Game: Threshold or Watershed? Eric Neufeld & Sonje Finnestad - 2020 - Minds and Machines 30 (4):637-657.
    Showing remarkable insight into the relationship between language and thought, Alan Turing in 1950 proposed the Imitation Game as a proxy for the question “Can machines think?” and its meaning and practicality have been debated hotly ever since. The Imitation Game has come under criticism within the Computer Science and Artificial Intelligence communities with leading scientists proposing alternatives, revisions, or even that the Game be abandoned entirely. Yet Turing’s imagined conversational fragments between human and machine are rich with complex instances (...)
  • Weak nonmonotonic probabilistic logics. Thomas Lukasiewicz - 2005 - Artificial Intelligence 168 (1-2):119-161.
  • Decisions with indeterminate probabilities. Ronald P. Loui - 1986 - Theory and Decision 21 (3):283-309.
  • The evidence of your own eyes. Henry E. Kyburg - 1993 - Minds and Machines 3 (2):201-218.
    The evidence of your own eyes has often been regarded as unproblematic. But we know that people make mistaken observations. This can be looked on as unimportant if there is some class of statements that can serve as evidence for others, or if every statement in our corpus of knowledge is allowed to be no more than probable. Neither of these alternatives is plausible when it comes to machine or robotic observation. Then we must take the possibility of error seriously, and (...)
  • A two-level system of knowledge representation based on evidential probability. Henry E. Kyburg - 1991 - Philosophical Studies 64 (1):105-114.
  • Combining probabilistic logic programming with the power of maximum entropy. Gabriele Kern-Isberner & Thomas Lukasiewicz - 2004 - Artificial Intelligence 157 (1-2):139-202.
  • Causal Probability. John L. Pollock - 2002 - Synthese 132 (1/2):143-185.
    Examples growing out of the Newcomb problem have convinced many people that decision theory should proceed in terms of some kind of causal probability. I endorse this view and define and investigate a variety of causal probability. My definition is related to Skyrms' definition, but proceeds in terms of objective probabilities rather than subjective probabilities and avoids taking causal dependence as a primitive concept.
  • A Logic For Inductive Probabilistic Reasoning. Manfred Jaeger - 2005 - Synthese 144 (2):181-248.
    Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system (...)
  • Nonmonotonicity and the scope of reasoning. David W. Etherington, Sarit Kraus & Donald Perlis - 1991 - Artificial Intelligence 52 (3):221-261.
  • Unifying default reasoning and belief revision in a modal framework. Craig Boutilier - 1994 - Artificial Intelligence 68 (1):33-85.
  • From statistical knowledge bases to degrees of belief. Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern & Daphne Koller - 1996 - Artificial Intelligence 87 (1-2):75-143.