10 found
  1. Bayes Not Bust! Why Simplicity Is No Problem for Bayesians. David L. Dowe, Steve Gardner & Graham Oppy - 2007 - British Journal for the Philosophy of Science 58 (4):709-754.
    The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike's Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace's Minimum (...)
    16 citations
  2. Measuring universal intelligence: Towards an anytime intelligence test. José Hernández-Orallo & David L. Dowe - 2010 - Artificial Intelligence 174 (18):1508-1539.
  3. Computer models solving intelligence test problems: Progress and implications. José Hernández-Orallo, Fernando Martínez-Plumed, Ute Schmid, Michael Siebers & David L. Dowe - 2016 - Artificial Intelligence 230 (C):74-107.
  4. Empirical data sets are algorithmically compressible: Reply to McAllister. Charles Twardy, Steve Gardner & David L. Dowe - 2005 - Studies in History and Philosophy of Science Part A 36 (2):391-402.
    James McAllister’s 2003 article “Algorithmic randomness in empirical data” claims that empirical data sets are algorithmically random, and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter in the framework of Minimum Message Length (MML) inference, which shows that the theory which best compresses the data is the one with the highest posterior probability, and the best explanation of the data.
    7 citations
  5. On Potential Cognitive Abilities in the Machine Kingdom. José Hernández-Orallo & David L. Dowe - 2013 - Minds and Machines 23 (2):179-210.
    Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on the environment, training or education. The term ‘potential ability’ usually refers to how quick and likely the process of attaining the ability is. In principle, things should not be different (...)
    1 citation
  6. Simulating exploration versus exploitation in agent foraging under different environment uncertainties. Nader Chmait, David L. Dowe, David G. Green & Yuan-Fang Li - 2019 - Behavioral and Brain Sciences 42.
  7. Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers From the Ray Solomonoff 85th Memorial Conference, Melbourne, Vic, Australia, November 30 – December 2, 2011. David L. Dowe (ed.) - 2013 - Springer.
    Algorithmic probability and friends: Proceedings of the Ray Solomonoff 85th memorial conference is a collection of original work and surveys. The Solomonoff 85th memorial conference was held at Monash University's Clayton campus in Melbourne, Australia as a tribute to the pioneer Ray Solomonoff, honouring his various pioneering works - most particularly, his revolutionary insight in the early 1960s that the universality of Universal Turing Machines could be used for universal Bayesian prediction and artificial intelligence. This work continues to increasingly influence and (...)
  8. Minimum message length and statistically consistent invariant (objective?) Bayesian probabilistic inference—from (medical) “evidence”. David L. Dowe - 2008 - Social Epistemology 22 (4):433-460.
    “Evidence” in the form of data collected and analysis thereof is fundamental to medicine, health and science. In this paper, we discuss the “evidence-based” aspect of evidence-based medicine in terms of statistical inference, acknowledging that this latter field of statistical inference often also goes by various near-synonymous names—such as inductive inference (amongst philosophers), econometrics (amongst economists), machine learning (amongst computer scientists) and, in more recent times, data mining (in some circles). Three central issues to this discussion of “evidence-based” are (i) (...)
    1 citation
  9. Kinship, optimality, and typology. Simon Musgrave & David L. Dowe - 2010 - Behavioral and Brain Sciences 33 (5):397-398.
    Jones uses a mechanism from linguistic theory, Optimality Theory, to generate the range of kin systems observed in human cultures and human languages. The observed distribution of kinship systems across human societies suggests that some possibilities are preferred over others, a result that would indicate Jones' model needs to be refined, especially in its treatment of markedness.
  10. Information, Statistics, and Induction in Science: Proceedings of the Conference ISIS '96, Melbourne, Australia, 20-23 August 1996. David L. Dowe, Kevin B. Korb & Jonathan J. Oliver - 1996