About this topic
Summary Algorithmic complexity, also known as algorithmic information theory or Kolmogorov complexity, refers to the branch of theoretical computer science that originates in the idea of defining the complexity, or information content, of an object in terms of the length of its shortest description on a universal Turing machine. This idea forms the basis for a theory of algorithmic randomness as well as a theory of inductive inference. (The area should not be confused with computational complexity theory, which studies the computational efficiency of algorithms.)
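In the notation of Li & Vitányi 2008, the central definition and the invariance theorem that makes it machine-independent up to a constant can be stated as follows (a standard textbook formulation, included here for orientation):

```latex
% Kolmogorov complexity of a string x relative to a universal Turing machine U:
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}

% Invariance theorem: for any two universal machines U and V there is a
% constant c_{U,V}, independent of x, with
|K_U(x) - K_V(x)| \le c_{U,V}
```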
Introductions The standard textbook on Kolmogorov complexity is Li & Vitányi 2008. More specialized textbooks on algorithmic randomness are Nies 2009 and Downey & Hirschfeldt 2010. A chapter-length introduction for philosophers is Dasgupta 2011.

Contents
47 found
  1. Algorithmic Randomness and Probabilistic Laws.Jeffrey A. Barrett & Eddy Keming Chen - manuscript
    We consider two ways one might use algorithmic randomness to characterize a probabilistic law. The first is a generative chance* law. Such laws involve a nonstandard notion of chance. The second is a probabilistic* constraining law. Such laws impose relative frequency and randomness constraints that every physically possible world must satisfy. While each notion has virtues, we argue that the latter has advantages over the former. It supports a unified governing account of non-Humean laws and provides independently motivated solutions to (...)
  2. The Substrate-Prior of Consciousness.Gabriel Leuenberger
    Given functionally equivalent minds, how does the expected quantity of their conscious experience differ across different substrates, and how could we calculate this? We argue that a realistic digital brain emulation would be orders of magnitude less conscious than a real biological brain. On the other hand, a mind running on neuromorphic hardware or a quantum computer could in principle be more conscious than a biological brain.
  3. Counterfactual Analysis by Algorithmic Complexity: A Metric Between Possible Worlds.Nicholas Corrêa & Nythamar Fernandes de Oliveira - forthcoming - Manuscrito 45 (4):1-35.
    Counterfactuals have become an important area of interdisciplinary interest, especially in logic, philosophy of language, epistemology, metaphysics, psychology, decision theory, and even artificial intelligence. In this study, we propose a new form of analysis for counterfactuals: analysis by algorithmic complexity. Inspired by Lewis-Stalnaker's Possible Worlds Semantics, the proposed method allows for a new interpretation of the debate between David Lewis and Robert Stalnaker regarding the Limit and Singularity assumptions. Besides other results, we offer a new way to answer the problems (...)
  4. A note on the learning-theoretic characterizations of randomness and convergence.Tomasz Steifer - forthcoming - Review of Symbolic Logic:1-15.
    Recently, a connection has been established between two branches of computability theory, namely between algorithmic randomness and algorithmic learning theory. Learning-theoretical characterizations of several notions of randomness were discovered. We study such characterizations based on the asymptotic density of positive answers. In particular, this note provides a new learning-theoretic definition of weak 2-randomness, solving the problem posed by (Zaffora Blando, Rev. Symb. Log. 2019). The note also highlights the close connection between these characterizations and the problem of convergence on random (...)
  5. A Dilemma for Solomonoff Prediction.Sven Neth - 2023 - Philosophy of Science 90 (2):288-306.
    The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable (...)
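    For reference, the Solomonoff prior under discussion is standardly defined via a universal prefix machine U (a textbook formulation, not specific to this paper):

```latex
% Algorithmic probability / Solomonoff prior of a string x:
m_U(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
% Both problems the entry names are visible here: m_U depends on the
% choice of U, and m_U is only lower semicomputable, not computable.
```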
  6. Multiple patterns, multiple explanations.Steve Petersen - 2023 - In Jonah N. Schupbach & David H. Glass (eds.), Conjunctive Explanations: The Nature, Epistemology, and Psychology of Explanatory Multiplicity. New York, NY: Routledge. pp. 38-48.
    A "patternist" approach to explanation seeks to formalize unificationism using notions from algorithmic information theory. Among other advantages, this account provides both a rigorous sense of how data can admit multiple explanations, and a rigorous sense of how some of those explanations can conjoin, while others compete.
  7. The Equivalence of Definitions of Algorithmic Randomness.Christopher Porter - 2021 - Philosophia Mathematica 29 (2):153–194.
    In this paper, I evaluate the claim that the equivalence of multiple intensionally distinct definitions of random sequence provides evidence for the claim that these definitions capture the intuitive conception of randomness, concluding that the former claim is false. I then develop an alternative account of the significance of randomness-theoretic equivalence results, arguing that they are instances of a phenomenon I refer to as schematic equivalence. On my account, this alternative approach has the virtue of providing the plurality of definitions (...)
  8. Real patterns and indispensability.Abel Suñé & Manolo Martínez - 2021 - Synthese 198 (5):4315-4330.
    While scientific inquiry crucially relies on the extraction of patterns from data, we still have a far from perfect understanding of the metaphysics of patterns—and, in particular, of what makes a pattern real. In this paper we derive a criterion of real-patternhood from the notion of conditional Kolmogorov complexity. The resulting account belongs to the philosophical tradition, initiated by Dennett (Journal of Philosophy 88:27–51, 1991), that links real-patternhood to data compressibility, but is simpler and formally more perspicuous than other proposals previously defended in (...)
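    Because Kolmogorov complexity is uncomputable, any hands-on illustration of the compressibility criterion has to use a real compressor as a proxy. A minimal Python sketch, assuming zlib as the stand-in compressor (the strings and sizes are illustrative choices, not from the paper):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable
    upper bound on the Kolmogorov complexity of the data."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 5000  # highly regular: a candidate "real pattern"
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(10000))  # patternless

# A large gap between raw and compressed length signals compressibility,
# the Dennett-style mark of real-patternhood; noise shows almost no gap.
for label, s in [("patterned", patterned), ("noise", noise)]:
    print(f"{label}: raw={len(s)}, compressed={compressed_size(s)}")
```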
  9. Strengthening Weak Emergence.Nora Berenstain - 2020 - Erkenntnis 87 (5):2457-2474.
    Bedau's influential (1997) account analyzes weak emergence in terms of the non-derivability of a system’s macrostates from its microstates except by simulation. I offer an improved version of Bedau’s account of weak emergence in light of insights from information theory. Non-derivability alone does not guarantee that a system’s macrostates are weakly emergent. Rather, it is non-derivability plus the algorithmic compressibility of the system’s macrostates that makes them weakly emergent. I argue that the resulting information-theoretic picture provides a metaphysical account of (...)
  10. Effective Complexity: In Which Sense is It Informative?Esteban Céspedes & Miguel Fuentes - 2020 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 51 (3):359-374.
    This work responds to a criticism of effective complexity made by James McAllister, according to which such a notion is not an appropriate measure for information content. Roughly, effective complexity is focused on the regularities of the data rather than on the whole data, as opposed to algorithmic complexity. McAllister’s argument shows that, because the set of relevant regularities for a given object is not unique, one cannot assign unique values of effective complexity to considered expressions and, therefore, that algorithmic (...)
  11. Law without law: From observer states to physics via algorithmic information theory.Markus P. Müller - 2020 - Quantum 4:301.
    According to our current conception of physics, any valid physical theory is supposed to describe the objective evolution of a unique external world. However, this condition is challenged by quantum theory, which suggests that physical systems should not always be understood as having objective properties which are simply revealed by measurement. Furthermore, as argued below, several other conceptual puzzles in the foundations of physics and related fields point to limitations of our current perspective and motivate the exploration of an alternative: (...)
  12. Intuition, intelligence, data compression.Jens Kipper - 2019 - Synthese 198 (Suppl 27):6469-6489.
    The main goal of my paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely (...)
  13. A Simplicity Criterion for Physical Computation.Tyler Millhouse - 2019 - British Journal for the Philosophy of Science 70 (1):153-178.
    The aim of this paper is to offer a formal criterion for physical computation that allows us to objectively distinguish between competing computational interpretations of a physical system. The criterion construes a computational interpretation as an ordered pair of functions mapping (1) states of a physical system to states of an abstract machine, and (2) inputs to this machine to interventions in this physical system. This interpretation must ensure that counterfactuals true of the abstract machine have appropriate counterparts which are (...)
  14. Composition as pattern.Steve Petersen - 2019 - Philosophical Studies 176 (5):1119-1139.
    I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a “real pattern”. Plausibly we are inclined to acknowledge the existence of my cat and my table but not of (...)
  15. Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine.Tom F. Sterkenburg - 2019 - Erkenntnis 84 (3):633-656.
    Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is (...)
  16. Universal Prediction: A Philosophical Investigation.Tom F. Sterkenburg - 2018 - Dissertation, University of Groningen
    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized (...)
  17. A generalized characterization of algorithmic probability.Tom F. Sterkenburg - 2017 - Theory of Computing Systems 61 (4):1337-1352.
    An a priori semimeasure (also known as “algorithmic probability” or “the Solomonoff prior” in the context of inductive inference) is defined as the transformation, by a given universal monotone Turing machine, of the uniform measure on the infinite strings. It is shown in this paper that the class of a priori semimeasures can equivalently be defined as the class of transformations, by all compatible universal monotone Turing machines, of any continuous computable measure in place of the uniform measure. Some consideration (...)
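    Spelled out, the definition the abstract compresses: feed a universal monotone machine U a stream of uniformly random bits and ask how likely the output is to begin with a given string (standard notation, with ⪰ read as "extends"):

```latex
% A priori semimeasure induced by a universal monotone machine U,
% where \lambda is the uniform (fair-coin) measure on infinite bit streams:
M_U(x) = \lambda \{\, \omega \in \{0,1\}^{\infty} \;:\; U(\omega) \succeq x \,\}
% The paper's result: the same class of semimeasures arises when \lambda is
% replaced by any continuous computable measure (with compatible machines).
```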
  18. Probability and Randomness.Antony Eagle - 2016 - In Alan Hájek & Christopher Hitchcock (eds.), The Oxford Handbook of Probability and Philosophy. Oxford: Oxford University Press. pp. 440-459.
    Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a (...)
  19. On analogues of the Church–Turing thesis in algorithmic randomness.Christopher P. Porter - 2016 - Review of Symbolic Logic 9 (3):456-479.
  20. Solomonoff Prediction and Occam’s Razor.Tom F. Sterkenburg - 2016 - Philosophy of Science 83 (4):459-479.
    Algorithmic information theory gives an idealized notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam’s razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. (...)
  21. Philosophy of Science and Information.Ioannis Votsis - 2016 - In Luciano Floridi (ed.), The Routledge Handbook of Philosophy of Information. Routledge.
    Of all the sub-disciplines of philosophy, the philosophy of science has perhaps the most privileged relationship to information theory. This relationship has been forged through a common interest in themes like induction, probability, confirmation, simplicity, non-ad hocness, unification and, more generally, ontology. It also has historical roots. One of the founders of algorithmic information theory, Ray Solomonoff, produced his seminal work on inductive inference as a direct result of grappling with problems first encountered as a student of the influential philosopher (...)
  22. God is Random: A Novel Argument for the Existence of God.Serkan Zorba - 2016 - European Journal of Science and Theology 12 (1):51-67.
    Applying the concepts of Kolmogorov-Chaitin complexity and Turing’s uncomputability from the computability and algorithmic information theories to the irreducible and incomputable randomness of quantum mechanics, a novel argument for the existence of God is presented. Concepts of ‘transintelligence’ and ‘transcausality’ are introduced, and from them, it is posited that our universe must be epistemologically and ontologically an open universe. The proposed idea also proffers a new perspective on the nonlocal nature and the infamous wave-function-collapse problem of quantum mechanics.
  23. An algorithmic information theory challenge to intelligent design.Sean Devine - 2014 - Zygon 49 (1):42-65.
    William Dembski claims to have established a decision process to determine when highly unlikely events observed in the natural world are due to Intelligent Design. This article argues that, as no implementable randomness test is superior to a universal Martin-Löf test, this test should be used to replace Dembski's decision process. Furthermore, Dembski's decision process is flawed, as natural explanations are eliminated before chance. Dembski also introduces a fourth law of thermodynamics, his “law of conservation of information,” to argue that (...)
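    For readers unfamiliar with the machinery invoked against Dembski, the standard definition of a Martin-Löf test:

```latex
% A Martin-Löf test is a uniformly effectively open sequence (U_n)_{n \ge 1}
% of subsets of Cantor space with
\lambda(U_n) \le 2^{-n} \quad \text{for all } n.
% A sequence X passes the test if X \notin \bigcap_n U_n, and X is
% Martin-Löf random iff it passes every such test; a single universal
% test subsumes all the others, which is the test Devine appeals to.
```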
  24. The Semimeasure Property of Algorithmic Probability -- “Feature” or “Bug”?Douglas Campbell - 2013 - In David L. Dowe (ed.), Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers From the Ray Solomonoff 85th Memorial Conference, Melbourne, Vic, Australia, November 30 -- December 2, 2011. Springer. pp. 79--90.
    An unknown process is generating a sequence of symbols, drawn from an alphabet, A. Given an initial segment of the sequence, how can one predict the next symbol? Ray Solomonoff’s theory of inductive reasoning rests on the idea that a useful estimate of a sequence’s true probability of being outputted by the unknown process is provided by its algorithmic probability (its probability of being outputted by a species of probabilistic Turing machine). However algorithmic probability is a “semimeasure”: i.e., the sum, (...)
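    The "semimeasure" property flagged in the title: because some programs never halt (or never extend their output), algorithmic probability leaks mass, and the additivity of a probability measure degrades to an inequality (standard statement):

```latex
% For a continuous semimeasure m over strings from alphabet A:
\sum_{a \in A} m(xa) \le m(x), \qquad m(\epsilon) \le 1
% A proper probability measure would satisfy both with equality; the
% deficit is the probability that the machine outputs no next symbol.
```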
  25. What is a complex system?James Ladyman, James Lambert & Karoline Wiesner - 2013 - European Journal for Philosophy of Science 3 (1):33-67.
    Complex systems research is becoming ever more important in both the natural and social sciences. It is commonly implied that there is such a thing as a complex system, different examples of which are studied across many disciplines. However, there is no concise definition of a complex system, let alone a definition on which all scientists agree. We review various attempts to characterize a complex system, and consider a core set of features that are widely associated with complex systems in (...)
  26. Toward an algorithmic metaphysics.Steve Petersen - 2013 - In David L. Dowe (ed.), Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers From the Ray Solomonoff 85th Memorial Conference, Melbourne, Vic, Australia, November 30 -- December 2, 2011. Springer. pp. 306-317.
    There are writers in both metaphysics and algorithmic information theory (AIT) who seem to think that the latter could provide a formal theory of the former. This paper is intended as a step in that direction. It demonstrates how AIT might be used to define basic metaphysical notions such as *object* and *property* for a simple, idealized world. The extent to which these definitions capture intuitions about the metaphysics of the simple world, times the extent to which we think the (...)
  27. Mathematical foundations of randomness.Abhijit Dasgupta - 2011 - In Prasanta S. Bandyopadhyay & Malcolm Forster (eds.), Handbook of the Philosophy of Science, Vol. 7: Philosophy of Statistics. Elsevier. pp. 641-710.
    This chapter introduces the definitions of the notion of a random sequence using the three main ideas that have dominated algorithmic randomness: unpredictability, typicality, and incompressibility. It sets up notation for strings, sequences, and the Cantor space, which forms the basic framework for carrying out further discussions, and contains a brief and informal introduction to Lebesgue measure on the unit interval and the Cantor space, which one treats (...)
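    The chapter's three approaches famously converge on the same class of sequences; the Levin-Schnorr theorem is the bridge between typicality and incompressibility (standard statement):

```latex
% Levin-Schnorr theorem: an infinite binary sequence X is Martin-Löf random
% if and only if its finite prefixes are incompressible up to a constant:
\exists c \; \forall n \quad K(X \upharpoonright n) \ge n - c
% Here K is prefix-free Kolmogorov complexity and X \upharpoonright n is
% the length-n prefix of X.
```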
  28. A Philosophical Treatise of Universal Induction.Samuel Rathmanner & Marcus Hutter - 2011 - Entropy 13 (6):1076-1136.
    Understanding inductive reasoning is a problem that has engaged mankind for thousands of years. This problem is relevant to a wide range of fields and is integral to the philosophy of science. It has been tackled by many great minds ranging from philosophers to scientists to mathematicians, and more recently computer scientists. In this article we argue the case for Solomonoff Induction, a formal inductive framework which combines algorithmic information theory with the Bayesian framework. Although it achieves excellent theoretical results (...)
  29. Randomness Through Computation: Some Answers, More Questions.Hector Zenil (ed.) - 2011 - World Scientific.
    The book is intended to explain the larger and intuitive concept of randomness by means of computation, particularly through algorithmic complexity and recursion theory. It also includes the transcriptions (by A. German) of two panel discussions on the topics: Is The Universe Random?, held at the University of Vermont in 2007; and What is Computation? (How) Does Nature Compute?, held at the University of Indiana Bloomington in 2008. The book is intended for the general public, undergraduate and graduate students in (...)
  30. Chance versus Randomness.Antony Eagle - 2010 - Stanford Encyclopedia of Philosophy.
    This article explores the connection between objective chance and the randomness of a sequence of outcomes. Discussion is focussed around the claim that something happens by chance iff it is random. This claim is subject to many objections. Attempts to save it by providing alternative theories of chance and randomness, involving indeterminism, unpredictability, and reductionism about chance, are canvassed. The article is largely expository, with particular attention being paid to the details of algorithmic randomness, a topic relatively unfamiliar to philosophers.
  31. Universal intelligence: A definition of machine intelligence.Shane Legg & Marcus Hutter - 2007 - Minds and Machines 17 (4):391-444.
    A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: we take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. (...)
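    The resulting measure weights the agent's performance in each computable environment by that environment's simplicity; in the paper's notation, roughly:

```latex
% Universal intelligence of an agent \pi (Legg & Hutter 2007):
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
% E: a class of computable environments; K(\mu): the Kolmogorov
% complexity of environment \mu; V_\mu^\pi: the expected total reward
% the agent \pi achieves in \mu.
```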
  32. Model selection and the multiplicity of patterns in empirical data.James W. McAllister - 2007 - Philosophy of Science 74 (5):884-894.
    Several quantitative techniques for choosing among data models are available. Among these are techniques based on algorithmic information theory, minimum description length theory, and the Akaike information criterion. All these techniques are designed to identify a single model of a data set as being the closest to the truth. I argue, using examples, that many data sets in science show multiple patterns, providing evidence for multiple phenomena. For any such data set, there is more than one data model that must (...)
  33. The application of algorithmic information theory to noisy patterned strings.Sean Devine - 2006 - Complexity 12 (2):52-58.
    Although algorithmic information theory provides a measure of the information content of a string of characters, problems of noise and noncomputability emerge. However, if the pattern in a noisy string is recognized by reference to a set of similar strings, this article shows that a compressed algorithmic description of a noisy string is possible, and illustrates this with some simple examples. The article also shows that algorithmic information theory can quantify the information in complex organized systems where pattern is nested within pattern.
  34. Randomness Is Unpredictability.Antony Eagle - 2005 - British Journal for the Philosophy of Science 56 (4):749-790.
    The concept of randomness has been unjustly neglected in recent philosophical literature, and when philosophers have thought about it, they have usually acquiesced in views about the concept that are fundamentally flawed. After indicating the ways in which these accounts are flawed, I propose that randomness is to be understood as a special case of the epistemic concept of the unpredictability of a process. This proposal arguably captures the intuitive desiderata for the concept of randomness; at least it should suggest (...)
  35. Algorithmic compression of empirical data: reply to Twardy, Gardner, and Dowe.James McAllister - 2005 - Studies in History and Philosophy of Science Part A 36 (2):403-410.
    This discussion note responds to objections by Twardy, Gardner, and Dowe to my earlier claim that empirical data sets are algorithmically incompressible. Twardy, Gardner, and Dowe hold that many empirical data sets are compressible by the Minimum Message Length technique and offer this as evidence that these data sets are algorithmically compressible. I reply that the compression achieved by the Minimum Message Length technique is different from algorithmic compression. I conclude that Twardy, Gardner, and Dowe fail to establish that empirical data sets (...)
  36. Empirical data sets are algorithmically compressible: Reply to McAllister.Charles Twardy, Steve Gardner & David L. Dowe - 2005 - Studies in History and Philosophy of Science Part A 36 (2):391-402.
    James McAllister’s 2003 article, “Algorithmic randomness in empirical data” claims that empirical data sets are algorithmically random, and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter in the framework of Minimum Message Length (MML) inference, which shows that the theory which best compresses the data is the one with highest posterior probability, and the best explanation of the data.
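    The MML relationship the reply turns on can be stated in one line: an MML code sends the hypothesis first, then the data encoded with its help, so minimizing total message length maximizes the posterior (standard two-part form):

```latex
% Two-part message length for data D under hypothesis H:
L(H, D) = -\log_2 P(H) \;-\; \log_2 P(D \mid H)
% Since P(H)\,P(D \mid H) = P(H \mid D)\,P(D) and P(D) is fixed by the data,
% the hypothesis minimizing L(H, D) is the one maximizing P(H \mid D).
```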
  37. On Some Scientific Modalities: Propensities, Randomness and Causation.Antony Eagle - 2004 - Dissertation, Princeton University
    The essays that constitute this dissertation explore three strategies for understanding the role of modality in philosophical accounts of propensities, randomness, and causation. In Chapter 1, I discuss how the following essays are to be considered as illuminating the prospects for these strategies, which I call reductive essentialism, subjectivism and pragmatism. The discussion is framed within a survey of approaches to modality more broadly construed. In Chapter 2, I argue that any broadly dispositional analysis of probability as a physical property (...)
  38. Algorithmic randomness in empirical data.James W. McAllister - 2003 - Studies in History and Philosophy of Science Part A 34 (3):633-646.
    According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers (...)
  39. Effective complexity as a measure of information content.James W. McAllister - 2003 - Philosophy of Science 70 (2):302-307.
    Murray Gell-Mann has proposed the concept of effective complexity as a measure of information content. The effective complexity of a string of digits is defined as the algorithmic complexity of the regular component of the string. This paper argues that the effective complexity of a given string is not uniquely determined. The effective complexity of a string admitting a physical interpretation, such as an empirical data set, depends on the cognitive and practical interests of investigators. The effective complexity of a (...)
  40. Algorithmic information theory and undecidability.Panu Raatikainen - 2000 - Synthese 123 (2):217-225.
    Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.
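    The halting probability at issue has a one-line definition (standard, for a universal prefix machine U):

```latex
% Chaitin's halting probability:
\Omega_U = \sum_{p \,:\, U(p) \text{ halts}} 2^{-|p|}
% \Omega_U is an algorithmically random real in (0,1): no consistent,
% effectively axiomatized theory can determine more than finitely many
% of its bits. It is the advertised strength of this result that
% Raatikainen disputes.
```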
  41. On interpreting Chaitin's incompleteness theorem.Panu Raatikainen - 1998 - Journal of Philosophical Logic 27 (6):569-586.
    The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of (...)
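    For reference, the theorem whose received interpretation the paper questions, in its usual form:

```latex
% Chaitin's incompleteness theorem: for every consistent, effectively
% axiomatized theory T interpreting enough arithmetic there is a
% constant c_T such that T proves no true statement of the form
K(n) > c_T
% for any particular n, even though all but finitely many n satisfy it.
```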
  42. Algorithmic information theory.Michiel van Lambalgen - 1989 - Journal of Symbolic Logic 54 (4):1389-1400.
    We present a critical discussion of the claim (most forcefully propounded by Chaitin) that algorithmic information theory sheds new light on Gödel's first incompleteness theorem.
  43. Von Mises' definition of random sequences reconsidered.Michiel van Lambalgen - 1987 - Journal of Symbolic Logic 52 (3):725-755.
    We review briefly the attempts to define random sequences. These attempts suggest two theorems: one concerning the number of subsequence selection procedures that transform a random sequence into a random sequence; the other concerning the relationship between definitions of randomness based on subsequence selection and those based on statistical tests.
  44. On the Measure of Intelligence.François Chollet - manuscript
    To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, (...)
  45. Implications of computer science theory for the simulation hypothesis.David Wolpert - manuscript
    The simulation hypothesis has recently excited renewed interest, especially in the physics and philosophy communities. However, the hypothesis specifically concerns computers that simulate physical universes, which means that to properly investigate it we need to couple computer science theory with physics. Here I do this by exploiting the physical Church-Turing thesis. This allows me to introduce a preliminary investigation of some of the computer science theoretic aspects of the simulation hypothesis. In particular, building on Kleene's second recursion theorem, I prove (...)
  46. Towards a stable definition of program-size complexity.Hector Zenil - unknown
    We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin's universal distribution to identify evidence in support of or in contravention of the claim that the world is algorithmic in nature. To this end statistical comparisons are undertaken of the frequency distributions of data from physical sources--repositories of information such as images, data stored in a hard drive, computer programs and DNA sequences--and the output frequency distributions generated by purely algorithmic means--by running abstract (...)
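    The theoretical link that lets output-frequency data speak to program-size complexity is Levin's coding theorem (standard statement):

```latex
% Coding theorem (Levin): prefix complexity and the universal
% distribution m agree up to an additive constant:
K(s) = -\log_2 m(s) + O(1)
% So an empirical estimate of the output-frequency distribution m(s)
% yields an indirect estimate of K(s), the strategy described above.
```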
  47. On the Kolmogorov-Chaitin complexity for short sequences.Hector Zenil - unknown
    This is a presentation about joint work between Hector Zenil and Jean-Paul Delahaye. Zenil presents Experimental Algorithmic Theory as Algorithmic Information Theory and NKS, put together in a mixer. Algorithmic Complexity Theory defines the algorithmic complexity k(s) as the length of the shortest program that produces s. But since finding this short program is in general an undecidable question, the only way to approach k(s) is to use compression algorithms. He shows how to use the Compress function in Mathematica to (...)
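    The talk's point that compression both approximates k(s) and fails for short sequences is easy to reproduce. A Python sketch with zlib standing in for Mathematica's Compress (an illustrative substitution, not the speakers' code):

```python
import zlib

def k_upper_bound(s: bytes) -> int:
    """Compressed length as a computable upper bound on k(s)."""
    return len(zlib.compress(s, 9))

# For long regular strings the bound is informative:
print(len(b"01" * 4000), k_upper_bound(b"01" * 4000))  # 8000 vs a few dozen bytes

# For short strings, compressor overhead swamps any real difference in
# complexity, which is why short sequences need a different method:
for s in [b"0000", b"0101", b"0110", b"1011"]:
    print(s, k_upper_bound(s))  # near-identical sizes for all four
```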