Algorithmic randomness in empirical data

Studies in History and Philosophy of Science Part A 34 (3):633-646 (2003)


According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, on algorithmic information theory, a string is algorithmically random just in case it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain amount of noise.
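The contrast the abstract draws can be illustrated with a small sketch using a practical lossless compressor. Kolmogorov complexity itself is uncomputable, so this is only a rough proxy: `zlib` stands in for an ideal algorithmic compressor, a repeated byte pattern stands in for law-like regularity, and `os.urandom` output stands in for perturbation-dominated data. The function name `compression_ratio` and the sample sizes are illustrative choices, not anything from the article.

```python
import os
import zlib


def compression_ratio(data: bytes) -> float:
    """Ratio of compressed size to original size (lower = more compressible)."""
    return len(zlib.compress(data, level=9)) / len(data)


# A highly patterned "data set": a simple repeated sequence,
# standing in for data fully determined by a law.
patterned = bytes(range(256)) * 40  # 10,240 bytes of pure pattern

# A "perturbed" data set: random bytes, standing in for data whose
# individual points reflect patternless causal factors.
noisy = os.urandom(10_240)

print(f"patterned ratio: {compression_ratio(patterned):.3f}")  # far below 1
print(f"noisy ratio:     {compression_ratio(noisy):.3f}")      # about 1 or above
```

On a run of this sketch, the patterned input shrinks to a small fraction of its size, while the random input does not shrink at all (zlib typically adds a few header bytes), which is the sense in which noise-dominated data resists algorithmic compression.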


Author's Profile

James McAllister
Leiden University

Citations of this work

Simplicity, Language-Dependency and the Best System Account of Laws. Billy Wheeler - 2014 - Theoria: An International Journal for Theory, History and Foundations of Science 31 (2):189-206.
Humeanism and Exceptions in the Fundamental Laws of Physics. Billy Wheeler - 2017 - Principia: An International Journal of Epistemology 21 (3):317-337.

