Publication 4788

McAllister, J. W. (2003). Algorithmic randomness in empirical data. Studies in History and Philosophy of Science 34: 633–646.
According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, according to algorithmic information theory, a string is algorithmically random if and only if it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain noise level.
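The abstract's central contrast, between a law-governed pattern and data in which that pattern is overlaid with patternless perturbations, can be illustrated with a practical compressor. The sketch below is not from the paper: it uses zlib as a computable stand-in for Kolmogorov complexity (which is uncomputable), and the sine "law", the Gaussian "perturbations", and all parameter values are illustrative assumptions.

```python
import math
import random
import zlib

def compressed_size(values, decimals=3):
    """Serialize a number sequence as text and return its zlib-compressed size in bytes."""
    text = ",".join(f"{v:.{decimals}f}" for v in values)
    return len(zlib.compress(text.encode("ascii"), level=9))

random.seed(0)
n = 10_000
law = [math.sin(0.01 * i) for i in range(n)]        # the pattern a "law" would pick out
data = [v + random.gauss(0.0, 0.1) for v in law]    # the same pattern plus perturbations

print("law-governed signal:   ", compressed_size(law), "bytes compressed")
print("signal + perturbations:", compressed_size(data), "bytes compressed")
```

On this toy model the perturbed series compresses markedly worse than the clean one, in line with the thesis that perturbations push empirical data toward incompressibility; the "law" then describes the pattern (here, the sine term) rather than providing a lossless compression of the data themselves.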
