Randomness in Recursion Theory and Effective Descriptive Set Theory

Project: Research project

Project Details


In this project, the principal investigator proposes to study the relationship between algorithmic randomness and logical complexity with respect to the transfinite hierarchies of recursion theory and effective descriptive set theory. The strength of a randomness notion can be increased by allowing more complicated tests, where the complexity of a test is measured in terms of the aforementioned hierarchies. A basic result due to Reimann and Slaman establishes that for each natural number n, there are only countably many reals that are not n-random with respect to any continuous probability measure. One goal of the project is to find a topological or measure-theoretic characterization of these countable sets. Furthermore, Reimann will try to extend the results obtained for arithmetical randomness to higher notions of randomness, i.e., to tests having access to projective parameters, proving co-countability and correlating it with large cardinals in set theory. An essential tool for realizing these objectives is the construction of a measure relative to which a given real is random. The project aims to find new ways to do this; in particular, it investigates how methods from measure theory and analysis can be used. Finally, the principal investigator will study the relation between randomness for probability measures and randomness for non-finite measures, most prominently Hausdorff measures. Previous results indicate an intriguing difference between the two concepts with respect to computability-theoretic hierarchies.
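For orientation, the notion of a test mentioned above can be sketched as follows. This is the standard Martin-Löf formulation relative to a measure, included here only as background; the project itself concerns the stronger variants obtained by raising the complexity of the tests.

```latex
% Martin-L\"of randomness relative to a probability measure $\mu$ on
% Cantor space $2^\omega$ (standard definition, stated for orientation).
A \emph{Martin-L\"of test} relative to $\mu$ is a uniformly
$\Sigma^0_1$ sequence $(U_n)_{n \in \omega}$ of open subsets of
$2^\omega$ such that
\[
  \mu(U_n) \le 2^{-n} \qquad \text{for all } n.
\]
A real $x \in 2^\omega$ \emph{passes} the test if
$x \notin \bigcap_{n} U_n$, and $x$ is \emph{$\mu$-random} if it
passes every such test. Allowing the $U_n$ to be $\Sigma^0_k$ for
larger $k$, or $\Sigma^1_n$ with projective parameters, yields the
progressively stronger randomness notions the project studies.
```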

Randomness is a fundamental mathematical phenomenon whose understanding and investigation are among the prime achievements of mathematics, and of science in general, in the 20th century. Classical measure and probability theory do not allow for considering individual random objects, such as the outcome of a sequence (finite or infinite) of coin tosses. The theory of algorithmic randomness, as developed by Kolmogorov, Martin-Löf, Levin, and others, provides a uniform framework for defining such individual random content in a spectrum that spans from finite words to transfinite cardinals. It is inherent in our understanding of randomness that random objects should exhibit rather high complexity, usually phrased as unpredictability or the presence of chaos. Twentieth-century logic, on the other hand, has put forth numerous hierarchies that capture the complexity of an object such as an infinite binary sequence in terms of its descriptive complexity, i.e., how hard it is to define or compute this object in a given mathematical theory such as first- or second-order arithmetic.
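The hierarchies of descriptive complexity referred to above can be illustrated by the arithmetical hierarchy; the following is the standard textbook formulation, included only as a reminder of the notation.

```latex
% The arithmetical hierarchy classifies sets by quantifier complexity
% over a computable matrix (standard definition, for orientation).
A set $A \subseteq \omega$ is $\Sigma^0_n$ if there is a computable
relation $R$ such that
\[
  x \in A \iff \exists y_1\, \forall y_2\, \cdots\, Q y_n\;
  R(x, y_1, \dots, y_n),
\]
where the quantifiers alternate and $Q$ is $\exists$ or $\forall$
according to the parity of $n$. A set is $\Pi^0_n$ if its complement
is $\Sigma^0_n$, and $\Delta^0_n = \Sigma^0_n \cap \Pi^0_n$. The
projective hierarchy $\Sigma^1_n$, $\Pi^1_n$ is defined analogously,
with the quantifiers ranging over reals rather than natural numbers.
```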

Reimann's main goal is to study how logical complexity and randomness are intertwined. Previous research has shown that certain levels of logical complexity imply the presence of random content. The understanding of this relation, however, is far from complete. Among the principal questions Reimann tries to answer are: How is the presence of randomness related to the definability strength of large cardinals? Are there other ways to capture the presence of random content within the hierarchies of logical complexity? Can, in turn, logical complexity help to distinguish between randomness for different kinds of measures?

Effective start/end date: 7/1/08 – 6/30/10


  • National Science Foundation: $59,095.00

