Algorithmic information theory (nonfiction)
Algorithmic information theory (AIT) is a "merger of information theory and computer science" that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Claude Shannon's information theory and Alan Turing's computability theory into a cocktail shaker and shaking vigorously."
Algorithmic information theory principally studies measures of the irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. Indeed, one of the main motivations behind AIT is the study of the information carried by mathematical objects, as in the field of metamathematics, e.g., incompleteness results such as Chaitin's. Other motivations include: surpassing the limitations of classical information theory for single, fixed objects; formalizing the concept of randomness; and enabling accurate probabilistic inference without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary).
In this way, AIT is founded upon three main mathematical concepts and the relations between them:
- algorithmic complexity
- algorithmic randomness
- algorithmic probability
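As a rough illustration of the first two concepts (not part of the original article), a general-purpose compressor can serve as a computable stand-in for algorithmic complexity, since the true measure is uncomputable; the choice of zlib and the particular strings below are assumptions for demonstration only:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size over original size. True Kolmogorov complexity
    # is uncomputable, so a general-purpose compressor gives only a
    # computable upper-bound proxy for it.
    return len(zlib.compress(data, 9)) / len(data)

# A highly regular string compresses well (low "complexity")...
structured = b"ab" * 500
# ...while typical random bytes are incompressible: no encoding
# much shorter than the data itself.
random_like = os.urandom(1000)

print(compression_ratio(structured))   # well below 1
print(compression_ratio(random_like))  # close to (or slightly above) 1
```

This mirrors the identification of randomness with incompressibility: a string is "random" to the extent that no description substantially shorter than the string itself produces it.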
Besides formalizing a universal measure for the irreducible information content of computably generated objects, some main achievements of AIT were to show that: algorithmic complexity obeys (in the self-delimiting case) the same inequalities, up to an additive constant, that entropy does in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability that a universal machine produces a given data structure decays exponentially with the length of the shortest program that generates it (roughly 2^{-|p|} for a shortest program p).
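The last point, algorithmic probability, can be sketched with a deliberately tiny toy machine (an invention for this example, not a real universal machine): weight every program of length n by 2^-n, the chance that fair coin flips produce it, and sum the weights of all programs that output a given string. Because the toy encoding below is not prefix-free, the numbers are only illustrative; the real definition uses a prefix-free universal machine.

```python
from collections import defaultdict
from itertools import product

def run(program: str):
    """Toy machine (an assumption for illustration): leading 1s ended
    by a 0 encode a repeat count; the remaining bits are a pattern;
    the output is the pattern repeated count times."""
    sep = program.find("0")          # count = number of leading 1s
    if sep < 1 or sep == len(program) - 1:
        return None                  # malformed: no count or no pattern
    return program[sep + 1:] * sep

# Approximate m(x): sum 2^-|p| over all programs p (up to 8 bits
# here) whose output is x.
m = defaultdict(float)
for n in range(1, 9):
    for bits in product("01", repeat=n):
        out = run("".join(bits))
        if out is not None:
            m[out] += 2.0 ** -n

# The regular string has a short program ("111001" -> "01" * 3), so
# it is far more probable than an irregular string of equal length.
print(m["010101"])  # 2^-6 + 2^-8
print(m["011010"])  # 2^-8 (only the literal 8-bit program)
```

Short programs dominate the sum, which is the sense in which a structure's probability of arising from random software tracks the length of its shortest generating program.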
See also
- Algorithmic probability (nonfiction)
- Algorithmically random sequence (nonfiction)
- Chaitin's constant (nonfiction)
- Chaitin–Kolmogorov randomness (nonfiction)
- Computationally indistinguishable (nonfiction)
- Distribution ensemble (nonfiction)
- Epistemology (nonfiction)
- Inductive inference (nonfiction)
- Inductive probability (nonfiction)
- Invariance theorem (nonfiction)
- Kolmogorov complexity (nonfiction) - given an object, such as a piece of text, Kolmogorov complexity is the length of the shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object.
- Limits of knowledge (nonfiction)
- Minimum description length (nonfiction)
- Minimum message length (nonfiction)
- Pseudorandom ensemble (nonfiction)
- Pseudorandom generator (nonfiction)
- Claude Shannon (nonfiction)
- Simplicity theory (nonfiction)
- Solomonoff's theory of inductive inference (nonfiction)
- Alan Turing (nonfiction)
- Uniform ensemble (nonfiction)
- Algorithmic information theory @ Wikipedia