Markov's principle (also known as the Leningrad principle[1]), named after Andrey Markov Jr, is a conditional existence statement for which there are many equivalent formulations, as discussed below. The principle is logically valid classically, but not in intuitionistic constructive mathematics. However, many particular instances of it are nevertheless provable in a constructive context as well.
The principle was first studied and adopted by the Russian school of constructivism, together with choice principles and often with a realizability perspective on the notion of mathematical function.
In the language of computability theory, Markov's principle is a formal expression of the claim that if it is impossible that an algorithm does not terminate, then for some input it does terminate. This is equivalent to the claim that if a set and its complement are both computably enumerable, then the set is decidable. These statements are provable in classical logic.
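The second claim has a direct computational reading: semi-deciders for a set and for its complement can be run in parallel (dovetailed), and one of them must eventually answer. A minimal sketch in Python, where the step-bounded checkers are hypothetical stand-ins for the two semi-decision procedures:

```python
# Sketch: if a set and its complement are both computably enumerable,
# interleaving the two semi-deciders yields a total decision procedure.
# `in_set_check(x, k)` is a hypothetical step-bounded verifier: True iff
# the semi-decider for the set accepts x within k steps (similarly for
# `in_complement_check`).

def decide(x, in_set_check, in_complement_check):
    """Return True iff x is in the set, by dovetailing both searches."""
    k = 0
    while True:
        if in_set_check(x, k):
            return True
        if in_complement_check(x, k):
            return False
        k += 1  # one of the two searches must eventually succeed

# Toy example: the (trivially decidable) set of even numbers, phrased as
# two semi-deciders that only "find" their answer after x steps.
evens = lambda x, k: k >= x and x % 2 == 0
odds = lambda x, k: k >= x and x % 2 == 1

print(decide(4, evens, odds))   # → True
print(decide(7, evens, odds))   # → False
```

The totality of `decide` is exactly what fails constructively without Markov's principle: one cannot in general prove that the interleaved search halts.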
In predicate logic, a predicate P over some domain is called decidable if for every x in the domain, either P(x) holds, or the negation of P(x) holds. This is not trivially true constructively.
Markov's principle then states: For a decidable predicate P over the natural numbers, if P cannot be false for all natural numbers n, then it is true for some n. Written using quantifiers:
(\forall n\, (P(n) \lor \neg P(n)) \wedge \neg \forall n\, \neg P(n)) \to \exists n\, P(n)
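For readers who use proof assistants, the quantified form can be written down directly; a sketch in Lean 4, where decidability of the predicate is an instance hypothesis and the name `Markov` is ours:

```lean
-- Markov's principle for predicates on the natural numbers, stated as a
-- proposition. Lean's constructive core neither proves nor refutes it;
-- classically it follows at once from double-negation elimination.
def Markov : Prop :=
  ∀ (P : Nat → Prop) [DecidablePred P], ¬ (∀ n, ¬ P n) → ∃ n, P n
```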
Markov's rule is the formulation of Markov's principle as a rule: \exists n\, P(n) is derivable as soon as \neg\neg \exists n\, P(n) is, for P decidable. Formally,

\forall n\, (P(n) \lor \neg P(n)),\ \neg\neg \exists n\, P(n)\ \vdash\ \exists n\, P(n)
Anne Troelstra[2] proved that it is an admissible rule in Heyting arithmetic. Later, the logician Harvey Friedman showed that Markov's rule is an admissible rule in first-order intuitionistic logic, Heyting arithmetic, and various other intuitionistic theories,[3] using the Friedman translation.
Markov's principle is equivalent in the language of arithmetic to:
\neg\neg \exists n\, f(n) = 0 \to \exists n\, f(n) = 0

for f a total recursive function.
In the presence of Church's thesis principle, the principle is equivalent to its form for primitive recursive functions. Using Kleene's T predicate, the latter may be expressed as
\forall e\, \forall x\, (\neg\neg \exists w\, T_1(e, x, w) \to \exists w\, T_1(e, x, w))
If constructive arithmetic is translated using realizability into a classical meta-theory that proves the \omega-consistency of the relevant classical theory (for example, Peano arithmetic if we are studying Heyting arithmetic), then Markov's principle is justified: a realizer is the constant function that takes a realization that P is not everywhere false to the unbounded search that successively checks whether P(0), P(1), P(2), \ldots holds. If P is not everywhere false, then by \omega-consistency there must be a term for which P holds, and each term will be checked by the search eventually. If, however, P does not hold anywhere, then the domain of the constant function must be empty, so although the search does not halt, it still holds vacuously that the function is a realizer. By the law of the excluded middle (in our classical meta-theory), P either holds somewhere or nowhere, hence this constant function is a realizer.
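Under the realizability reading, the computational content of Markov's principle is simply unbounded search: check P(0), P(1), P(2), \ldots until a witness appears. A Python sketch, with the predicate a hypothetical stand-in for a decidable P:

```python
# The unbounded-search realizer for Markov's principle. Termination
# relies on the meta-level assumption that some P(n) holds; the search
# itself carries no bound, which is why it is not constructively valid
# in general.

def markov_search(P):
    """Return the least n with P(n), assuming one exists."""
    n = 0
    while not P(n):
        n += 1
    return n

# Example with a predicate whose first witness is not given upfront:
print(markov_search(lambda n: n * n > 50))  # → 8
```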
If instead the realizability interpretation is used in a constructive meta-theory, then it is not justified. Indeed, for first-order arithmetic, Markov's principle exactly captures the difference between a constructive and classical meta-theory. Specifically, a statement is provable in Heyting arithmetic with extended Church's thesis if and only if there is a number that provably realizes it in Heyting arithmetic; and it is provable in Heyting arithmetic with extended Church's thesis and Markov's principle if and only if there is a number that provably realizes it in Peano arithmetic.
Markov's principle is equivalent, in the language of real analysis, to the following principles:
- For each real number x, if it is contradictory that x is equal to 0, then there exists a rational y such that 0 < y < |x|, often expressed by saying that x is apart from, or constructively unequal to, 0.
- For each real number x, if it is contradictory that x is equal to 0, then there exists a real y such that xy = 1.
Modified realizability does not justify Markov's principle, even if classical logic is used in the meta-theory: there is no realizer in the language of simply typed lambda calculus as this language is not Turing-complete and arbitrary loops cannot be defined in it.
The weak Markov's principle is a weaker form of the principle. It may be stated in the language of analysis, as a conditional statement for the positivity of a real number:
\forall (x \in \mathbb{R})\, \big( \forall (y \in \mathbb{R})\, (\neg\neg (0 < y) \lor \neg\neg (y < x)) \big) \to (0 < x).
This form can be justified by Brouwer's continuity principles, whereas the full Markov's principle contradicts them. The weak form is thus derivable from intuitionistic, realizability, and classical reasoning, in each case for different reasons, but it is not valid in the general constructive sense of Bishop, nor is it provable in the constructive set theory IZF.
To understand what the principle is about, it helps to inspect a stronger statement. The following expresses that any real number x, such that no non-positive y is not below it, is positive:

\nexists (y \le 0)\, x \le y \to (0 < x),

where x \le y denotes the negation of y < x. This is a stronger implication because the antecedent is looser. Note that a logically positive statement about the order is here concluded from purely negative ones. It is implied by the weak Markov's principle when elevating the De Morgan's law for \neg A \lor \neg B to an equivalence.
Assuming classical double-negation elimination, the weak Markov's principle becomes trivial, expressing that a number larger than all non-positive numbers is positive.
A function f \colon X \to Y between metric spaces is called strongly extensional if d(f(x), f(y)) > 0 implies d(x, y) > 0, which classically is just the contrapositive of the function preserving equality. The weak Markov's principle is equivalent to the statement that every such mapping between metric spaces is strongly extensional.