Heyting arithmetic explained

In mathematical logic, Heyting arithmetic (\mathsf{HA}) is an axiomatization of arithmetic in accordance with the philosophy of intuitionism.[1] It is named after Arend Heyting, who first proposed it.

Axiomatization

Heyting arithmetic is axiomatized just like the first-order theory of Peano arithmetic \mathsf{PA}, except that it uses the intuitionistic predicate calculus \mathsf{IQC} for inference. In particular, this means that the double-negation elimination principle, as well as the principle of the excluded middle \mathsf{PEM}, do not hold. Note that to say \mathsf{PEM} does not hold exactly means that the excluded middle statement is not automatically provable for all propositions; indeed many such statements are still provable in \mathsf{HA}, and the negation of any such disjunction is inconsistent. \mathsf{PA} is strictly stronger than \mathsf{HA}: all \mathsf{HA}-theorems are also \mathsf{PA}-theorems, while the converse fails.

Heyting arithmetic comprises the axioms of Peano arithmetic and the intended model is the collection of natural numbers \mathbb{N}. The signature includes zero "0" and the successor "S", and the theories characterize addition and multiplication. This impacts the logic: with 1:=S0, it is a metatheorem that \bot can be defined as 0=1, so that \neg P is P\to\bot for every proposition P. The negation of \bot is of the form P\to P and thus a trivial proposition.

For terms, write s\neq t for \neg(s=t). For a fixed term t, the equality t=t is true by reflexivity, and a proposition P is equivalent to (t=t)\to P. It may be shown that P\lor Q can then be defined as

\exists n.\big((n=0\to P)\land(n\neq 0\to Q)\big)

This formal elimination of disjunctions was not possible in the quantifier-free primitive recursive arithmetic \mathsf{PRA}. The theory may be extended with function symbols for any primitive recursive function, making \mathsf{PRA} also a fragment of this theory. For a total function f, one often considers predicates of the form f(n)=0.
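
As a sketch of why the disjunction defined above behaves as expected, note that both introduction rules are recovered (this is a routine verification, not a full formal derivation):

From P, choose the witness n:=0: then 0=0\to P holds since P holds, and 0\neq 0\to Q holds by explosion.

From Q, choose the witness n:=1: then 1=0\to P holds by explosion, using the axiom \forall n.\,Sn\neq 0, and 1\neq 0\to Q holds since Q holds.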

Theorems

Double negations

With explosion valid in any intuitionistic theory, if \neg\neg P is a theorem for some P, then by definition \neg P is provable if and only if the theory is inconsistent. Indeed, in Heyting arithmetic the double-negation explicitly expresses \neg P\to 0=1. For a predicate Q, a theorem of the form \neg\neg\exists n.Q(n) expresses that it is inconsistent to rule out that Q(t) could be validated for some t. Constructively, this is weaker than the existence claim of such a t. A big part of the metatheoretical discussion will concern classically provable existence claims.

A double-negation \neg\neg P entails (\alpha\to\neg P)\to(\alpha\to 0=1). So a theorem of the form \neg\neg P also always gives new means to conclusively reject (also positive) statements \alpha.
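
For completeness, the entailment just used is a two-step derivation in minimal logic (a routine check): assume \neg\neg P as well as \alpha\to\neg P and \alpha; then \neg P follows by modus ponens, and combining \neg P with \neg\neg P yields \bot, i.e. 0=1. Discharging the assumptions gives

\neg\neg P \,\vdash\, (\alpha\to\neg P)\to(\alpha\to 0=1)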

Proofs of classically equivalent statements

Recall that the implication in \mathsf{HA}\vdash\alpha\to\neg\neg\alpha can classically be reversed, and with that so can the one in \mathsf{HA}\vdash(\exists n.\neg\psi(n))\to\neg\forall n.\psi(n). Here the distinction is between existence of numerical counter-examples versus absurd conclusions when assuming validity for all numbers. Inserting double-negations turns \mathsf{PA}-theorems into \mathsf{HA}-theorems. More exactly, for any formula provable in \mathsf{PA}, the classically equivalent Gödel–Gentzen negative translation of that formula is already provable in \mathsf{HA}. In one formulation, the translation procedure includes a rewriting of (\exists n.Q(n))^N to \neg\forall n.\neg Q^N(n). The result means that all Peano arithmetic theorems have a proof that consists of a constructive proof followed by a classical logical rewriting. Roughly, the final step amounts to applications of double-negation elimination.
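
For orientation, one standard presentation of the negative translation (\,\cdot\,)^N proceeds by recursion on the formula; the clauses below are the commonly given ones, and minor variants exist in the literature:

(A)^N := \neg\neg A \text{ for atomic } A, \qquad (\phi\land\psi)^N := \phi^N\land\psi^N, \qquad (\phi\to\psi)^N := \phi^N\to\psi^N,

(\phi\lor\psi)^N := \neg(\neg\phi^N\land\neg\psi^N), \qquad (\forall n.\phi)^N := \forall n.\phi^N, \qquad (\exists n.\phi)^N := \neg\forall n.\neg\phi^N

In \mathsf{HA} the atomic clause may be simplified, since equations are decidable and hence stable under double negation.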

In particular, since undecidable atomic propositions are absent, for any proposition \psi not including existential quantifications or disjunctions at all, one has

\mathsf{PA}\vdash\psi \iff \mathsf{HA}\vdash\psi

Valid principles and rules

Minimal logic proves double-negation elimination for negated formulas, \neg\neg(\neg\alpha)\leftrightarrow\neg\alpha. More generally, Heyting arithmetic proves this classical equivalence for any Harrop formula.

And \Sigma^0_1-results are well behaved as well: Markov's rule at the lowest level of the arithmetical hierarchy is an admissible rule of inference, i.e. for quantifier-free \varphi with n free,

\mathsf{HA}\vdash\neg\neg\exists m.\varphi(n,m) \iff \mathsf{HA}\vdash\exists m.\varphi(n,m)

Instead of speaking of quantifier-free predicates, one may equivalently formulate this for a primitive recursive predicate or for Kleene's T predicate; the corresponding rules are denoted \mathsf{MR}_{\mathrm{PR}} resp. \mathsf{MR}_0. Even the related rule \mathsf{MR}_{\mathrm{Dec}} is admissible, in which the tractability aspect of \varphi is not based on a syntactic condition, but where the left hand side also requires \mathsf{HA}\vdash\forall m.\,\varphi(n,m)\lor\neg\varphi(n,m).

Beware that, in classifying a proposition based on its syntactic form, one ought not mistakenly assign it a lower complexity based on a merely classically valid equivalence.

Excluded middle

As with other theories over intuitionistic logic, various instances of \mathsf{PEM} can be proven in this constructive arithmetic. By disjunction introduction, whenever either the proposition P or \neg P is proven, then P\lor\neg P is proven as well. So for example, equipped with 0=0 and \forall n.\,Sn\neq 0 from the axioms, one may validate the premises of the induction proving excluded middle for the predicate n=0. One then says equality to zero is decidable. Indeed, \mathsf{HA} proves equality "=" decidable for all numbers, i.e.

\forall n.\forall m.\,(n=m\lor n\neq m)

Stronger yet, as equality is the only predicate symbol in Heyting arithmetic, it then follows that, for any quantifier-free formula \phi with free variables n,\dots,m, the theory proves the corresponding excluded middle statement

\mathsf{HA}\vdash\forall n.\cdots\forall m.\,\phi(n,\dots,m)\lor\neg\phi(n,\dots,m)

Any theory over minimal logic proves \neg\neg(P\lor\neg P) for all propositions P. So if the theory is consistent, it never proves the negation of an excluded middle statement.
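
For the decidability of equality claimed above, the standard argument is a double induction; the following is only a proof sketch, with the usual Peano axioms for the successor assumed:

Base case: \forall m.\,(0=m\lor 0\neq m), by induction on m, using reflexivity for m=0 and the axiom \forall k.\,Sk\neq 0 for m=Sk.

Induction step: assuming \forall m.\,(n=m\lor n\neq m), one shows \forall m.\,(Sn=m\lor Sn\neq m): for m=0 use Sn\neq 0 again, and for m=Sk use the hypothesis at k together with injectivity of the successor, since n=k gives Sn=Sk and n\neq k gives Sn\neq Sk.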

Practically speaking, in rather conservative constructive frameworks such as \mathsf{HA}, when it is understood what type of statements are algorithmically decidable, then an unprovability result for an excluded middle disjunction P\lor\neg P expresses the algorithmic undecidability of P.

Conservativity

For simple statements, the theory does not just validate such classically valid binary dichotomies \phi(n,\dots)\lor\neg\phi(n,\dots). The Friedman translation can be used to establish that \mathsf{PA}'s \Pi^0_2-theorems are all proven by \mathsf{HA}: for any n and quantifier-free \varphi,

\mathsf{PA}\vdash\exists m.\varphi(n,m) \iff \mathsf{HA}\vdash\exists m.\varphi(n,m)

This result may of course also be expressed with explicit universal closures \forall n. Roughly, simple statements about computable relations that are provable classically are already provable constructively. However, in halting problems not only quantifier-free propositions but also \Pi^0_1-propositions play an important role, and as will be argued these can even be classically independent. Similarly, already unique existence \exists!n.Q(n) in an infinite domain, i.e. \exists n.\forall w.(n=w\leftrightarrow Q(w)), is formally not particularly simple.

So \mathsf{PA} is \Pi^0_2-conservative over \mathsf{HA}. For contrast, while the classical theory of Robinson arithmetic \mathsf{Q} proves all \Sigma^0_1-\mathsf{PA}-theorems, some simple \Pi^0_1-\mathsf{PA}-theorems are independent of it. Induction also plays a crucial role in Friedman's result: for example, the more workable theory obtained by strengthening \mathsf{Q} with axioms about ordering, and optionally decidable equality, does prove more \Pi^0_2-statements than its intuitionistic counterpart.

The discussion here is by no means exhaustive. There are various results for when a classical theorem is already entailed by the constructive theory. Also note that it can be relevant what logic was used to obtain metalogical results. For example, many results on realizability were indeed obtained in a constructive metalogic. But when no specific context is given, stated results need to be assumed to be classical.

Unprovable statements

Independence results concern propositions such that neither they nor their negations can be proven in a theory. If the classical theory is consistent (i.e. does not prove \bot) and the constructive counterpart does not prove one of its classical theorems P, then that P is independent of the latter. Given some independent propositions, it is easy to define more from them, especially in a constructive framework.

Heyting arithmetic has the disjunction property \mathsf{DP}: for all propositions \alpha and \beta,[2]

\mathsf{HA}\vdash\alpha\lor\beta \iff \mathsf{HA}\vdash\alpha \text{ or } \mathsf{HA}\vdash\beta

Indeed, this and its numerical generalization are also exhibited by constructive second-order arithmetic and common set theories such as \mathsf{CZF} and \mathsf{IZF}. It is a common desideratum for the informal notion of a constructive theory. Now in a theory with \mathsf{DP}, if a proposition P is independent, then the classically trivial P\lor\neg P is another independent proposition, and vice versa. A schema is not valid if there is at least one instance that cannot be proven, which is how \mathsf{PEM} comes to fail. One may break \mathsf{DP} by adopting an excluded middle statement axiomatically without validating either of the disjuncts, as is the case in \mathsf{PA}.

More can be said: if P is even classically independent, then also the negation \neg P is independent; this holds whether or not \neg\neg P is equivalent to P. Then, constructively, the weak excluded middle \mathsf{WPEM} does not hold, i.e. the principle that \neg\alpha\lor\neg\neg\alpha would hold for all propositions is not valid. If such a P is \Sigma^0_1, the unprovability of the disjunction manifests the breakdown of \Pi^0_1-\mathsf{PEM}, or what amounts to an instance of \mathsf{WLPO} for a primitive recursive function.

Classically independent propositions

Knowledge of Gödel's incompleteness theorems aids in understanding the type of statements that are \mathsf{PA}-provable but not \mathsf{HA}-provable.

The resolution of Hilbert's tenth problem provided some concrete polynomials f and corresponding polynomial equations, such that the claim that the latter have a solution is algorithmically undecidable. The proposition can be expressed as

\exists w_1.\cdots\exists w_n.\,f(w_1,\dots,w_n)=0

Certain such zero value existence claims have a more particular interpretation: theories such as \mathsf{PA} or \mathsf{ZFC} prove that these propositions are equivalent to the arithmetized claim of the theory's own inconsistency. Thus, such propositions can even be written down for strong classical set theories.

In a consistent and sound arithmetic theory, such an existence claim \mathrm{I}_f is an independent \Sigma^0_1-proposition. Then \mathrm{C}_f:=\neg\mathrm{I}_f, by pushing the negation through the quantifiers, is seen to be an independent Goldbach-type or \Pi^0_1-proposition. To be explicit, the double-negation \neg\neg\mathrm{I}_f (that is, \neg\mathrm{C}_f) is also independent. And any triple-negation is, in any case, already intuitionistically equivalent to a single negation.
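
The "pushing through" step referenced here is the intuitionistically valid equivalence \neg\exists x.\phi(x)\leftrightarrow\forall x.\neg\phi(x). Applied to the zero value existence claim, it is a routine rewriting:

\mathrm{C}_f \;=\; \neg\exists w_1.\cdots\exists w_n.\,f(w_1,\dots,w_n)=0 \;\leftrightarrow\; \forall w_1.\cdots\forall w_n.\,f(w_1,\dots,w_n)\neq 0

which exhibits \mathrm{C}_f as a \Pi^0_1-proposition, the quantified matter being a decidable equation.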

PA violates DP

The following illuminates the meaning involved in such independent statements. Given an index in an enumeration of all proofs of a theory, one can inspect what proposition it is a proof of. \mathsf{PA} is adequate in that it can correctly represent this procedure: there is a primitive recursive predicate F(w):=\mathrm{Prf}(w,\ulcorner 0=1\urcorner) expressing that a proof is one of the absurd proposition 0=1. This relates to the more explicitly arithmetical predicate above, about a polynomial's return value being zero. One may metalogically reason that if \mathsf{PA} is consistent, then it indeed proves \neg F(\underline{w}) for every individual index w.

In an effectively axiomatized theory, one may successively perform an inspection of each proof. If a theory is indeed consistent, then there does not exist a proof of an absurdity, which corresponds to the statement that the mentioned "absurdity search" will never halt. Formally in the theory, the former is expressed by the proposition \neg\exists w.F(w), negating the arithmetized inconsistency claim. The equivalent \Pi^0_1-proposition \forall w.\neg F(w) formalizes the never halting of the search by stating that all proofs are not a proof of an absurdity. And indeed, in an omega-consistent theory that accurately represents provability, there is no proof that the absurdity search would ever conclude by halting (explicit inconsistency not derivable), nor, as shown by Gödel, can there be a proof that the absurdity search would never halt (consistency not derivable). Reformulated: there is no proof that the absurdity search never halts (consistency not derivable), nor is there a proof that the absurdity search does not never halt (consistency not rejectable). To reiterate, neither of these two disjuncts is \mathsf{PA}-provable, while their disjunction is trivially \mathsf{PA}-provable. Indeed, if \mathsf{PA} is consistent then it violates \mathsf{DP}.
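
The "absurdity search" can be pictured as a simple loop over proof codes. The sketch below is only illustrative: proof_of_absurdity stands in for the primitive recursive predicate F and is hypothetical, since an actual proof checker for the theory would have to be supplied.

    def absurdity_search(proof_of_absurdity):
        """Enumerate proof codes w = 0, 1, 2, ... and stop at the first one
        that codes a proof of 0 = 1.  For a consistent theory the predicate
        is never satisfied and the loop runs forever."""
        w = 0
        while True:
            if proof_of_absurdity(w):   # stands in for the decidable predicate F(w)
                return w                # theory is inconsistent: a witness was found
            w += 1

Consistency of the theory corresponds to this search never returning, which is exactly what the \Pi^0_1-proposition \forall w.\neg F(w) expresses.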

The \Sigma^0_1-proposition expressing the existence of a proof of 0=1 is a logically positive statement. Nonetheless, it is historically denoted \neg\mathrm{Con}_{\mathsf{PA}}, while its negation is a \Pi^0_1-proposition denoted by \mathrm{Con}_{\mathsf{PA}}. In a constructive context, this use of the negation sign may be misleading nomenclature.

Friedman established another interesting unprovable statement, namely that a consistent and adequate theory never proves its arithmetized disjunction property.

Unprovable classical principles

Already minimal logic proves all non-contradiction claims, and in particular \neg(\mathrm{I}_f\land\neg\mathrm{I}_f) and \neg(\mathrm{C}_f\land\neg\mathrm{C}_f). Since also (\mathrm{C}_f\land\neg\mathrm{C}_f)\leftrightarrow\neg(\mathrm{I}_f\lor\neg\mathrm{I}_f), the theorem \neg(\mathrm{C}_f\land\neg\mathrm{C}_f) may be read as a provable double-negated excluded middle disjunction (or existence claim). But in light of the disjunction property, the plain excluded middle \mathrm{C}_f\lor\neg\mathrm{C}_f cannot be \mathsf{HA}-provable. So one of De Morgan's laws cannot intuitionistically hold in general either.

The breakdown of the principles \mathsf{WPEM} and \mathsf{WLPO} has been explained. Now in \mathsf{PA}, the least number principle \mathsf{LNP} is just one of many statements equivalent to the induction principle. The proof below shows how \mathsf{LNP} implies \mathsf{PEM}, and therefore why this principle also cannot be generally valid in \mathsf{HA}. However, the schema granting double-negated least number existence for every non-trivial predicate, denoted \neg\neg\mathsf{LNP}, is generally valid. In light of Gödel's proof, the breakdown of these three principles can be understood as Heyting arithmetic being consistent with the provability reading of constructive logic.

Markov's principle for primitive recursive predicates, \mathsf{MP}_{\mathrm{PR}}, already does not hold as an implication schema for \mathsf{HA}, let alone the strictly stronger \mathsf{MP}_{\mathrm{Dec}}; although, in the form of the corresponding rules, they are admissible, as mentioned. Similarly, the theory does not prove the independence of premise principle \mathsf{IP} for negated predicates, albeit it is closed under the corresponding rule for all negated propositions, i.e. one may pull out the existential quantifier in a provable \neg P\to\exists n.Q(n). The same holds for the version where the existential statement is replaced by a mere disjunction.

The valid implication \neg\neg(\alpha\to\beta)\to(\alpha\to\neg\neg\beta) can be proven to hold also in its reversed form, using the disjunctive syllogism. However, the double-negation shift \mathsf{DNS} is not intuitionistically provable, i.e. the schema of commutativity of "\neg\neg" with universal quantification over all numbers. This is an interesting breakdown that is explained by the consistency of \neg\mathsf{Dec}_M for some M, as discussed in the section on Church's thesis.
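
Spelled out, the double-negation shift schema in question is the following implication, here stated for an arbitrary arithmetic predicate \phi:

\big(\forall n.\neg\neg\phi(n)\big) \to \neg\neg\forall n.\phi(n)

The converse direction is intuitionistically provable, so the failure concerns only the direction displayed.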

Least number principle

Making use of the order relation on the naturals, the strong induction principle reads

\forall n.\Big(\big(\forall(k<n).\phi(k)\big)\to\phi(n)\Big)\to\forall m.\phi(m)

In class notation, as familiar from set theory, an arithmetic statement Q(n) is expressed as n\in B, where B:=\{m\in\mathbb{N}\mid Q(m)\}. For any given predicate of negated form, i.e. \phi(n):=\neg(n\in B), a logical equivalent to induction is

\neg\exists(n\in B).\neg\exists(k\in B).k<n \;\leftrightarrow\; B=\{\}

The insight is that among subclasses B\subseteq\mathbb{N}, the property of (provably) having no least member is equivalent to being uninhabited, i.e. to being the empty class. Taking the contrapositive results in a theorem expressing that for any non-empty subclass, it cannot consistently be ruled out that there exists a member n\in B such that there is no member k\in B smaller than n:

B\neq\{\} \to \neg\neg\exists(n\in B).\neg\exists(k\in B).k<n

In Peano arithmetic, where double-negation elimination is always valid, this proves the least number principle in its common formulation. In the classical reading, being non-empty is equivalent to (provably) being inhabited by some least member.

A binary relation "<" that validates the strong induction schema in the above form is always also irreflexive: considering \phi_c(n):=(n\neq c), or equivalently Q_c(n):=(n=c), for some fixed number c, the above simplifies to the statement that no member k of B=\{c\} validates k<c, which is to say \neg(c<c). (And this logical deduction did not even use any other property of the binary relation.) More generally, if B is non-empty and the associated (classical) least number principle can be used to prove some statement of negated form (such as \neg(c<c)), then one can extend this to a fully constructive proof. This is because implications \alpha\to\neg\beta are always intuitionistically equivalent to the formally stronger (\neg\neg\alpha)\to\neg\beta.

But in general, over constructive logic, the weakening of the least number principle cannot be lifted. The following example demonstrates this: for some proposition P (say \mathrm{C}_f as above), consider the predicate

Q_P(n) := (n=0\land P)\lor(n=1)

This Q_P corresponds to a subclass b_P:=\{z\in\{0\}\mid P\}\cup\{1\} of the natural numbers. One may ask what the least member of this class may be. Any number proven or assumed to be in this class provably either equals 0 or 1, i.e. b_P\subseteq\{0,1\}. Using decidability of equality and the disjunctive syllogism proves the equivalence P\leftrightarrow Q_P(0). If the underlying proposition P is independent, then the predicate is undecidable in the theory. Now since 1=1, the proposition Q_P(1) is trivially true and so the class is inhabited: 1\in b_P. In particular, least number existence for this class cannot be rejected. Given that a conjunction with Q_P(1) is trivial, the existence claim of a least number validating Q_P itself translates to the excluded middle statement for P. Knowledge of such a number's value in fact determines whether or not P holds. So for independent P, the least number principle instance with Q_P is also independent of \mathsf{HA}.
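
To spell out the last step as a short worked argument (a routine verification): suppose n is a least member of b_P, i.e. Q_P(n) together with \forall(k<n).\neg Q_P(k). By b_P\subseteq\{0,1\}, either n=0 or n=1. In the first case Q_P(0) holds, and hence P. In the second case \neg Q_P(0) holds, since 0<1, and hence \neg P by the equivalence P\leftrightarrow Q_P(0). Either way one obtains P\lor\neg P, so the least number existence claim for Q_P indeed entails the excluded middle statement for P.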

In set theory notation, P is equivalent to 0\in b_P, and so also to b_P=\{0,1\}, while its negation is equivalent to b_P=\{1\}. This demonstrates that elusive predicates can define elusive subsets. And so also in constructive set theory, while the standard order on the class of naturals is decidable, the naturals are not well-ordered. But strong induction principles, which constructively do not imply unrealizable existence claims, are also still available.

Anti-classical extensions

In a computable context, for a predicate M, the classically trivial infinite disjunction

\forall n.\big(M(n)\lor\neg M(n)\big),

also written \mathsf{Dec}_M, can be read as a validation of the decidability of a decision problem. In class notation, M(n) is also written n\in M.

\mathsf{HA} proves no propositions that are not provable by \mathsf{PA} and so, in particular, it does not reject any theorems of the classical theory. But there are also predicates M such that the theory \mathsf{HA}+\neg\mathsf{Dec}_M is consistent. Again, constructively, such a negation is not equivalent to the existence of a particular numerical counter-example t to excluded middle for M(t). Indeed, already minimal logic proves the double-negated excluded middle for all propositions, and so

\forall n.\neg\neg\big(Q(n)\lor\neg Q(n)\big),

which is equivalent to \neg\exists n.\neg\big(Q(n)\lor\neg Q(n)\big), for any predicate Q.

Church's thesis

Church's rule is an admissible rule in \mathsf{HA}. Church's thesis principle \mathsf{CT}_0 may be adopted in \mathsf{HA}, while \mathsf{PA} rejects it: it implies negations such as the ones just described.
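
In one standard formulation (with T_1 Kleene's T predicate as used below, and U the associated primitive recursive result-extracting function, which is not introduced elsewhere in this article), the schema \mathsf{CT}_0 reads, for arithmetic formulas \varphi:

\big(\forall x.\exists y.\varphi(x,y)\big) \to \exists e.\forall x.\exists w.\big(T_1(e,x,w)\land\varphi(x,U(w))\big)

That is, within the theory, every total relation is witnessed by a computable function with some index e.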

Consider the principle in the form stating that all predicates that are decidable in the logical sense above are also decidable by a total computable function. To see how it is in conflict with excluded middle, one merely needs to define a predicate that is not computably decidable. To this end, write \Delta_e(w):=T_1(e,e,w) for predicates defined from Kleene's T predicate. The indices e of total computable functions fulfill \forall x.\exists w.T_1(e,x,w). While T_1 can be realized in a primitive recursive fashion, the predicate \exists w.\Delta_e(w) in e, i.e. the class H:=\{e\mid\exists w.\Delta_e(w)\} of partial computable function indices with a witness describing how they halt on the diagonal, is computably enumerable but not computable. The classical complement M, defined using \neg\exists w.\Delta_e(w), is not even computably enumerable, see halting problem. This provenly undecidable problem e\in M provides a violating example. For any index e, the equivalent form \forall w.\neg\Delta_e(w) expresses that, when the corresponding function is evaluated (at e), all conceivable descriptions of evaluation histories (w) fail to describe the evaluation at hand. Specifically, under this principle, the computable undecidability of this problem establishes the negation of what amounts to \mathsf{WLPO}.
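
The non-computability of the diagonal halting class H invoked here is the usual diagonal argument; the following Python fragment is only a hypothetical sketch of that argument, with halts_on_self standing in for an assumed total decider of membership in H:

    def make_diagonal(halts_on_self):
        """Given a (hypothetical) total decider for "program e halts on
        input e", build the diagonal program that does the opposite."""
        def diagonal(e):
            if halts_on_self(e):
                while True:      # loop forever exactly when e is predicted to halt on e
                    pass
            return 0             # halt exactly when e is predicted to diverge on e
        return diagonal

Running the diagonal program on its own index would then have to halt precisely if it does not halt, so no such total decider can exist. This is the computability-theoretic fact that \mathsf{CT}_0 imports into the logic.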

The formal Church's principles are associated with the recursive school, naturally. Markov's principle \mathsf{MP}_{\mathrm{Dec}} is commonly adopted by that school and by constructive mathematics more broadly. In the presence of Church's principle, \mathsf{MP}_{\mathrm{Dec}} is equivalent to its weaker form \mathsf{MP}_{\mathrm{PR}}. The latter can generally be expressed as a single axiom, namely double-negation elimination for any \neg\neg\exists w.T_1(e,x,w). Heyting arithmetic together with both \mathsf{MP}_{\mathrm{Dec}} and \mathsf{ECT}_0 proves independence of premise for decidable predicates, \mathsf{IP}_0. But they do not go together, consistently, with \mathsf{IP}. \mathsf{CT}_0 also negates \mathsf{DNS}. The intuitionist school of L. E. J. Brouwer extends Heyting arithmetic by a collection of principles that negate both \mathsf{PEM} as well as \mathsf{CT}_0.

Models

Consistency

If a theory is consistent, then no proof is one of an absurdity. Kurt Gödel introduced the negative translation and proved that if Heyting arithmetic is consistent, then so is Peano arithmetic. That is to say, he reduced the consistency task for \mathsf{PA} to that of \mathsf{HA}. However, Gödel's incompleteness theorems, about the incapability of certain theories to prove their own consistency, also apply to Heyting arithmetic itself.

The standard model of the classical first-order theory \mathsf{PA}, as well as any of its non-standard models, is also a model for Heyting arithmetic \mathsf{HA}.

Set theory

There are also constructive set theory models for full \mathsf{HA} and its intended semantics. Relatively weak set theories suffice: they shall adopt the Axiom of infinity, the Axiom schema of predicative separation to prove induction of arithmetical formulas in \omega, as well as the existence of function spaces on finite domains for recursive definitions. Specifically, those theories do not require \mathsf{PEM}, the full axiom of separation or set induction (let alone the axiom of regularity), nor general function spaces (let alone the full axiom of power set).

\mathsf{HA} is furthermore bi-interpretable with a weak constructive set theory in which the class of ordinals is \omega, so that the collection of von Neumann naturals does not exist as a set in the theory. Meta-theoretically, the domain of that theory is as big as the class of its ordinals and essentially given through the class \mathsf{Fin} of all sets that are in bijection with a natural n\in\omega. As an axiom this is called \mathrm{V}=\mathsf{Fin}, and the other axioms are those related to set algebra and order: Union and Binary Intersection, which is tightly related to the Predicative Separation schema, Extensionality, Pairing, and the Set induction schema. This theory is then already identical with the theory given by \mathsf{CZF} without Strong Infinity and with the finitude axiom added. The discussion of \mathsf{HA} in this set theory is as in model theory. And in the other direction, the set theoretical axioms are proven with respect to the primitive recursive relation

x\in y \iff \exists(r<2^x).\exists(s<y).\big(2^x\cdot(2s+1)+r=y\big)

That small universe of sets can be understood as the ordered collection of finite binary sequences which encode their mutual membership. For example, the 100_2'th set contains one other set and the 110101_2'th set contains four other sets. See BIT predicate.
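
The membership relation above just says that the x-th binary digit of y is 1, so it is easy to experiment with the encoding. The following sketch (function names are ours, chosen for illustration) decodes the two examples from the text:

    def member(x, y):
        """x ∈ y in the Ackermann/BIT coding: bit x of y is set, which is
        equivalent to the existence of r < 2**x and s with y = 2**x*(2*s+1)+r."""
        return ((y >> x) & 1) == 1

    def elements(y):
        """List the codes of all sets that are members of the set coded by y."""
        return [x for x in range(y.bit_length()) if member(x, y)]

    print(elements(0b100))     # [2]          -> the 100_2'th set has one member
    print(elements(0b110101))  # [0, 2, 4, 5] -> the 110101_2'th set has four members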

Realizability

For some number n in the metatheory, the corresponding numeral in the studied object theory is denoted by \underline{n}.

In intuitionistic arithmetics, the disjunction property \mathsf{DP} is typically valid. And it is a theorem that any c.e. extension of arithmetic for which it holds also has the numerical existence property \mathsf{NEP}:

\mathsf{HA}\vdash\exists n.\psi(n) \iff \text{there is some } m \text{ such that } \mathsf{HA}\vdash\psi(\underline{m})

So these properties are metalogically equivalent in Heyting arithmetic. The existence and disjunction property in fact still hold when relativizing the existence claim by a Harrop formula \alpha, i.e. for provable \alpha\to\exists n.\psi(n).

Kleene, a student of Church, introduced important realizability models of Heyting arithmetic. In turn, his student Nels David Nelson established (in an extension of \mathsf{HA}) that all closed theorems of \mathsf{HA} (meaning all variables are bound) can be realized. Inference in Heyting arithmetic preserves realizability. Moreover, if

\mathsf{HA}\vdash\forall n.\exists m.\varphi(n,m)

then there is a partial recursive function realizing \varphi in the sense that, whenever the function evaluated at \underline{n} terminates with \underline{m}, then \mathsf{HA}\vdash\varphi(\underline{n},\underline{m}). This can be extended to any finite number of function arguments n. There are also classical theorems that are not \mathsf{HA}-provable but do have a realization.

Typed versions of realizability have been introduced by Georg Kreisel. With them he demonstrated the independence of the classically valid Markov's principle from intuitionistic theories.

See also BHK interpretation and Dialectica interpretation.

In the effective topos, already the finitely axiomatizable subsystem of Heyting arithmetic with induction restricted to \Sigma^0_1-formulas is categorical. Categoricity here is reminiscent of Tennenbaum's theorem. The model validates \mathsf{HA} but not \mathsf{PA}, and so completeness fails in this context.

Type theory

Type-theoretic realizations mirroring formalizations of the logic based on inference rules have been implemented in various languages.

Extensions

Heyting arithmetic has been discussed with potential function symbols added for primitive recursive functions. That theory proves the Ackermann function total.

Beyond this, axiom and formalism selection has always been a matter of debate, even within the constructivist circle. Many typed extensions of \mathsf{HA} have been extensively studied in proof theory, e.g. with types of functions between numbers, functions between those, and so on. The formalities naturally become more complicated, with different possible axioms governing the application of functions. The class of provably total functions can be enriched this way. The theory with finite types \mathsf{HA}^\omega, when further combined with function extensionality plus an axiom of choice in \mathbb{N}^{\mathbb{N}}, still proves the same arithmetic formulas as just \mathsf{HA} and has a type theoretic interpretation. However, that theory rejects Church's thesis for \mathbb{N}^{\mathbb{N}}, as well as the statement that all functions in \mathbb{N}^{\mathbb{N}}\to\mathbb{N} would be continuous. But adopting, say, different extensionality rules, choice axioms, Markov's and independence principles, and even Kőnig's lemma, all together but each at specific strength or levels, one can define rather "stuffed" arithmetics that may still fail to prove excluded middle at the level of \Pi^0_1-formulas. Early on, variants with intensional equality and Brouwerian choice sequences have also been investigated.

Reverse mathematics studies of constructive second-order arithmetic have been performed.[3]

History

Formal axiomatization of the theory traces back to Heyting (1930), Herbrand and Kleene. Gödel proved the consistency result concerning \mathsf{PA} in 1933.

Related concepts

Heyting arithmetic should not be confused with Heyting algebras, which are the intuitionistic analogue of Boolean algebras.

See also

References

External links

"Intuitionistic Number Theory" by Joan Moschovakis.

Notes and References

  1. Troelstra 1973:18
  2. , pp. 240-249
  3. Diener, Hannes (2020). Constructive Reverse Mathematics. arXiv:1804.05495 [math.LO].