Hilbert–Bernays provability conditions

In mathematical logic, the Hilbert–Bernays provability conditions, named after David Hilbert and Paul Bernays, are a set of requirements for formalized provability predicates in formal theories of arithmetic (Smith 2007:224).

These conditions are used in many proofs of Kurt Gödel's second incompleteness theorem. They are also closely related to axioms of provability logic.

The conditions

Let $T$ be a formal theory of arithmetic with a formalized provability predicate $\mathrm{Prov}(n)$, which is expressed as a formula of $T$ with one free number variable. For each formula $\varphi$ in the theory, let $\#(\varphi)$ be the Gödel number of $\varphi$. The Hilbert–Bernays provability conditions are:

  1. If $T$ proves a sentence $\varphi$, then $T$ proves $\mathrm{Prov}(\#(\varphi))$.
  2. For every sentence $\varphi$, $T$ proves $\mathrm{Prov}(\#(\varphi)) \to \mathrm{Prov}(\#(\mathrm{Prov}(\#(\varphi))))$.
  3. $T$ proves that $\mathrm{Prov}(\#(\varphi \to \psi))$ and $\mathrm{Prov}(\#(\varphi))$ imply $\mathrm{Prov}(\#(\psi))$.

Note that $\mathrm{Prov}$ is a predicate of numbers, and it is a provability predicate in the sense that the intended interpretation of $\mathrm{Prov}(\#(\varphi))$ is that there exists a number that codes for a proof of $\varphi$. Formally, what is required of $\mathrm{Prov}$ are the above three conditions.
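For illustration, one common way such a predicate arises is as an existential quantification over proof codes. This is only a sketch: the relation $\mathrm{Proof}(x, y)$ below stands for an assumed arithmetization of "$x$ codes a proof of the formula with Gödel number $y$", and is not fixed by the conditions themselves:

$$\mathrm{Prov}(y) \;\equiv\; \exists x\, \mathrm{Proof}(x, y).$$

Any formula satisfying the three conditions may serve as $\mathrm{Prov}$, whether or not it has this particular shape.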

In the more concise notation of provability logic, letting $T \vdash \varphi$ denote "$T$ proves $\varphi$" and $\Box\varphi$ denote $\mathrm{Prov}(\#(\varphi))$, the conditions become:

  1. $(T \vdash \varphi) \to (T \vdash \Box\varphi)$
  2. $T \vdash (\Box\varphi \to \Box\Box\varphi)$
  3. $T \vdash (\Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi))$
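Read in this modal notation, the conditions line up with familiar principles of provability logic; the names below follow standard provability-logic usage and are a summary added for orientation, not part of the Hilbert–Bernays formulation itself:

$$\begin{aligned}
&\text{necessitation rule:} && \text{from } T \vdash \varphi \text{ infer } T \vdash \Box\varphi\\
&\text{axiom 4:} && \Box\varphi \to \Box\Box\varphi\\
&\text{axiom K:} && \Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi)
\end{aligned}$$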

Use in proving Gödel's incompleteness theorems

The Hilbert–Bernays provability conditions, combined with the diagonal lemma, allow both of Gödel's incompleteness theorems to be proved concisely. Indeed, the main effort of Gödel's proofs lay in showing that these conditions (or equivalent ones) and the diagonal lemma hold for Peano arithmetic; once these are established, the proofs follow easily.

Using the diagonal lemma, there is a formula $\rho$ such that $T \vdash \rho \leftrightarrow \neg\mathrm{Prov}(\#(\rho))$.
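As background (stated here rather than proved), the diagonal lemma says that, for a theory of the kind considered here, for every formula $\psi(x)$ with one free number variable there is a sentence $\delta$ such that

$$T \vdash \delta \leftrightarrow \psi(\#(\delta)).$$

The sentence $\rho$ above is obtained by taking $\psi(x)$ to be $\neg\mathrm{Prov}(x)$.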

Proving Gödel's first incompleteness theorem

For the first theorem only the first and third conditions are needed.

The condition that $T$ is ω-consistent is generalized by the condition that for every formula $\varphi$, if $T$ proves $\mathrm{Prov}(\#(\varphi))$, then $T$ proves $\varphi$. Note that this indeed holds for an ω-consistent $T$: $\mathrm{Prov}(\#(\varphi))$ asserts that some number codes a proof of $\varphi$, and if $T$ is ω-consistent then, going through all natural numbers, one can actually find such a particular number $a$, and then one can use $a$ to construct an actual proof of $\varphi$ in $T$.
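For reference, the notion of ω-consistency appealed to here (standard background, not defined elsewhere in this article) can be stated as follows: there is no formula $\varphi(x)$ such that

$$T \vdash \exists x\, \varphi(x) \quad\text{and, for every natural number } n,\quad T \vdash \neg\varphi(\underline{n}),$$

where $\underline{n}$ denotes the numeral for $n$.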

Suppose $T$ could prove $\rho$. We would then have the following theorems in $T$:

  1. $T \vdash \rho$
  2. $T \vdash \neg\mathrm{Prov}(\#(\rho))$ (by construction of $\rho$ and theorem 1)
  3. $T \vdash \mathrm{Prov}(\#(\rho))$ (by condition no. 1 and theorem 1)

Thus $T$ proves both $\mathrm{Prov}(\#(\rho))$ and $\neg\mathrm{Prov}(\#(\rho))$. But if $T$ is consistent, this is impossible, and we are forced to conclude that $T$ does not prove $\rho$.

Now let us suppose $T$ could prove $\neg\rho$. We would then have the following theorems in $T$:

  1. $T \vdash \neg\rho$
  2. $T \vdash \mathrm{Prov}(\#(\rho))$ (by construction of $\rho$ and theorem 1)
  3. $T \vdash \rho$ (by the condition derived from ω-consistency, applied to theorem 2)

Thus $T$ proves both $\rho$ and $\neg\rho$. But if $T$ is consistent, this is impossible, and we are forced to conclude that $T$ does not prove $\neg\rho$.

To conclude, $T$ can prove neither $\rho$ nor $\neg\rho$.

Using Rosser's trick

Using Rosser's trick, one need not assume that $T$ is ω-consistent. However, one would need to show that the first and third provability conditions hold for $\mathrm{Prov}_R$, Rosser's provability predicate, rather than for the naive provability predicate $\mathrm{Prov}$. This follows from the fact that for every formula $\varphi$, $\mathrm{Prov}(\#(\varphi))$ holds if and only if $\mathrm{Prov}_R(\#(\varphi))$ holds.
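For orientation, a common formulation of Rosser's provability predicate (a sketch only, not given in this article; $\mathrm{Proof}(x, y)$ is the assumed arithmetized proof relation as before, and $\mathrm{neg}(y)$ denotes the Gödel number of the negation of the formula with Gödel number $y$) is:

$$\mathrm{Prov}_R(y) \;\equiv\; \exists x\, \bigl(\mathrm{Proof}(x, y) \wedge \forall z \le x\; \neg\mathrm{Proof}(z, \mathrm{neg}(y))\bigr).$$

That is, $y$ has a proof that is preceded by no proof of its negation. Over a consistent $T$ this agrees with $\mathrm{Prov}$ on which sentences are actually provable, which is the equivalence used above.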

An additional condition used is that $T$ proves that $\mathrm{Prov}_R(\#(\varphi))$ implies $\neg\mathrm{Prov}_R(\#(\neg\varphi))$. This condition holds for every $T$ that includes logic and very basic arithmetic (as elaborated in Rosser's trick#The Rosser sentence).

Using Rosser's trick, $\rho$ is defined using Rosser's provability predicate $\mathrm{Prov}_R$ instead of the naive provability predicate $\mathrm{Prov}$. The first part of the proof remains unchanged, except that the provability predicate is replaced with Rosser's provability predicate there, too.

The second part of the proof no longer uses ω-consistency, and is changed to the following:

Suppose $T$ could prove $\neg\rho$. We would then have the following theorems in $T$:

  1. $T \vdash \neg\rho$
  2. $T \vdash \mathrm{Prov}_R(\#(\rho))$ (by construction of $\rho$ and theorem 1)
  3. $T \vdash \neg\mathrm{Prov}_R(\#(\neg\rho))$ (by theorem 2 and the additional condition following the definition of Rosser's provability predicate)
  4. $T \vdash \mathrm{Prov}_R(\#(\neg\rho))$ (by condition no. 1 and theorem 1)

Thus $T$ proves both $\mathrm{Prov}_R(\#(\neg\rho))$ and $\neg\mathrm{Prov}_R(\#(\neg\rho))$. But if $T$ is consistent, this is impossible, and we are forced to conclude that $T$ does not prove $\neg\rho$.

The second theorem

We assume that $T$ proves its own consistency, i.e. that:

$T \vdash \neg\mathrm{Prov}(\#(1=0))$.

For every formula $\varphi$:

$T \vdash \neg\varphi \to (\varphi \to (1=0))$ (by negation elimination)

It is possible to show by using condition no. 1 on the latter theorem, followed by repeated use of condition no. 3, that:

$T \vdash \mathrm{Prov}(\#(\neg\varphi)) \to (\mathrm{Prov}(\#(\varphi)) \to \mathrm{Prov}(\#(1=0)))$
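In more detail, the chain of steps behind this claim can be unpacked roughly as follows (a sketch; each line notes the fact used):

$$\begin{aligned}
&T \vdash \neg\varphi \to (\varphi \to (1=0)) && \text{(negation elimination, as above)}\\
&T \vdash \mathrm{Prov}(\#(\neg\varphi \to (\varphi \to (1=0)))) && \text{(condition no. 1)}\\
&T \vdash \mathrm{Prov}(\#(\neg\varphi)) \to \mathrm{Prov}(\#(\varphi \to (1=0))) && \text{(condition no. 3 and the previous line)}\\
&T \vdash \mathrm{Prov}(\#(\varphi \to (1=0))) \to (\mathrm{Prov}(\#(\varphi)) \to \mathrm{Prov}(\#(1=0))) && \text{(instance of condition no. 3)}\\
&T \vdash \mathrm{Prov}(\#(\neg\varphi)) \to (\mathrm{Prov}(\#(\varphi)) \to \mathrm{Prov}(\#(1=0))) && \text{(chaining the two previous lines)}
\end{aligned}$$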

And using the assumption that $T$ proves its own consistency, it follows that:

$T \vdash \mathrm{Prov}(\#(\neg\varphi)) \to \neg\mathrm{Prov}(\#(\varphi))$

We now use this to show that $T$ is not consistent:

  1. $T \vdash \mathrm{Prov}(\#(\neg\mathrm{Prov}(\#(\rho)))) \to \neg\mathrm{Prov}(\#(\mathrm{Prov}(\#(\rho))))$ (following $T$ proving its own consistency, with $\varphi = \mathrm{Prov}(\#(\rho))$)
  2. $T \vdash \rho \to \neg\mathrm{Prov}(\#(\rho))$ (by construction of $\rho$)
  3. $T \vdash \mathrm{Prov}(\#(\rho \to \neg\mathrm{Prov}(\#(\rho))))$ (by condition no. 1 and theorem 2)
  4. $T \vdash \mathrm{Prov}(\#(\rho)) \to \mathrm{Prov}(\#(\neg\mathrm{Prov}(\#(\rho))))$ (by condition no. 3 and theorem 3)
  5. $T \vdash \mathrm{Prov}(\#(\rho)) \to \neg\mathrm{Prov}(\#(\mathrm{Prov}(\#(\rho))))$ (by theorems 1 and 4)
  6. $T \vdash \mathrm{Prov}(\#(\rho)) \to \mathrm{Prov}(\#(\mathrm{Prov}(\#(\rho))))$ (by condition no. 2)
  7. $T \vdash \neg\mathrm{Prov}(\#(\rho))$ (by theorems 5 and 6)
  8. $T \vdash \neg\mathrm{Prov}(\#(\rho)) \to \rho$ (by construction of $\rho$)
  9. $T \vdash \rho$ (by theorems 7 and 8)
  10. $T \vdash \mathrm{Prov}(\#(\rho))$ (by condition no. 1 and theorem 9)

Thus $T$ proves both $\mathrm{Prov}(\#(\rho))$ and $\neg\mathrm{Prov}(\#(\rho))$, hence $T$ is inconsistent.

References

  Smith, Peter (2007). An Introduction to Gödel's Theorems. Cambridge: Cambridge University Press.