In mathematical logic, a sequent is a very general kind of conditional assertion.
A1, ..., Am ⊢ B1, ..., Bn.
A sequent may have any number m of condition formulas Ai (called "antecedents") and any number n of asserted formulas Bj (called "succedents" or "consequents"). A sequent is understood to mean that if all of the antecedent conditions are true, then at least one of the consequent formulas is true. This style of conditional assertion is almost always associated with the conceptual framework of sequent calculus.
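This truth-functional reading can be made concrete with a small sketch. In the Python snippet below, formulas are modeled as functions from a truth assignment (a dict) to a Boolean, and the helper name sequent_valid is an illustrative choice; the check captures only the classical semantic reading of a sequent, not provability in any particular calculus.

    from itertools import product

    def sequent_valid(antecedents, consequents, atoms):
        # A sequent holds under an assignment if, whenever all antecedents are
        # true, at least one consequent is true; it is valid if it holds under
        # every assignment of the atomic propositions.
        for bits in product([False, True], repeat=len(atoms)):
            v = dict(zip(atoms, bits))
            if all(a(v) for a in antecedents) and not any(c(v) for c in consequents):
                return False   # counter-assignment found
        return True

    # Example: ' A, A -> B  |-  B ' (modus ponens read as a sequent) is valid.
    A = lambda v: v["A"]
    A_implies_B = lambda v: (not v["A"]) or v["B"]
    B = lambda v: v["B"]
    print(sequent_valid([A, A_implies_B], [B], ["A", "B"]))   # True

A real sequent-calculus implementation works with syntactic derivations rather than truth tables; this sketch only illustrates the intended meaning.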
Sequents are best understood in the context of the following three kinds of logical judgments:

1. Unconditional assertion: no antecedent formulas. Example: ' ⊢ B ', meaning that B is true.
2. Simple conditional assertion: any number of antecedent formulas and exactly one consequent formula. Example: ' A1, A2, A3 ⊢ B ', meaning that IF A1 AND A2 AND A3 are true, THEN B is true.
3. Sequent: any number of antecedent formulas and any number of consequent formulas. Example: ' A1, A2, A3 ⊢ B1, B2, B3, B4 ', meaning that IF A1 AND A2 AND A3 are true, THEN B1 OR B2 OR B3 OR B4 is true.

Thus sequents are a generalization of simple conditional assertions, which are in turn a generalization of unconditional assertions.
The word "OR" here is the inclusive OR.[1] The motivation for disjunctive semantics on the right side of a sequent comes from three main benefits.
All three of these benefits were identified in the founding paper by .
Not all authors have adhered to Gentzen's original meaning for the word "sequent". For example, Lemmon (1965) used the word "sequent" strictly for simple conditional assertions with one and only one consequent formula.[2] The same single-consequent definition of a sequent is given by Huth and Ryan (2004).
In a general sequent of the form

Γ ⊢ Σ

both Γ and Σ are sequences of logical formulas. The symbol ' ⊢ ' is referred to as the "turnstile" or "assertion symbol", and is often read, suggestively, as "yields", "proves" or "entails".
Since every formula in the antecedent (the left side) must be true to conclude the truth of at least one formula in the succedent (the right side), adding formulas to either side results in a weaker sequent, while removing them from either side gives a stronger one. This is one of the symmetry advantages which follows from the use of disjunctive semantics on the right hand side of the assertion symbol, whereas conjunctive semantics is adhered to on the left hand side.
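A minimal spot-check of this monotonicity, under the same illustrative truth-table encoding as above (the helper valid and the formula names are ad hoc):

    from itertools import product

    def valid(ants, cons, atoms):
        return all((not all(a(v) for a in ants)) or any(c(v) for c in cons)
                   for v in (dict(zip(atoms, bits))
                             for bits in product([False, True], repeat=len(atoms))))

    A = lambda v: v["A"]
    B = lambda v: v["B"]
    C = lambda v: v["C"]
    A_and_B = lambda v: v["A"] and v["B"]

    print(valid([A], [A], ["A", "B", "C"]))           # True:  A |- A
    print(valid([A, B], [A], ["A", "B", "C"]))        # True:  adding an antecedent keeps it valid (weaker claim)
    print(valid([A], [A, C], ["A", "B", "C"]))        # True:  adding a consequent keeps it valid (weaker claim)
    print(valid([A, B], [A_and_B], ["A", "B", "C"]))  # True:  A, B |- A ∧ B
    print(valid([A], [A_and_B], ["A", "B", "C"]))     # False: removing an antecedent makes the claim too strong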
In the extreme case where the list of antecedent formulas of a sequent is empty, the consequent is unconditional. This differs from the simple unconditional assertion because the number of consequents is arbitrary, not necessarily a single consequent. Thus for example, ' ⊢ B1, B2 ' means that either B1, or B2, or both must be true. An empty antecedent formula list is equivalent to the "always true" proposition, called the "verum", denoted "⊤". (See Tee (symbol).)
In the extreme case where the list of consequent formulas of a sequent is empty, the requirement is still that at least one formula on the right be true, which is impossible when there are none. An empty consequent list is therefore equivalent to the "always false" proposition, called the "falsum", denoted "⊥". Since the consequent is false, at least one of the antecedents must be false. Thus for example, ' A1, A2 ⊢ ' means that at least one of the antecedents A1 and A2 must be false.
One sees here again a symmetry because of the disjunctive semantics on the right hand side. If the left side is empty, then one or more right-side propositions must be true. If the right side is empty, then one or more of the left-side propositions must be false.
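Both extreme cases can be spot-checked with the same kind of brute-force truth-table sketch (formulas modeled as Python predicates; the helper name valid is illustrative):

    from itertools import product

    def valid(ants, cons, atoms):
        return all((not all(a(v) for a in ants)) or any(c(v) for c in cons)
                   for v in (dict(zip(atoms, bits))
                             for bits in product([False, True], repeat=len(atoms))))

    P = lambda v: v["P"]
    not_P = lambda v: not v["P"]

    print(valid([], [P, not_P], ["P"]))    # True: ' |- P, ¬P '  — at least one consequent is always true
    print(valid([P, not_P], [], ["P"]))    # True: ' P, ¬P |- '  — at least one antecedent is always false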
The doubly extreme case ' ⊢ ', where both the antecedent and consequent lists of formulas are empty is "not satisfiable".[3] In this case, the meaning of the sequent is effectively ' ⊤ ⊢ ⊥ '. This is equivalent to the sequent ' ⊢ ⊥ ', which clearly cannot be valid.
A sequent of the form ' ⊢ α, β ', for logical formulas α and β, means that either α is true or β is true (or both). But it does not mean that either α is a tautology or β is a tautology. To clarify this, consider the example ' ⊢ B ∨ A, C ∨ ¬A '. This is a valid sequent because either B ∨ A is true or C ∨ ¬A is true. But neither of these expressions is a tautology in isolation. It is the disjunction of these two expressions which is a tautology.
Similarly, a sequent of the form ' α, β ⊢ ', for logical formulas α and β, means that either α is false or β is false. But it does not mean that either α is a contradiction or β is a contradiction. To clarify this, consider the example ' B ∧ A, C ∧ ¬A ⊢ '. This is a valid sequent because either B ∧ A is false or C ∧ ¬A is false. But neither of these expressions is a contradiction in isolation. It is the conjunction of these two expressions which is a contradiction.
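Both of these examples can be verified mechanically under the truth-functional reading; the sketch below uses the same illustrative brute-force encoding, with tautology and contradiction as ad hoc helpers:

    from itertools import product

    def valid(ants, cons, atoms):
        return all((not all(a(v) for a in ants)) or any(c(v) for c in cons)
                   for v in (dict(zip(atoms, bits))
                             for bits in product([False, True], repeat=len(atoms))))

    def tautology(f, atoms):      # f is true under every assignment
        return valid([], [f], atoms)

    def contradiction(f, atoms):  # f is false under every assignment
        return valid([f], [], atoms)

    atoms = ["A", "B", "C"]
    B_or_A    = lambda v: v["B"] or v["A"]
    C_or_notA = lambda v: v["C"] or not v["A"]
    print(valid([], [B_or_A, C_or_notA], atoms))                  # True: ' |- B∨A, C∨¬A ' is valid
    print(tautology(B_or_A, atoms), tautology(C_or_notA, atoms))  # False False: neither disjunct is a tautology

    B_and_A    = lambda v: v["B"] and v["A"]
    C_and_notA = lambda v: v["C"] and not v["A"]
    print(valid([B_and_A, C_and_notA], [], atoms))                          # True: ' B∧A, C∧¬A |- ' is valid
    print(contradiction(B_and_A, atoms), contradiction(C_and_notA, atoms))  # False False: neither conjunct is a contradiction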
Most proof systems provide ways to deduce one sequent from another. These inference rules are written with a list of sequents above and below a horizontal line. Such a rule indicates that if every sequent above the line holds, then so does every sequent below the line.
A typical rule is:
Γ, α ⊢ Σ        Γ ⊢ α
---------------------
        Γ ⊢ Σ

This indicates that if we can deduce that Γ, α yields Σ, and that Γ yields α, then Γ also yields Σ.
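The rule shown above is a form of the cut rule of sequent calculus. Its soundness on a concrete instance can be spot-checked semantically with the same illustrative brute-force encoding (formulas as Python predicates; the choice of Gamma, alpha and Sigma below is just an example):

    from itertools import product

    def valid(ants, cons, atoms):
        return all((not all(a(v) for a in ants)) or any(c(v) for c in cons)
                   for v in (dict(zip(atoms, bits))
                             for bits in product([False, True], repeat=len(atoms))))

    atoms = ["P", "Q", "R"]
    Gamma = [lambda v: v["P"], lambda v: (not v["P"]) or v["Q"]]   # Γ = { P, P → Q }
    alpha = lambda v: v["Q"]                                       # α = Q
    Sigma = [lambda v: v["Q"] or v["R"]]                           # Σ = { Q ∨ R }

    print(valid(Gamma + [alpha], Sigma, atoms))  # True: Γ, α ⊢ Σ   (first premise)
    print(valid(Gamma, [alpha], atoms))          # True: Γ ⊢ α      (second premise)
    print(valid(Gamma, Sigma, atoms))            # True: Γ ⊢ Σ      (conclusion, with α "cut" out)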
The assertion symbol in sequents originally meant exactly the same as the implication operator. But over time, its meaning has changed to signify provability within a theory rather than semantic truth in all models.
In 1934, Gentzen did not define the assertion symbol ' ⊢ ' in a sequent to signify provability. He defined it to mean exactly the same as the implication operator ' ⇒ '. Using ' → ' instead of ' ⊢ ' and ' ⊃ ' instead of ' ⇒ ', he wrote: "The sequent A1, ..., Aμ → B1, ..., Bν signifies, as regards content, exactly the same as the formula (A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν)".[4] (Gentzen employed the right-arrow symbol between the antecedents and consequents of sequents. He employed the symbol ' ⊃ ' for the logical implication operator.)
In 1939, Hilbert and Bernays stated likewise that a sequent has the same meaning as the corresponding implication formula.[5]
In 1944, Alonzo Church emphasized that Gentzen's sequent assertions did not signify provability.
"Employment of the deduction theorem as primitive or derived rule must not, however, be confused with the use of Sequenzen by Gentzen. For Gentzen's arrow, →, is not comparable to our syntactical notation, ⊢, but belongs to his object language (as is clear from the fact that expressions containing it appear as premisses and conclusions in applications of his rules of inference)."[6]
Numerous publications after this time have stated that the assertion symbol in sequents does signify provability within the theory where the sequents are formulated. Curry in 1963, Lemmon in 1965, and Huth and Ryan in 2004 all state that the sequent assertion symbol signifies provability. However, Ben-Ari states that the assertion symbol in Gentzen-system sequents, which he denotes as ' ⇒ ', is part of the object language, not the metalanguage.[7]
According to Prawitz (1965): "The calculi of sequents can be understood as meta-calculi for the deducibility relation in the corresponding systems of natural deduction."[8] And furthermore: "A proof in a calculus of sequents can be looked upon as an instruction on how to construct a corresponding natural deduction."[9] In other words, the assertion symbol is part of the object language for the sequent calculus, which is a kind of meta-calculus, but simultaneously signifies deducibility in an underlying natural deduction system.
A sequent is a formalized statement of provability that is frequently used when specifying calculi for deduction. In the sequent calculus, the name sequent is used for the construct, which can be regarded as a specific kind of judgment characteristic of this deduction system.
The intuitive meaning of the sequent

Γ ⊢ Σ

is that under the assumption of Γ the conclusion Σ is provable. Classically, the formulas on the left of the turnstile are read conjunctively while the formulas on the right are read disjunctively: whenever every formula in Γ holds, at least one formula in Σ holds as well. A sequent of the form ' Γ ⊢ ', with an empty succedent, therefore means that Γ is inconsistent, while ' ⊢ Σ ', with an empty antecedent, means that the disjunction of Σ holds without any assumptions.
Of course, other intuitive explanations are possible, which are classically equivalent. For example, ' Γ ⊢ Σ ' can be read as asserting that it cannot be the case that every formula in Γ is true and every formula in Σ is false.
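That these two readings coincide classically can be spot-checked with the same illustrative truth-table encoding (the helper names reading_1 and reading_2 are ad hoc):

    from itertools import product

    def assignments(atoms):
        for bits in product([False, True], repeat=len(atoms)):
            yield dict(zip(atoms, bits))

    def reading_1(Gamma, Sigma, atoms):
        # whenever every formula in Γ holds, at least one formula in Σ holds
        return all((not all(g(v) for g in Gamma)) or any(s(v) for s in Sigma)
                   for v in assignments(atoms))

    def reading_2(Gamma, Sigma, atoms):
        # no assignment makes every formula in Γ true and every formula in Σ false
        return not any(all(g(v) for g in Gamma) and all(not s(v) for s in Sigma)
                       for v in assignments(atoms))

    atoms = ["P", "Q"]
    examples = [
        ([lambda v: v["P"]], [lambda v: v["Q"], lambda v: not v["P"]]),  # P ⊢ Q, ¬P (invalid)
        ([lambda v: v["P"]], [lambda v: v["P"] or v["Q"]]),              # P ⊢ P ∨ Q (valid)
    ]
    for Gamma, Sigma in examples:
        print(reading_1(Gamma, Sigma, atoms) == reading_2(Gamma, Sigma, atoms))  # True, True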
In any case, these intuitive readings are only pedagogical. Since formal proofs in proof theory are purely syntactic, the meaning of (the derivation of) a sequent is only given by the properties of the calculus that provides the actual rules of inference.
Barring any contradictions in the technically precise definition above, we can describe sequents in their introductory logical form: Γ represents the set of assumptions that we begin our reasoning with, Σ represents a conclusion that follows from those assumptions, and the turnstile ' ⊢ ' stands for the reasoning step itself, read roughly as "therefore".
The general notion of sequent introduced here can be specialized in various ways. A sequent is said to be an intuitionistic sequent if there is at most one formula in the succedent (although multi-succedent calculi for intuitionistic logic are also possible). More precisely, the restriction of the general sequent calculus to single-succedent-formula sequents, with the same inference rules as for general sequents, constitutes an intuitionistic sequent calculus. (This restricted sequent calculus is denoted LJ.)
Similarly, one can obtain calculi for dual-intuitionistic logic (a type of paraconsistent logic) by requiring that sequents be singular in the antecedent.
In many cases, sequents are also assumed to consist of multisets or sets instead of sequences. Thus one disregards the order or even the numbers of occurrences of the formulae. For classical propositional logic this does not yield a problem, since the conclusions that one can draw from a collection of premises do not depend on these data. In substructural logic, however, this may become quite important.
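For classical propositional logic this insensitivity to order and multiplicity is easy to spot-check under the truth-functional reading (same illustrative brute-force encoding as above):

    from itertools import product

    def valid(ants, cons, atoms):
        return all((not all(a(v) for a in ants)) or any(c(v) for c in cons)
                   for v in (dict(zip(atoms, bits))
                             for bits in product([False, True], repeat=len(atoms))))

    atoms = ["P", "Q"]
    P = lambda v: v["P"]
    P_implies_Q = lambda v: (not v["P"]) or v["Q"]
    Q = lambda v: v["Q"]

    print(valid([P, P_implies_Q], [Q], atoms))     # True
    print(valid([P_implies_Q, P], [Q], atoms))     # True: order of antecedents is irrelevant classically
    print(valid([P, P, P_implies_Q], [Q], atoms))  # True: duplicating a formula changes nothing classically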
Natural deduction systems use single-consequent conditional assertions, but they typically do not use the same sets of inference rules that Gentzen introduced in 1934. In particular, tabular natural deduction systems, which are very convenient for practical theorem-proving in propositional calculus and predicate calculus, were applied by Suppes and Lemmon for teaching introductory logic in textbooks.
Historically, sequents were introduced by Gerhard Gentzen in order to specify his famous sequent calculus.[10] In his German publication he used the word "Sequenz". However, in English, the word "sequence" is already used as a translation of the German "Folge" and appears quite frequently in mathematics. The term "sequent" was then coined as an alternative translation of the German expression.
Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."
2.4. The sequent A1, ..., Aμ → B1, ..., Bν signifies, as regards content, exactly the same as the formula
(A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν).
For the contentual interpretation, a sequent
A1, ..., Ar → B1, ..., Bs,
in which the numbers r and s differ from zero, is equivalent in meaning to the implication
(A1 & ... & Ar) → (B1 ∨ ... ∨ Bs)
"Intuitively, a sequent represents 'provable from' in the sense that the formulas in U are assumptions for the set of formulas V that are to be proved. The symbol ⇒ is similar to the symbol ⊢ in Hilbert systems, except that ⇒ is part of the object language of the deductive system being formalized, while ⊢ is a metalanguage notation used to reason about deductive systems."