In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic.[1] Second-order logic is in turn extended by higher-order logic and type theory.
For example, the second-order sentence \forall P\,\forall x\,(Px \lor \neg Px) quantifies over a predicate variable P as well as an individual variable x; it says that every property either holds or fails to hold of each individual.
First-order logic can quantify over individuals, but not over properties. That is, we can take an atomic sentence like Cube(b) and obtain a quantified sentence by replacing the name with a variable and attaching a quantifier:[2]
\exists x\, Cube(x)
However, we cannot do the same with the predicate. That is, the following expression,
\exists P\, P(b),
is not a sentence of first-order logic, but it is a legitimate sentence of second-order logic. Here, P is a predicate variable and is semantically a set of individuals.[2]
As a result, second-order logic has greater expressive power than first-order logic. For example, there is no way in first-order logic to identify the set of all cubes and tetrahedrons. But the existence of this set can be asserted in second-order logic as:
\exists P\,\forall x\,(Px \leftrightarrow (Cube(x) \lor Tet(x)))
We can then assert properties of this set. For instance, the following says that the set of all cubes and tetrahedrons does not contain any dodecahedrons:
\forall P\,((\forall x\,(Px \leftrightarrow (Cube(x) \lor Tet(x)))) \to \neg\exists x\,(Px \land Dodec(x)))
Second-order quantification is especially useful because it gives the ability to express reachability properties. For example, if Parent(x, y) denotes that x is a parent of y, then first-order logic cannot express the property that x is an ancestor of y. In second-order logic we can express this by saying that every set of people containing y and closed under the Parent relation contains x:
\forall P\,((P(y) \land \forall a\,\forall b\,((P(b) \land Parent(a,b)) \to P(a))) \to P(x))
It is notable that while we have variables for predicates in second-order logic, we do not have variables for properties of predicates. We cannot say, for example, that there is a property Shape(P) that is true of the predicates Cube, Tet, and Dodec. This would require third-order logic.
The syntax of second-order logic tells which expressions are well-formed formulas. In addition to the syntax of first-order logic, second-order logic includes several new sorts (sometimes called types) of variables: for each natural number k, a sort of variables that range over all k-ary relations on the individuals (set variables being the unary case), and a sort of variables that range over all k-ary functions from the domain to itself.
Each of the variables just defined may be universally and/or existentially quantified over, to build up formulas. Thus there are many kinds of quantifiers, two for each sort of variables. A sentence in second-order logic, as in first-order logic, is a well-formed formula with no free variables (of any sort).
It is possible to forgo the introduction of function variables in the definition given above (and some authors do this), because an n-ary function variable can be represented by a relation variable of arity n+1 together with an appropriate formula asserting the uniqueness of the "result" in the (n+1)-th argument of the relation (Shapiro 2000, p. 63).
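For instance (an illustration of this remark, with notation chosen here rather than taken from the text above), a unary function variable f can be replaced by a binary relation variable R that is required to be functional, and an atomic formula \phi(f(t)) can then be rewritten using R:
\forall x\,\exists y\,(R(x,y) \land \forall z\,(R(x,z) \to z = y)), \qquad \exists y\,(R(t,y) \land \phi(y)).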
Monadic second-order logic (MSO) is a restriction of second-order logic in which only quantification over unary relations (i.e. sets) is allowed. Quantification over functions, owing to the equivalence to relations as described above, is thus also not allowed. The second-order logic without these restrictions is sometimes called full second-order logic to distinguish it from the monadic version. Monadic second-order logic is particularly used in the context of Courcelle's theorem, an algorithmic meta-theorem in graph theory. The MSO theory of the complete infinite binary tree (S2S) is decidable. By contrast, full second-order logic over any infinite set (or MSO logic over, for example, (\mathbb{N}, +)) is undecidable.
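As an illustration of MSO on graphs (an example added here for concreteness, assuming a binary edge relation E that is not part of the discussion above), 3-colorability can be expressed by quantifying over three sets of vertices:
\exists R\,\exists G\,\exists B\,(\forall v\,(R(v) \lor G(v) \lor B(v)) \land \forall u\,\forall v\,(E(u,v) \to \neg((R(u) \land R(v)) \lor (G(u) \land G(v)) \lor (B(u) \land B(v))))).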
Just as in first-order logic, second-order logic may include non-logical symbols in a particular second-order language. These are restricted, however, in that all terms that they form must be either first-order terms (which can be substituted for a first-order variable) or second-order terms (which can be substituted for a second-order variable of an appropriate sort).
A formula in second-order logic is said to be of first order (and sometimes denoted \Sigma^1_0 or \Pi^1_0) if its quantifiers (which may be universal or existential) range only over variables of first order, although it may have free variables of second order. A \Sigma^1_1 (existential second-order) formula is one additionally having some existential quantifiers over second-order variables, i.e. \exists R_0 \ldots \exists R_m\, \phi, where \phi is a first-order formula. The fragment of second-order logic consisting only of existential second-order formulas is called existential second-order logic and abbreviated as ESO, as \Sigma^1_1, or even as \exists SO. The fragment of \Pi^1_1 formulas is defined dually; it is called universal second-order logic. More expressive fragments are defined for any k > 0 by mutual recursion: a \Sigma^1_{k+1} formula has the form \exists R_0 \ldots \exists R_m\, \phi, where \phi is a \Pi^1_k formula, and similarly, a \Pi^1_{k+1} formula has the form \forall R_0 \ldots \forall R_m\, \phi, where \phi is a \Sigma^1_k formula.
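For example (an illustration added here, not drawn from the classification above), the statement that a binary relation < is well-founded, i.e. that every nonempty set has a <-minimal element, is naturally a \Pi^1_1 sentence:
\forall X\,(\exists y\,X(y) \to \exists y\,(X(y) \land \neg\exists z\,(X(z) \land z < y))).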
The semantics of second-order logic establish the meaning of each sentence. Unlike first-order logic, which has only one standard semantics, there are two different semantics that are commonly used for second-order logic: standard semantics and Henkin semantics. In each of these semantics, the interpretations of the first-order quantifiers and the logical connectives are the same as in first-order logic. Only the ranges of quantifiers over second-order variables differ in the two types of semantics (Väänänen 2001).
In standard semantics, also called full semantics, the quantifiers range over all sets or functions of the appropriate sort. A model with this condition is called a full model, and these are the same as models in which the range of the second-order quantifiers is the powerset of the model's first-order part (Väänänen 2001). Thus once the domain of the first-order variables is established, the meaning of the remaining quantifiers is fixed. It is these semantics that give second-order logic its expressive power, and they will be assumed for the remainder of this article.
Leon Henkin (1950) defined an alternative kind of semantics for second-order and higher-order theories, in which the meaning of the higher-order domains is partly determined by an explicit axiomatisation, drawing on type theory, of the properties of the sets or functions ranged over. Henkin semantics is a kind of many-sorted first-order semantics, where there is a class of models of the axioms, instead of the semantics being fixed to just the standard model as in the standard semantics. A model in Henkin semantics will provide a set of sets or set of functions as the interpretation of higher-order domains, which may be a proper subset of all sets or functions of that sort. For his axiomatisation, Henkin proved that Gödel's completeness theorem and compactness theorem, which hold for first-order logic, carry over to second-order logic with Henkin semantics. Since the Löwenheim–Skolem theorems also hold for Henkin semantics, Lindström's theorem implies that Henkin models are just disguised first-order models.[3]
For theories such as second-order arithmetic, the existence of non-standard interpretations of higher-order domains isn't just a deficiency of the particular axiomatisation derived from type theory that Henkin used, but a necessary consequence of Gödel's incompleteness theorem: Henkin's axioms can't be supplemented further to ensure the standard interpretation is the only possible model. Henkin semantics are commonly used in the study of second-order arithmetic.
Jouko Väänänen (2001) argued that the distinction between Henkin semantics and full semantics for second-order logic is analogous to the distinction between provability in ZFC and truth in V, in that the former obeys model-theoretic properties like the Löwenheim–Skolem theorem and compactness, and the latter has categoricity phenomena. For example, on this view we cannot meaningfully ask whether the V described by the ZFC axioms is the "real" V.
Second-order logic is more expressive than first-order logic. For example, if the domain is the set of all real numbers, one can assert in first-order logic the existence of an additive inverse of each real number by writing ∀x ∃y (x + y = 0), but one needs second-order logic to assert the least-upper-bound property for sets of real numbers, which states that every bounded, nonempty set of real numbers has a supremum. If the domain is the set of all real numbers, the following second-order sentence (split over two lines) expresses the least upper bound property:
\forall A\,((\exists w\,(w \in A) \land \exists z\,\forall u\,(u \in A \to u \le z))
\to \exists x\,\forall y\,(\forall w\,(w \in A \to w \le y) \leftrightarrow x \le y))
This formula is a direct formalization of "every nonempty, bounded set A has a least upper bound." It can be shown that any ordered field that satisfies this property is isomorphic to the real number field. On the other hand, the set of first-order sentences valid in the reals has arbitrarily large models due to the compactness theorem. Thus the least-upper-bound property cannot be expressed by any set of sentences in first-order logic. (In fact, every real-closed field satisfies the same first-order sentences in the signature \langle +, \cdot, \le \rangle as the real numbers.)
In second-order logic, it is possible to write formal sentences that say "the domain is finite" or "the domain is of countable cardinality." To say that the domain is finite, use the sentence that says that every surjective function from the domain to itself is injective. To say that the domain has countable cardinality, use the sentence that says that there is a bijection between every two infinite subsets of the domain. It follows from the compactness theorem and the upward Löwenheim–Skolem theorem that it is not possible to characterize finiteness or countability, respectively, in first-order logic.
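For instance (a formalization added here of the finiteness sentence just described, using a function variable f), the statement that every surjective function from the domain to itself is injective can be written as:
\forall f\,(\forall y\,\exists x\,(f(x) = y) \to \forall x_1\,\forall x_2\,(f(x_1) = f(x_2) \to x_1 = x_2)).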
Certain fragments of second-order logic like ESO are also more expressive than first-order logic even though they are strictly less expressive than the full second-order logic. ESO also enjoys translation equivalence with some extensions of first-order logic that allow non-linear ordering of quantifier dependencies, like first-order logic extended with Henkin quantifiers, Hintikka and Sandu's independence-friendly logic, and Väänänen's dependence logic.
A deductive system for a logic is a set of inference rules and logical axioms that determine which sequences of formulas constitute valid proofs. Several deductive systems can be used for second-order logic, although none can be complete for the standard semantics (see below). Each of these systems is sound, which means any sentence they can be used to prove is logically valid in the appropriate semantics.
The weakest deductive system that can be used consists of a standard deductive system for first-order logic (such as natural deduction) augmented with substitution rules for second-order terms.[4] This deductive system is commonly used in the study of second-order arithmetic.
The deductive systems considered by Shapiro (1991) and Henkin (1950) add to the augmented first-order deductive scheme both comprehension axioms and choice axioms. These axioms are sound for standard second-order semantics. They are sound for Henkin semantics restricted to Henkin models satisfying the comprehension and choice axioms.[5]
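For illustration (a standard schema stated here for concreteness rather than quoted from Shapiro or Henkin), the comprehension axioms are instances of the schema
\exists P\,\forall x\,(P(x) \leftrightarrow \phi(x)),
one for each formula \phi in which the relation variable P does not occur free; they assert that every definable collection of individuals is the extension of some predicate variable.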
One might attempt to reduce the second-order theory of the real numbers, with full second-order semantics, to the first-order theory in the following way. First expand the domain from the set of all real numbers to a two-sorted domain, with the second sort containing all sets of real numbers. Add a new binary predicate to the language: the membership relation. Then sentences that were second-order become first-order, with the formerly second-order quantifiers ranging over the second sort instead. This reduction can be attempted in a one-sorted theory by adding unary predicates that tell whether an element is a number or a set, and taking the domain to be the union of the set of real numbers and the power set of the real numbers.
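As a sketch of the one-sorted variant of this translation (with the predicate names Set and Num chosen here for illustration), a second-order quantifier over sets of reals becomes a first-order quantifier relativized to the elements satisfying Set, first-order quantifiers are relativized to Num, and atomic formulas use the new membership predicate; for example, \forall A\,\phi(A) becomes
\forall a\,(Set(a) \to \phi^*(a)),
where \phi^* replaces each subformula w \in A by the membership atom w \in a.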
But notice that the domain was asserted to include all sets of real numbers. That requirement cannot be reduced to a first-order sentence, as the Löwenheim–Skolem theorem shows. That theorem implies that there is some countably infinite subset of the real numbers, whose members we will call internal numbers, and some countably infinite collection of sets of internal numbers, whose members we will call "internal sets", such that the domain consisting of internal numbers and internal sets satisfies exactly the same first-order sentences as are satisfied by the domain of real numbers and sets of real numbers. In particular, it satisfies a sort of least-upper-bound axiom that says, in effect:
Every nonempty internal set that has an internal upper bound has a least internal upper bound.
Countability of the set of all internal numbers (in conjunction with the fact that those form a densely ordered set) implies that that set does not satisfy the full least-upper-bound axiom. Countability of the set of all internal sets implies that it is not the set of all subsets of the set of all internal numbers (since Cantor's theorem implies that the set of all subsets of a countably infinite set is an uncountably infinite set). This construction is closely related to Skolem's paradox.
Thus the first-order theory of real numbers and sets of real numbers has many models, some of which are countable. The second-order theory of the real numbers has only one model, however. This follows from the classical theorem that there is only one Archimedean complete ordered field, along with the fact that all the axioms of an Archimedean complete ordered field are expressible in second-order logic. This shows that the second-order theory of the real numbers cannot be reduced to a first-order theory, in the sense that the second-order theory of the real numbers has only one model but the corresponding first-order theory has many models.
There are more extreme examples showing that second-order logic with standard semantics is more expressive than first-order logic. There is a finite second-order theory whose only model is the real numbers if the continuum hypothesis holds and that has no model if the continuum hypothesis does not hold (cf. Shapiro 2000, p. 105). This theory consists of a finite theory characterizing the real numbers as a complete Archimedean ordered field plus an axiom saying that the domain is of the first uncountable cardinality. This example illustrates that the question of whether a sentence in second-order logic is consistent is extremely subtle.
Additional limitations of second-order logic are described in the next section.
It is a corollary of Gödel's incompleteness theorem that there is no deductive system (that is, no notion of provability) for second-order formulas that simultaneously satisfies these three desired attributes:[6]
(Soundness) Every provable second-order sentence is universally valid, i.e., true in all domains under standard semantics.
(Completeness) Every universally valid second-order formula, under standard semantics, is provable.
(Effectiveness) There is a proof-checking algorithm that can correctly decide whether a given sequence of symbols is a proof or not.
This corollary is sometimes expressed by saying that second-order logic does not admit a complete proof theory. In this respect second-order logic with standard semantics differs from first-order logic; Quine (1970, pp. 90–91) pointed to the lack of a complete proof system as a reason for thinking of second-order logic as not logic, properly speaking.
As mentioned above, Henkin proved that the standard deductive system for first-order logic is sound, complete, and effective for second-order logic with Henkin semantics, and the deductive system with comprehension and choice principles is sound, complete, and effective for Henkin semantics using only models that satisfy these principles.
The compactness theorem and the Löwenheim–Skolem theorem do not hold for full models of second-order logic. They do hold however for Henkin models.[7]
Predicate logic was introduced to the mathematical community by C. S. Peirce, who coined the term second-order logic and whose notation is most similar to the modern form (Putnam 1982). However, today most students of logic are more familiar with the works of Frege, who published his work several years prior to Peirce but whose works remained less known until Bertrand Russell and Alfred North Whitehead made them famous. Frege used different variables to distinguish quantification over objects from quantification over properties and sets; but he did not see himself as doing two different kinds of logic. After the discovery of Russell's paradox it was realized that something was wrong with his system. Eventually logicians found that restricting Frege's logic in various ways—to what is now called first-order logic—eliminated this problem: sets and properties cannot be quantified over in first-order logic alone. The now-standard hierarchy of orders of logics dates from this time.
It was found that set theory could be formulated as an axiomatized system within the apparatus of first-order logic (at the cost of several kinds of completeness, but nothing so bad as Russell's paradox), and this was done (see Zermelo–Fraenkel set theory), as sets are vital for mathematics. Arithmetic, mereology, and a variety of other powerful logical theories could be formulated axiomatically without appeal to any more logical apparatus than first-order quantification, and this, along with Gödel and Skolem's adherence to first-order logic, led to a general decline in work in second (or any higher) order logic.
This rejection was actively advanced by some logicians, most notably W. V. Quine. Quine advanced the view that in predicate-language sentences like Fx the "x" is to be thought of as a variable or name denoting an object and hence can be quantified over, as in "For all things, it is the case that . . ." but the "F" is to be thought of as an abbreviation for an incomplete sentence, not the name of an object (not even of an abstract object like a property). For example, it might mean " . . . is a dog." But it makes no sense to think we can quantify over something like this. (Such a position is quite consistent with Frege's own arguments on the concept-object distinction). So to use a predicate as a variable is to have it occupy the place of a name, which only individual variables should occupy. This reasoning has been rejected by George Boolos.
In recent years second-order logic has made something of a recovery, buoyed by Boolos' interpretation of second-order quantification as plural quantification over the same domain of objects as first-order quantification (Boolos 1984). Boolos furthermore points to the claimed nonfirstorderizability of sentences such as "Some critics admire only each other" and "Some of Fianchetto's men went into the warehouse unaccompanied by anyone else", which he argues can only be expressed by the full force of second-order quantification. However, generalized quantification and partially ordered (or branching) quantification may suffice to express a certain class of purportedly nonfirstorderizable sentences as well and these do not appeal to second-order quantification.
The expressive power of various forms of second-order logic on finite structures is intimately tied to computational complexity theory. The field of descriptive complexity studies which computational complexity classes can be characterized by the power of the logic needed to express languages (sets of finite strings) in them. A string w = w_1···w_n in a finite alphabet A can be represented by a finite structure with domain D = {1, ..., n}, unary predicates P_a for each a ∈ A, satisfied by those indices i such that w_i = a, and additional predicates that serve to uniquely identify which index is which (typically, one takes the graph of the successor function on D or the order relation <, possibly with other arithmetic predicates). Conversely, the Cayley tables of any finite structure (over a finite signature) can be encoded by a finite string.
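For example (a small worked instance added for concreteness, not part of the original exposition), the string w = aba over the alphabet A = {a, b} corresponds to the structure with domain D = {1, 2, 3}, P_a = {1, 3}, P_b = {2}, and the usual order 1 < 2 < 3.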
This identification leads to the following characterizations of variants of second-order logic over finite structures:
REG (the set of regular languages) is the set of languages definable by monadic second-order formulas with successor (the Büchi–Elgot–Trakhtenbrot theorem).
NP is the set of languages definable by existential second-order formulas (Fagin's theorem).
co-NP is the set of languages definable by universal second-order formulas.
PH is the set of languages definable by second-order formulas.
PSPACE is the set of languages definable by second-order formulas with an added transitive closure operator.
EXPTIME is the set of languages definable by second-order formulas with an added least fixed point operator.
Relationships among these classes directly impact the relative expressiveness of the logics over finite structures; for example, if PH = PSPACE, then adding a transitive closure operator to second-order logic would not make it any more expressive over finite structures.