A formal grammar describes which strings over a formal language's alphabet are valid according to the language's syntax. A grammar does not describe the meaning of the strings, or what can be done with them in any context; it describes only their form. A formal grammar is defined as a set of production rules for such strings in a formal language.
Formal language theory, the discipline that studies formal grammars and languages, is a branch of applied mathematics. Its applications are found in theoretical computer science, theoretical linguistics, formal semantics, mathematical logic, and other areas.
A formal grammar is a set of rules for rewriting strings, along with a "start symbol" from which rewriting starts. Therefore, a grammar is usually thought of as a language generator. However, it can also sometimes be used as the basis for a "recognizer": a function in computing that determines whether a given string belongs to the language or is grammatically incorrect. To describe such recognizers, formal language theory uses separate formalisms, known as automata theory. One of the interesting results of automata theory is that it is not possible to design a recognizer for certain formal languages.[1] Parsing is the process of recognizing an utterance (a string in natural languages) by breaking it down into a sequence of symbols and analyzing each one against the grammar of the language. Most languages have the meanings of their utterances structured according to their syntax, a practice known as compositional semantics. As a result, the first step in describing the meaning of an utterance in language is to break it down part by part and look at its analyzed form (known as its parse tree in computer science, and as its deep structure in generative grammar).
A grammar mainly consists of a set of production rules, rewriting rules for transforming strings. Each rule specifies a replacement of a particular string (its left-hand side) with another (its right-hand side). A rule can be applied to each string that contains its left-hand side and produces a string in which an occurrence of that left-hand side has been replaced with its right-hand side.
Unlike a semi-Thue system, which is wholly defined by these rules, a grammar further distinguishes between two kinds of symbols: nonterminal and terminal symbols; each left-hand side must contain at least one nonterminal symbol. It also distinguishes a special nonterminal symbol, called the start symbol.
The language generated by the grammar is defined to be the set of all strings without any nonterminal symbols that can be generated from the string consisting of a single start symbol by (possibly repeated) application of its rules in whatever way possible. If there are essentially different ways of generating the same single string, the grammar is said to be ambiguous.
In the following examples, the terminal symbols are a and b, and the start symbol is S.
Suppose we have the following production rules:
1. S → aSb
2. S → ba
Then we start with S, and can choose a rule to apply to it. If we choose rule 1, we obtain the string aSb. If we then choose rule 1 again, we replace S with aSb and obtain the string aaSbb. If we now choose rule 2, we replace S with ba and obtain the string aababb, and are done. We can write this series of choices more briefly, using symbols:
S ⇒ aSb ⇒ aaSbb ⇒ aababb
The language of the grammar is the infinite set
\{a^n bab^n \mid n \ge 0\} = \{ba, abab, aababb, aaababbb, \ldots\},
where a^k denotes the symbol a repeated k times (and n in particular counts the number of times production rule 1 has been applied).
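As an illustration, here is a minimal Python sketch of the derivation above: at each step it rewrites the first occurrence of a rule's left-hand side with its right-hand side. The rule numbering follows the grammar; the helper names are our own.

# A minimal sketch of the derivation above. Each step rewrites the
# first occurrence of a rule's left-hand side with its right-hand side.
RULES = {1: ("S", "aSb"), 2: ("S", "ba")}

def apply_rule(s: str, n: int) -> str:
    lhs, rhs = RULES[n]
    return s.replace(lhs, rhs, 1)  # rewrite the first occurrence only

derivation = ["S"]
for choice in (1, 1, 2):           # the rule choices made in the text
    derivation.append(apply_rule(derivation[-1], choice))

print(" ⇒ ".join(derivation))      # S ⇒ aSb ⇒ aaSbb ⇒ aababb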
Suppose the rules are these instead:
1. S → a
2. S → SS
3. aSa → b
This grammar is not context-free due to rule 3, and it is ambiguous due to the multiple ways in which rule 2 can be used to generate sequences of S's. However, the language it generates is simply the set of all nonempty strings consisting of a's and/or b's. This is easy to see: to generate a b from an S, use rule 2 twice to generate SSS, then rule 1 twice and rule 3 once to produce b. This means we can generate arbitrary nonempty sequences of S's and then replace each of them by a or b as we please.
That same language can alternatively be generated by a context-free, nonambiguous grammar; for instance, the regular grammar with rules
1. S → aS
2. S → bS
3. S → a
4. S → b
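Since this regular grammar generates exactly the nonempty strings over {a, b}, it corresponds to the regular expression (a|b)+. A quick sketch of this check in Python:

import re

# The regular grammar above generates exactly the nonempty strings
# over {a, b}, i.e. the language of the regular expression (a|b)+.
NONEMPTY_AB = re.compile(r"(?:a|b)+\Z")

for s in ["a", "ba", "abab", ""]:
    print(repr(s), bool(NONEMPTY_AB.match(s)))
# 'a' True, 'ba' True, 'abab' True, '' False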
See main article: Unrestricted grammar.
In the classic formalization of generative grammars first proposed by Noam Chomsky in the 1950s,[2] [3] a grammar G consists of the following components:
1. A finite set N of nonterminal symbols, which is disjoint from the strings formed from G.
2. A finite set \Sigma of terminal symbols that is disjoint from N.
3. A finite set P of production rules, each of the form
(\Sigma \cup N)^* N (\Sigma \cup N)^* → (\Sigma \cup N)^*,
where ^* is the Kleene star operator and \cup denotes set union; that is, each rule maps one string of symbols to another, and the first string (the "head") must contain at least one nonterminal. If the second string (the "body") consists solely of the empty string, it may be denoted with a special notation (often \Lambda or \epsilon) to avoid confusion.
4. A distinguished symbol S \in N that is the start symbol, also called the sentence symbol.
A grammar is formally defined as the tuple (N, \Sigma, P, S). Such a formal grammar is often called a rewriting system or a phrase structure grammar in the literature.
The operation of a grammar can be defined in terms of relations on strings:
Given a grammar G = (N, \Sigma, P, S), the binary relation ⇒_G (pronounced as "G derives in one step") on strings in (\Sigma \cup N)^* is defined by:
x ⇒_G y \iff \exists u, v, p, q \in (\Sigma \cup N)^* : (x = upv) \wedge (p → q \in P) \wedge (y = uqv)
The relation ⇒_G^* (pronounced as "G derives in zero or more steps") is defined as the reflexive transitive closure of ⇒_G. A sentential form is a member of (\Sigma \cup N)^* that can be derived in a finite number of steps from the start symbol S; that is, a sentential form is a member of \left\{ w \in (\Sigma \cup N)^* \mid S ⇒_G^* w \right\}. A sentential form that contains no nonterminal symbols (i.e. is a member of \Sigma^*) is called a sentence.
The language of G, denoted as L(G), is defined as the set of all sentences of G.
The grammar G = (N, \Sigma, P, S) is effectively the semi-Thue system (N \cup \Sigma, P), rewriting strings in exactly the same way; the only difference is that we distinguish specific nonterminal symbols, which must be rewritten in rewrite rules, and are interested only in rewritings from the designated start symbol S to strings without nonterminal symbols.
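The one-step relation is easy to realize directly. The following Python sketch (the helper name is our own, not from any standard library) checks whether x ⇒_G y holds for a given list of productions:

# A minimal sketch of the one-step derivation relation x ⇒_G y:
# x derives y iff x = upv, p → q is a production, and y = uqv.
def derives_in_one_step(x: str, y: str, productions) -> bool:
    for p, q in productions:
        i = x.find(p)
        while i != -1:                   # try every occurrence of the head p
            if x[:i] + q + x[i + len(p):] == y:
                return True
            i = x.find(p, i + 1)
    return False

P = [("S", "aSb"), ("S", "ba")]          # the grammar from the first example
assert derives_in_one_step("aSb", "aaSbb", P)
assert derives_in_one_step("aaSbb", "aababb", P)
assert not derives_in_one_step("aSb", "aababb", P)   # needs two steps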
For these examples, formal languages are specified using set-builder notation.
Consider the grammar G, where N = \left\{S, B\right\}, \Sigma = \left\{a, b, c\right\}, S is the start symbol, and P consists of the following production rules:
1. S → aBSc
2. S → abc
3. Ba → aB
4. Bb → bb
This grammar defines the language L(G) = \left\{ a^n b^n c^n \mid n \ge 1 \right\}, where a^n denotes a string of n consecutive a's. Thus, the language is the set of strings that consist of one or more a's, followed by the same number of b's, followed by the same number of c's.
Some examples of the derivation of strings in L(G) are:
S ⇒_2 abc
S ⇒_1 aBSc ⇒_2 aBabcc ⇒_3 aaBbcc ⇒_4 aabbcc
S ⇒_1 aBSc ⇒_1 aBaBScc ⇒_2 aBaBabccc ⇒_3 aaBBabccc ⇒_3 aaBaBbccc ⇒_3 aaaBBbccc ⇒_4 aaaBbbccc ⇒_4 aaabbbccc
(On notation: P ⇒_i Q means "string P generates string Q by means of production rule i".)
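To see the definition in action, here is a minimal Python sketch (the helper names are our own) that enumerates the short sentences of this grammar by breadth-first application of its rules. Since none of the four rules shortens a string, bounding the length keeps the search finite and complete:

from collections import deque

# Breadth-first enumeration of sentences of the example grammar.
# No rule shortens a string, so a length bound keeps the search complete.
RULES = [("S", "aBSc"), ("S", "abc"), ("Ba", "aB"), ("Bb", "bb")]
NONTERMINALS = set("SB")

def sentences(start="S", max_len=9):
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:                        # rewrite every occurrence
                t = s[:i] + rhs + s[i + len(lhs):]
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    queue.append(t)
                i = s.find(lhs, i + 1)
    # keep only strings of terminals, i.e. the sentences
    return sorted((t for t in seen if not NONTERMINALS & set(t)), key=len)

print(sentences())   # ['abc', 'aabbcc', 'aaabbbccc']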
When Noam Chomsky first formalized generative grammars in 1956,[2] he classified them into types now known as the Chomsky hierarchy. The difference between these types is that they have increasingly strict production rules and can therefore express fewer formal languages. Two important types are context-free grammars (Type 2) and regular grammars (Type 3). The languages that can be described with such a grammar are called context-free languages and regular languages, respectively. Although much less powerful than unrestricted grammars (Type 0), which can in fact express any language that can be accepted by a Turing machine, these two restricted types of grammars are most often used because parsers for them can be efficiently implemented.[7] For example, all regular languages can be recognized by a finite-state machine, and for useful subsets of context-free grammars there are well-known algorithms to generate efficient LL parsers and LR parsers to recognize the corresponding languages those grammars generate.
A context-free grammar is a grammar in which the left-hand side of each production rule consists of only a single nonterminal symbol. This restriction is non-trivial; not all languages can be generated by context-free grammars. Those that can are called context-free languages.
The language L(G) = \left\{ a^n b^n c^n \mid n \ge 1 \right\} defined above is not a context-free language (this can be strictly proven with the pumping lemma for context-free languages), but, for example, the language \left\{ a^n b^n \mid n \ge 1 \right\} (at least one a followed by the same number of b's) is context-free, as it can be defined by the grammar G_2 with N = \left\{S\right\}, \Sigma = \left\{a, b\right\}, start symbol S, and the following production rules:
1. S → aSb
2. S → ab
A context-free language can be recognized in O(n^3) time (see Big O notation) by an algorithm such as Earley's algorithm. That is, for every context-free language, a machine can be built that takes a string as input and determines in O(n^3) time whether the string is a member of the language, where n is the length of the string.
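Another cubic-time recognizer is the CYK algorithm. Below is a minimal Python sketch for G_2, whose rules we have converted by hand to Chomsky normal form (the nonterminal names X, A, B are our own additions, not part of the grammar above):

from itertools import product

# G_2 (S → aSb | ab) in Chomsky normal form:
#   S → A X | A B,  X → S B,  A → a,  B → b
TERMINAL_RULES = {"a": {"A"}, "b": {"B"}}
BINARY_RULES = {("A", "X"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"X"}}

def cyk(s: str, start: str = "S") -> bool:
    n = len(s)
    if n == 0:
        return False
    # table[(i, j)] holds the nonterminals deriving the substring s[i:j]
    table = {(i, i + 1): set(TERMINAL_RULES.get(c, ())) for i, c in enumerate(s)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cell = set()
            for k in range(i + 1, j):   # split point: three nested loops → O(n^3)
                for pair in product(table[(i, k)], table[(k, j)]):
                    cell |= BINARY_RULES.get(pair, set())
            table[(i, j)] = cell
    return start in table[(0, n)]

print(cyk("aabb"), cyk("ab"), cyk("abab"))   # True True False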
In regular grammars, the left-hand side is again only a single nonterminal symbol, but now the right-hand side is also restricted. The right side may be the empty string, or a single terminal symbol, or a single terminal symbol followed by a nonterminal symbol, but nothing else. (Sometimes a broader definition is used: one can allow longer strings of terminals or single nonterminals without anything else, making languages easier to denote while still defining the same class of languages.)
The language \left\{ a^n b^n \mid n \ge 1 \right\} defined above is not regular, but the language \left\{ a^n b^m \mid m, n \ge 1 \right\} (at least one a followed by at least one b, where the numbers of a's and b's may differ) is, as it can be defined by the grammar G_3 with N = \left\{S, A, B\right\}, \Sigma = \left\{a, b\right\}, start symbol S, and the following production rules:
S → aA
A → aA
A → bB
B → bB
B → \epsilon
All languages generated by a regular grammar can be recognized in O(n) time by a finite-state machine, where n is the length of the input string.
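As a concrete sketch, here is a hand-built deterministic finite-state machine for L(G_3), i.e. one or more a's followed by one or more b's, in Python (the state names are our own choice; any DFA for the same language would do):

# Transition table of a DFA for "one or more a's, then one or more b's".
DFA = {
    ("start", "a"): "seen_a",
    ("seen_a", "a"): "seen_a",
    ("seen_a", "b"): "seen_b",
    ("seen_b", "b"): "seen_b",
}
ACCEPTING = {"seen_b"}

def accepts(s: str) -> bool:
    state = "start"
    for ch in s:                        # one table lookup per symbol: O(n)
        state = DFA.get((state, ch))    # a missing entry acts as a dead state
        if state is None:
            return False
    return state in ACCEPTING

print(accepts("aabbb"), accepts("ab"), accepts("ba"), accepts(""))
# True True False False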
Many extensions and variations on Chomsky's original hierarchy of formal grammars have been developed, both by linguists and by computer scientists, usually either in order to increase their expressive power or in order to make them easier to analyze or parse. Some forms of grammars developed include:
A recursive grammar is a grammar that contains production rules that are recursive. For example, a grammar for a context-free language is left-recursive if there exists a nonterminal symbol A that can be put through the production rules to produce a string with A as the leftmost symbol.[14] An example of recursion in natural language is a clause nested within a sentence, set off by two commas.[15] All types of grammars in the Chomsky hierarchy can be recursive.
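For instance (a standard textbook illustration, not taken from the sources above), the context-free rule
Expr → Expr + Term
is left-recursive: the nonterminal Expr is the leftmost symbol of its own right-hand side, so repeated application yields strings such as Term + Term + Term. Naive top-down parsers can loop forever on such rules, which is one reason the form of production rules matters when building parsers.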
Though there is a tremendous body of literature on parsing algorithms, most of these algorithms assume that the language to be parsed is initially described by means of a generative formal grammar, and that the goal is to transform this generative grammar into a working parser. Strictly speaking, a generative grammar does not in any way correspond to the algorithm used to parse a language, and various algorithms have different restrictions on the form of production rules that are considered well-formed.
An alternative approach is to formalize the language in terms of an analytic grammar in the first place, which more directly corresponds to the structure and semantics of a parser for the language. Examples of analytic grammar formalisms include the following: