ID/LP Grammars are a subset of Phrase Structure Grammars, differentiated from other formal grammars by distinguishing between immediate dominance (ID) and linear precedence (LP) constraints. Whereas traditional phrase structure rules incorporate dominance and precedence into a single rule, ID/LP Grammars maintain separate rule sets which need not be processed simultaneously. ID/LP Grammars are used in Computational Linguistics.
For example, a typical phrase structure rule such as
S → NP VP
states both that S immediately dominates NP and VP and that NP precedes VP. In an ID/LP Grammar, this information is split into the ID rule S → NP, VP and the LP statement
NP ≺ VP
The idea first came to prominence as part of Generalized Phrase Structure Grammar;[1] [2] the ID/LP Grammar approach is also used in head-driven phrase structure grammar,[3] lexical functional grammar, and other unification grammars.
Current work in the Minimalist Program also attempts to distinguish between dominance and ordering. For instance, recent papers by Noam Chomsky have proposed that, while hierarchical structure is the result of the syntactic structure-building operation Merge, linear order is not determined by this operation, and is simply the result of externalization (oral pronunciation, or, in the case of sign language, manual signing).[4] [5] [6]
Immediate dominance is the asymmetrical relationship between the mother node of a parse tree and its daughters, where the mother node (to the left of the arrow) is said to immediately dominate the daughter nodes (those to the right of the arrow), but the daughters do not immediately dominate the mother. The daughter nodes are also dominated by any node that immediately dominates the mother node; however, this is not an immediate dominance relation.
For example, the context free rule
A → B C D
states that A immediately dominates the daughter nodes B, C, and D.
Linear precedence is the order relationship of sister nodes. LP constraints specify in what order sister nodes under the same mother can appear. Nodes that surface earlier in strings precede their sisters. LP can be shown in phrase structure rules of the form
A → B C D
in which B precedes C and C precedes D.
A rule that has ID constraints but not LP is written with commas between the daughter nodes, for example
A → B, C, D
Alternatively, these relationships can be expressed through linear precedence statements, such as
B ≺ C
The principle of transitivity applies to LP relations: if B ≺ C and C ≺ D, then B ≺ D.
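As an illustration, the two rule sets can be kept apart in a small Python sketch; the data structures, rule names, and helper function below are illustrative only and are not part of any particular formalism.

    ID_RULES = [
        ("S", {"NP", "VP"}),     # S immediately dominates NP and VP, in no particular order
        ("A", {"B", "C", "D"}),  # A -> B, C, D
    ]
    LP = {("NP", "VP"), ("B", "C"), ("C", "D"), ("B", "D")}  # pairs (X, Y) meaning X must precede Y

    def satisfies_lp(ordering, lp=LP):
        """True if no pair of sisters in this ordering violates an LP statement."""
        for i, left in enumerate(ordering):
            for right in ordering[i + 1:]:
                if (right, left) in lp:   # 'right' is required to precede 'left'
                    return False
        return True

    print(satisfies_lp(["B", "C", "D"]))  # True
    print(satisfies_lp(["D", "B", "C"]))  # False: B must precede D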
For a string to be grammatical in an ID/LP Grammar, each local subtree of its structure must be licensed by at least one ID rule and must satisfy all LP statements of the grammar. If every possible string generated by the grammar fits this criterion, then it is an ID/LP Grammar. Additionally, for a grammar to be able to be written in ID/LP format, it must have the property of Exhaustive Constant Partial Ordering (ECPO): the linear precedence relations observed among the daughters of one rule must be consistent with those observed in every other rule. For example, the set of rules:
(1) A → B C D
(2) B → D C A
does not have the ECPO property, because (1) says that C must always precede D, while (2) says that D must always precede C.
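A rough way to picture the ECPO test is to collect the precedence pairs implied by each ordered rule and check that no pair is required in both directions. The simplified Python sketch below, with illustrative names, applies that test to the two rules above; it is only an approximation of the ECPO property.

    def precedence_pairs(rhs):
        """All (earlier, later) pairs of sisters in one ordered right-hand side."""
        return {(a, b) for i, a in enumerate(rhs) for b in rhs[i + 1:]}

    def has_ecpo(rules):
        """Rough test: no pair of categories may be ordered both ways across rules."""
        seen = set()
        for rhs in rules:
            seen |= precedence_pairs(rhs)
        return not any((b, a) in seen for (a, b) in seen)

    print(has_ecpo([("B", "C", "D")]))                   # True
    print(has_ecpo([("B", "C", "D"), ("D", "C", "A")]))  # False: C before D vs. D before C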
Since LP statements apply regardless of the ID rule context, they allow us to make generalizations across the whole grammar. For example, the LP statement
V ≺ DP
requires a verb to precede a DP sister in whichever ID rule the two appear together, as in:
Lucy won the race.
Ava told Sara to read a book.
This can be generalized into a rule that holds across English,
X ≺ YP
that is, a lexical head precedes any phrasal sister.
Separating LP requirements from ID rules also accounts for the phenomenon of free word order in natural language. For example, in English it is possible to put an adverb before or after a verb and have both strings be grammatical.
John suddenly screamed.
John screamed suddenly.
A traditional PS rule approach would require two separate rules, but this can be described by the single ID/LP rule
VP → V, AdvP
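The effect of such a rule can be sketched in Python by enumerating the orderings of its daughters that violate no LP statement; the function below is illustrative, and with no LP statement mentioning V or AdvP it licenses both word orders. Applying the same expansion to every ID rule amounts to the conversion into an ordered Context Free Grammar discussed in the next paragraph.

    from itertools import permutations

    LP = set()   # assume no LP statement mentions V or AdvP

    def linearize(mother, daughters, lp=LP):
        """Every ordering of the daughters that violates no LP statement."""
        admissible = []
        for order in permutations(daughters):
            violates = any((b, a) in lp
                           for i, a in enumerate(order) for b in order[i + 1:])
            if not violates:
                admissible.append((mother, order))
        return admissible

    print(linearize("VP", ("V", "AdvP")))
    # [('VP', ('V', 'AdvP')), ('VP', ('AdvP', 'V'))]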
Two parsing algorithms used to parse ID/LP Grammars are the Earley Parser and Shieber's algorithm.[9]
ID and LP rules impose constraints on sentence strings; with long strings, the number of orderings these constraints allow can grow very large, which makes parsing difficult. The Earley Parser addresses this by altering[10] the format of an ID/LP Grammar into a Context Free Grammar (CFG), dividing the ID/LP Grammar into an ordered Context Free Grammar (CFG) and an Unordered Context Free Grammar (UCFG). This allows the two algorithms to parse strings more efficiently; in particular, the Earley Parser uses a point (dot) tracking method that follows the linear path established by the LP rules. In a CFG, the LP rules do not allow repeated constituents in the parsed string, whereas a UCFG allows repeated constituents within the parsed strings. If the ID/LP Grammar is converted to a UCFG, the LP rules no longer govern the parsing process, but the same point tracking method is still used.
After the ID/LP Grammar has been converted to an equivalent CFG, the algorithm analyzes the string. Let X be a category whose daughters are α, β, and ι. The parser tracks its progress through the rule with a dot:
(1) X → ⋅ α β ι (no daughter has been recognized yet)
(2) X → α ⋅ β ι (α has been recognized, and β is expected next)
(3) X → α β ι ⋅ (all of the daughters have been recognized)
The resulting items are collected into a parse list of item sets I₀ … Iₙ, which is used to determine whether a completed production (X → α β ι ⋅) covering the whole input has been found.
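This dot bookkeeping can be pictured with a small item structure; the Python sketch below is illustrative only, and its class and field names are not taken from the literature.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Item:
        lhs: str      # X
        rhs: tuple    # (alpha, beta, iota)
        dot: int      # how many daughters have been recognized so far
        start: int    # index of the item set where recognition began

        def next_symbol(self):
            return self.rhs[self.dot] if self.dot < len(self.rhs) else None

        def advance(self):
            return Item(self.lhs, self.rhs, self.dot + 1, self.start)

        def complete(self):
            return self.dot == len(self.rhs)

    item = Item("X", ("alpha", "beta", "iota"), 0, 0)  # corresponds to (1): X -> . alpha beta iota
    item = item.advance()                              # corresponds to (2): X -> alpha . beta iota
    print(item.next_symbol(), item.complete())         # beta False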
The UCFG is the equivalent form into which the ID/LP Grammar is converted when the Earley Parser is to run without a fixed daughter order. The algorithm reads strings much as it does for a CFG, but the order of elements is not enforced, so the LP rules are not applied. This allows some elements to be repeated within the parsed strings, and the UCFG accepts empty multisets as well as filled multisets within its items. For example:
(1) X → {} ⋅ {α, β, ι} (nothing has been recognized)
(2) X → {α} ⋅ {β, ι} (α has been recognized, and the multiset {β, ι} remains)
(3) X → {α, β, ι} ⋅ {} (every daughter has been recognized)
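The multiset items can be sketched with Python's Counter type; the helper name and the Greek-letter strings below are placeholders, not part of any published algorithm.

    from collections import Counter

    def advance(found, remaining, symbol):
        """Move one occurrence of `symbol` from the remaining multiset to the found one."""
        assert remaining[symbol] > 0
        found, remaining = Counter(found), Counter(remaining)
        found[symbol] += 1
        remaining[symbol] -= 1
        return found, +remaining   # unary + drops symbols whose count has reached zero

    found = Counter()                                      # corresponds to (1): {} . {alpha, beta, iota}
    remaining = Counter(["alpha", "beta", "iota"])
    found, remaining = advance(found, remaining, "alpha")  # corresponds to (2): {alpha} . {beta, iota}
    print(dict(found), dict(remaining))                    # {'alpha': 1} {'beta': 1, 'iota': 1}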
When parsing a string that contains a number of unordered elements, the Earley Parser treats the remaining elements as a set of permutations, Y! of them for Y elements, and generates each ordering as a separate parsed string instead of using one string to represent them all. Whenever the dot moves over one element, the algorithm generates parsed strings for the elements on the right edge in every possible order until there are no more elements in the right-edge set. Let X₀ represent the original string and X₁ the first parsed string; for example:
X₀: [X → {} ⋅ {t, v, w, z}, 0]
X₁: [X → {t} ⋅ {v, w, z}, 0]
String X₁ will produce 3! = 6 different parsed strings of the right-edge set:
[X → t ⋅ v w z, 0]
[X → t ⋅ v z w, 0]
[X → t ⋅ w v z, 0]
[X → t ⋅ w z v, 0]
[X → t ⋅ z v w, 0]
[X → t ⋅ z w v, 0]
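The 3! = 6 expansions can be reproduced in a few lines of Python; itertools.permutations plays the role of the permutation step described above, and the symbol names are taken from the example.

    from itertools import permutations

    right_edge = ("v", "w", "z")
    expansions = [("t",) + order for order in permutations(right_edge)]
    for rhs in expansions:
        print("X -> t .", " ".join(rhs[1:]))   # the dot sits after t, before the permuted right edge
    print(len(expansions))                     # 3! = 6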
The Earley Parser applies each of these strings to the individual rules of the grammar, and this results in very large sets. The large sets are partly a result of converting the ID/LP Grammar into an equivalent grammar; however, parsing an ID/LP Grammar is difficult to begin with.
Shieber's Algorithm is based on the Earley Parser for CFGs; however, it does not require the ID/LP Grammar to be converted into a different grammar in order to be parsed. ID rules are kept in a form of their own, such as S →ID …, separate from the LP statements, such as V ≺ S. Shieber compared parsing of a CFG to parsing of an ordered ID/LP Grammar string, and Barton compared parsing of a UCFG to parsing of an unordered ID/LP Grammar string.
Parsing an ID/LP Grammar directly generates a list of item sets that determines whether the string is accepted or rejected. The algorithm follows six steps (the symbols used below also stand for syntactic categories):
(1) For every ID rule S →ID β whose left-hand side is the start symbol S, add the item [S, T, β, 0] to the initial item set I₀ (T marks that no daughters have been recognized yet).
(2) If I₀ contains a completed item [Z, λ, ∅, 0] and an item [A, α, {Z} ∪ β, 0], and no element of β is required by an LP statement to precede Z and Z ∉ β, then add [A, αZ, β, 0] to I₀.
(3) If I₀ contains an item [A, α, {Z} ∪ β, 0] such that no element of β must precede Z and Z ∉ β, then for every ID rule Z →ID λ add [Z, T, λ, 0] to I₀.
(4) To build Iₙ from Iₙ₋₁: if [A, α, {q} ∪ β, i] is in Iₙ₋₁, q = qₙ (the n-th input symbol), no element of β must precede q, and q ∉ β, then add [A, αq, β, i] to Iₙ.
(5) If a completed item [Z, λ, ∅, i] is in Iₙ and [A, α, {Z} ∪ β, k] is in Iᵢ, where no element of β must precede Z and Z ∉ β, then add [A, αZ, β, k] to Iₙ.
(6) If [A, α, {Z} ∪ β, i] is in Iₙ, where no element of β must precede Z and Z ∉ β, then for every ID rule Z →ID λ add [Z, T, λ, n] to Iₙ.
Steps 2 and 3 are repeated until no new items can be added, and the algorithm then continues with Step 4. Steps 5 and 6 are likewise repeated until no further items can be added to the set list. The string is accepted if an item of the form [S, α, ∅, 0] appears in Iₙ.
For example, consider a grammar with the ID rules
S →ID {C, D, F}
C →ID {c}
D →ID {d}
F →ID {f}
and the LP statement C ≺ D. Parsing the input string c f d produces the set lists in Table 1.0:
I₀ | [S, T, {C, D, F}, 0], [C, T, {c}, 0], [F, T, {f}, 0]
I₁ | [C, c, ∅, 0], [S, C, {D, F}, 0], [D, T, {d}, 1], [F, T, {f}, 1]
I₂ | [F, f, ∅, 1], [S, CF, {D}, 0], [D, T, {d}, 2]
I₃ | [D, d, ∅, 2], [S, CDF, ∅, 0]
Since I₃ contains the completed item [S, CDF, ∅, 0], the string is accepted, with the resulting parse [S [C c] [F f] [D d]].
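To make the six steps concrete, here is a minimal Python sketch of a direct ID/LP recognizer in the spirit of the algorithm above, run on the example grammar. The data structures and function names are illustrative rather than Shieber's own; the check may_come_next implements the condition that no still-pending sister may be required to precede the daughter being recognized.

    ID_RULES = {"S": [("C", "D", "F")], "C": [("c",)], "D": [("d",)], "F": [("f",)]}
    LP = {("C", "D")}   # C must precede D among sisters

    def remove_one(multiset, x):
        rest = list(multiset)
        rest.remove(x)
        return tuple(rest)

    def may_come_next(z, remaining):
        """Z may be recognized next only if no still-pending sister must precede it."""
        return all((w, z) not in LP for w in remaining)

    def accepts(tokens):
        n = len(tokens)
        chart = [set() for _ in range(n + 1)]            # item sets I0 .. In
        for rhs in ID_RULES["S"]:                        # step 1: initial items for S
            chart[0].add(("S", (), tuple(sorted(rhs)), 0))
        for i in range(n + 1):
            changed = True
            while changed:                               # steps 2-3 and 5-6: close the item set
                changed = False
                for (a, found, remaining, start) in list(chart[i]):
                    if not remaining:                    # completer
                        for (b, f2, r2, s2) in list(chart[start]):
                            if a in r2 and may_come_next(a, remove_one(r2, a)):
                                new = (b, f2 + (a,), remove_one(r2, a), s2)
                                if new not in chart[i]:
                                    chart[i].add(new)
                                    changed = True
                    for z in set(remaining):             # predictor
                        if z in ID_RULES and may_come_next(z, remove_one(remaining, z)):
                            for rhs in ID_RULES[z]:
                                new = (z, (), tuple(sorted(rhs)), i)
                                if new not in chart[i]:
                                    chart[i].add(new)
                                    changed = True
            if i < n:                                    # step 4: scanner
                tok = tokens[i]
                for (a, found, remaining, start) in list(chart[i]):
                    if tok in remaining and may_come_next(tok, remove_one(remaining, tok)):
                        chart[i + 1].add((a, found + (tok,), remove_one(remaining, tok), start))
        return any(a == "S" and not rem and s == 0       # accept on an item [S, alpha, empty, 0] in In
                   for (a, _found, rem, s) in chart[n])

    print(accepts(["c", "f", "d"]))   # True
    print(accepts(["d", "c", "f"]))   # False: D may not precede its sister C

Running the sketch reproduces the acceptance decision of Table 1.0 for the input c f d.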
An unordered ID/LP Grammar is parsed with the same six-step algorithm. The noticeable difference is in the items of each set list: a single item represents the many individual strings within one list. The table below shows the set lists for the grammar and input of Table 1.0, beginning with the item [S, T, {C, D, F}, 0] and marking progress with a dot:
I₀ | [S, T ⋅ C, D, F, 0]
I₁ | [S, T, C ⋅ D, F, 0]
I₂ | [S, T, C, D ⋅ F, 0]
I₃ | [S, T, C, D, F ⋅ ∅, 0]
Since I₃ contains the completed item [S, T, C, D, F ⋅ ∅, 0], the string is accepted. Notice how the unordered version of the ID/LP Grammar contains far fewer strings than the UCFG; Shieber's Algorithm uses one string to represent the many different strings that would otherwise be generated for repeated elements. Both algorithms can parse ordered grammars equally well, but Shieber's Algorithm appears to be more efficient when parsing an unordered grammar.