A lexical chain is a sequence of semantically related words in a text,[1] spanning either a narrow context window (adjacent words or sentences) or a wide one (the entire text). A lexical chain is independent of the grammatical structure of the text; in effect, it is a list of words that captures a portion of the cohesive structure of the text. A lexical chain can provide a context for the resolution of an ambiguous term and enable disambiguation of the concepts that the term represents.
Morris and Hirst introduce the term lexical chain as an expansion of lexical cohesion.[2] A text in which many of the sentences are semantically connected often exhibits a degree of continuity in its ideas, providing good cohesion among its sentences. The definition used for lexical cohesion states that coherence is a result of cohesion, not the other way around.[3] Cohesion relates sets of words that belong together because of an abstract or concrete relation. Coherence, on the other hand, is concerned with the actual meaning of the whole text.
Morris and Hirst define lexical chains as making use of the semantic context for interpreting words, concepts, and sentences. In contrast, lexical cohesion is more focused on the relationships between word pairs; lexical chains extend this notion to sequences of adjacent words. Lexical chains are essential for two main reasons: they provide a feasible context for resolving ambiguous terms, and they provide clues about the coherence and discourse structure of a text.
The method presented by Morris and Hirst is the first to bring the concept of lexical cohesion to computer systems via lexical chains. Using their intuition, they identify lexical chains in text documents and build their structure considering Halliday and Hasan's observations. For this task, they consider five text documents, totaling 183 sentences, from different and non-specific sources. Repetitive words (e.g., high-frequency words, pronouns, prepositions, verbal auxiliaries) are not considered as prospective chain elements since they contribute little semantic value to the structure themselves.
Lexical chains are built according to a series of relationships between words in a text document. In the seminal work of Morris and Hirst, an external thesaurus (Roget's Thesaurus) serves as the lexical database from which these relations are extracted. A lexical chain is formed by a sequence of words $w_1, w_2, \ldots, w_n$ appearing in this order, such that any two consecutive words $w_i, w_{i+1}$ are related through the lexical database.
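The following is a minimal sketch of this greedy construction, using WordNet through NLTK in place of Roget's Thesaurus. The relatedness test used here (a shared synset, a direct hypernym/hyponym link, or a shared direct hypernym) is an illustrative assumption, not Morris and Hirst's exact criteria.

```python
# A minimal sketch of lexical-chain construction. The relatedness test is
# an illustrative assumption; Morris and Hirst used Roget's Thesaurus
# relations, not WordNet.
from nltk.corpus import wordnet as wn

def related(word_a, word_b):
    """True if the words share a synset, are linked by a direct
    hypernym/hyponym edge, or share a direct hypernym (siblings)."""
    for sa in wn.synsets(word_a):
        for sb in wn.synsets(word_b):
            if sa == sb:
                return True
            if sb in sa.hypernyms() or sa in sb.hypernyms():
                return True
            if set(sa.hypernyms()) & set(sb.hypernyms()):
                return True
    return False

def build_chains(words):
    """Greedily append each word to the first chain whose last element
    it relates to; otherwise start a new chain."""
    chains = []
    for word in words:
        for chain in chains:
            if related(chain[-1], word):
                chain.append(word)
                break
        else:
            chains.append([word])
    return chains

print(build_chains(["car", "truck", "automobile", "banana"]))
# e.g. [['car', 'truck', 'automobile'], ['banana']]
```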
The use of lexical chains in natural language processing tasks (e.g., text similarity, word sense disambiguation, document clustering) has been widely studied in the literature. Barzilay et al.[5] use lexical chains to produce summaries from texts. They propose a technique based on four steps: segmentation of the original text, construction of lexical chains, identification of reliable chains, and extraction of significant sentences. Silber and McCoy[6] also investigate text summarization, but their approach for constructing the lexical chains runs in linear time.
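A hedged sketch of the last two of these four steps, assuming the chains have already been built. Scoring chains by raw length and extracting the first sentence that mentions a chain member are simplified stand-ins for the heuristics of the cited papers, not their exact methods.

```python
# Simplified stand-in for steps three and four of the extractive pipeline:
# identify strong chains, then extract significant sentences. Scoring by
# chain length and the first-mention extraction rule are illustrative
# assumptions.

def extract_summary(sentences, chains, top_k=2):
    """sentences: list of token lists; chains: list of word lists.
    Returns one sentence per strong chain, in document order."""
    strong = sorted(chains, key=len, reverse=True)[:top_k]
    picked = set()
    for chain in strong:
        for i, sent in enumerate(sentences):
            if any(word in sent for word in chain):
                picked.add(i)  # first sentence mentioning the chain
                break
    return [" ".join(sentences[i]) for i in sorted(picked)]
```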
Some authors use WordNet[7][8] to improve the search and evaluation of lexical chains. Budanitsky and Hirst[9][10] compare several measures of semantic distance and relatedness using lexical chains in conjunction with WordNet. Their study concludes that the similarity measure of Jiang and Conrath[11] presents the best overall result. Moldovan and Novischi[12] study the use of lexical chains for finding topically related words for question answering systems. This is done by considering the glosses of each synset in WordNet. According to their findings, topical relations via lexical chains improve the performance of question answering systems when combined with WordNet. McCarthy et al.[13] present a methodology to categorize and find the most predominant synsets in unlabeled texts using WordNet. Unlike traditional approaches (e.g., bag-of-words), they consider relationships between terms that do not occur explicitly in the text. Ercan and Cicekli[14] explore the effects of lexical chains on the keyword extraction task through a supervised machine learning perspective. Wei et al.[15] combine lexical chains and WordNet to extract a set of semantically related words from texts and use them for clustering. Their approach uses an ontological hierarchical structure to provide a more accurate assessment of similarity between terms during the word sense disambiguation task.
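For illustration, the Jiang and Conrath measure, along with other measures from the same family of comparisons, can be computed with NLTK's WordNet interface. The synsets and the Brown-corpus information-content file below are illustrative choices, not the exact experimental setup of the study.

```python
# Comparing WordNet-based relatedness measures with NLTK. The Jiang-Conrath
# measure needs an information-content file; the Brown corpus counts used
# here are one common choice, assumed for illustration.
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')

dog, cat, car = wn.synset('dog.n.01'), wn.synset('cat.n.01'), wn.synset('car.n.01')

print(dog.jcn_similarity(cat, brown_ic))  # Jiang-Conrath: higher = more related
print(dog.jcn_similarity(car, brown_ic))  # lower for unrelated concepts
print(dog.path_similarity(cat))           # shortest-path measure, for comparison
print(dog.lin_similarity(cat, brown_ic))  # Lin's measure, for comparison
```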
Even though the applicability of lexical chains is diverse, there is little work exploring them with recent advances in NLP, more specifically with word embeddings. In one approach,[16] lexical chains are built using specific patterns found in WordNet and used for learning word embeddings. The resulting vectors are validated in the document similarity task. Gonzales et al.[17] use word-sense embeddings to produce lexical chains that are integrated with a neural machine translation model. Mascarelli[18] proposes a model that uses lexical chains to leverage statistical machine translation through a document encoder. Instead of using an external lexical database, they use word embeddings to detect the lexical chains in the source text.
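A minimal sketch of this last, database-free idea: detecting chains directly from embedding similarity. The model file "embeddings.bin" and the 0.4 similarity threshold are assumptions for illustration; the cited works describe the general approach, not this code.

```python
# Detecting lexical chains with word embeddings instead of a lexical
# database: start a new chain whenever the next word is not similar enough
# to the current chain's last word. Model file and threshold are assumed.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def embedding_chains(words, threshold=0.4):
    """Greedy single-pass chaining by cosine similarity between
    consecutive words; assumes every word is in the vocabulary."""
    chains = [[words[0]]]
    for word in words[1:]:
        if vectors.similarity(chains[-1][-1], word) >= threshold:
            chains[-1].append(word)
        else:
            chains.append([word])
    return chains
```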
Ruas et al. propose two techniques that combine lexical databases, lexical chains, and word embeddings, namely Flexible Lexical Chain II (FLLC II) and Fixed Lexical Chain II (FXLC II). The main goal of both FLLC II and FXLC II is to represent a collection of words by their semantic values more concisely. In FLLC II, the lexical chains are assembled dynamically according to the semantic content of each evaluated term and its relationship with its adjacent neighbors. As long as a semantic relation connects two or more words, they are combined into a unique concept. The semantic relationship is obtained through WordNet, which works as a ground truth indicating which lexical structure connects two words (e.g., hypernyms, hyponyms, meronyms). If a word appears without any semantic affinity to the current chain, a new lexical chain is initialized. FXLC II, on the other hand, breaks text segments into pre-defined chunks with a specific number of words each. Unlike FLLC II, the FXLC II technique groups a certain number of words into the same structure, regardless of the semantic relatedness expressed in the lexical database. In both methods, each formed chain is represented by the word whose pre-trained word embedding vector is most similar to the average vector of the constituent words in that same chain.
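A sketch of this final chain-representation step, shared by both techniques: each chain is replaced by the constituent word whose pre-trained vector lies closest to the chain's average vector. The embedding lookup via gensim is an assumed setup for illustration, not the original implementation.

```python
# Representing a finished chain by its most central word: the constituent
# whose embedding has the highest cosine similarity to the mean embedding
# of the chain. Vector source (gensim KeyedVectors) is an assumed setup.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def chain_representative(chain):
    """Return the chain word whose vector is closest to the chain mean."""
    mean = np.mean([vectors[w] for w in chain], axis=0)
    def cosine(v):
        return float(np.dot(v, mean) / (np.linalg.norm(v) * np.linalg.norm(mean)))
    return max(chain, key=lambda w: cosine(vectors[w]))

# e.g., chain_representative(["car", "truck", "vehicle"]) might return "vehicle"
```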