Latent semantic analysis explained

Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.[1]

An information retrieval technique using latent semantic structure was patented in 1988[2] by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).[3]

Overview

Occurrence matrix

LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance.
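For concreteness, such a weighted occurrence matrix can be built with standard tooling; the sketch below (an illustration, not part of the original method description) uses scikit-learn's TfidfVectorizer on an invented three-document corpus and transposes the result, because scikit-learn returns documents as rows rather than columns.

    # Minimal sketch: build a tf-idf weighted term-document matrix (assumes scikit-learn is installed).
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["the cat sat on the mat",
            "the dog sat on the log",
            "cats and dogs are pets"]                 # toy corpus (illustrative)

    vectorizer = TfidfVectorizer()                    # tf-idf: rare terms are upweighted
    doc_term = vectorizer.fit_transform(docs)         # sparse matrix, one row per document

    X = doc_term.T                                    # terms as rows, documents as columns
    print(X.shape)
    print(vectorizer.get_feature_names_out())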

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.

Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation[4] to the term-document matrix. There could be various reasons for this approximation: the original matrix may be too large for the computing resources; it may be noisy, containing anecdotal instances of terms that should be eliminated; or it may be overly sparse relative to the "true" term-document matrix, since it lists only the words actually appearing in each document rather than the much larger set of words related to each document (a gap largely due to synonymy).

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term; for example,

{(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}

This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.

Derivation

Let $X$ be a matrix where element $(i, j)$ describes the occurrence of term $i$ in document $j$ (this can be, for example, the frequency). $X$ will look like this:

$$\begin{matrix}
 & \mathbf{d}_j \\
 & \downarrow \\
\mathbf{t}_i^T &
\begin{bmatrix}
x_{1,1} & \dots & x_{1,j} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,j} & \dots & x_{m,n}
\end{bmatrix}
\end{matrix}$$

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

$$\mathbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \end{bmatrix}$$

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

$$\mathbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{i,j} \\ \vdots \\ x_{m,j} \end{bmatrix}$$

Now the dot product $\mathbf{t}_i^T \mathbf{t}_p$ between two term vectors gives the correlation between the terms over the set of documents. The matrix product $X X^T$ contains all these dot products. Element $(i, p)$ (which is equal to element $(p, i)$) contains the dot product $\mathbf{t}_i^T \mathbf{t}_p$ ($= \mathbf{t}_p^T \mathbf{t}_i$). Likewise, the matrix $X^T X$ contains the dot products between all the document vectors, giving their correlation over the terms: $\mathbf{d}_j^T \mathbf{d}_q = \mathbf{d}_q^T \mathbf{d}_j$.

Now, from the theory of linear algebra, there exists a decomposition of $X$ such that $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. This is called a singular value decomposition (SVD):

$$X = U \Sigma V^T$$
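As a quick numerical illustration (not taken from the source, and using a made-up 4 x 3 count matrix), NumPy's SVD routine returns exactly these factors:

    # Minimal sketch: full SVD of a toy term-document matrix X (terms as rows, documents as columns).
    import numpy as np

    X = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [1., 1., 0.],
                  [0., 0., 1.]])                      # 4 terms x 3 documents (illustrative counts)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U @ diag(s) @ Vt
    assert np.allclose(X, U @ np.diag(s) @ Vt)        # the factors reproduce X
    # Columns of U are eigenvectors of X X^T; rows of Vt are eigenvectors of X^T X.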

The matrix products giving us the term and document correlations then become

$$\begin{matrix}
X X^T &=& (U \Sigma V^T)(U \Sigma V^T)^T = (U \Sigma V^T)(V \Sigma^T U^T) = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\
X^T X &=& (U \Sigma V^T)^T (U \Sigma V^T) = (V \Sigma^T U^T)(U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T
\end{matrix}$$

Since $\Sigma \Sigma^T$ and $\Sigma^T \Sigma$ are diagonal we see that $U$ must contain the eigenvectors of $X X^T$, while $V$ must be the eigenvectors of $X^T X$. Both products have the same non-zero eigenvalues, given by the non-zero entries of $\Sigma \Sigma^T$, or equally, by the non-zero entries of $\Sigma^T \Sigma$. Now the decomposition looks like this:

$$\begin{matrix}
 & X & & & U & & \Sigma & & V^T \\
 & (\mathbf{d}_j) & & & & & & & (\hat{\mathbf{d}}_j) \\
 & \downarrow & & & & & & & \downarrow \\
(\mathbf{t}_i^T) &
\begin{bmatrix}
x_{1,1} & \dots & x_{1,j} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,j} & \dots & x_{m,n}
\end{bmatrix}
& = & (\hat{\mathbf{t}}_i^T) &
\begin{bmatrix}
\begin{bmatrix} \, \\ \mathbf{u}_1 \\ \, \end{bmatrix} \dots
\begin{bmatrix} \, \\ \mathbf{u}_l \\ \, \end{bmatrix}
\end{bmatrix}
& &
\begin{bmatrix}
\sigma_1 & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & \sigma_l
\end{bmatrix}
& &
\begin{bmatrix}
\begin{bmatrix} & & \mathbf{v}_1 & & \end{bmatrix} \\
\vdots \\
\begin{bmatrix} & & \mathbf{v}_l & & \end{bmatrix}
\end{bmatrix}
\end{matrix}$$

The values $\sigma_1, \dots, \sigma_l$ are called the singular values, and $\mathbf{u}_1, \dots, \mathbf{u}_l$ and $\mathbf{v}_1, \dots, \mathbf{v}_l$ the left and right singular vectors. Notice that the only part of $U$ that contributes to $\mathbf{t}_i$ is the $i$'th row. Let this row vector be called $\hat{\mathbf{t}}_i^T$. Likewise, the only part of $V^T$ that contributes to $\mathbf{d}_j$ is the $j$'th column, $\hat{\mathbf{d}}_j$. These are not the eigenvectors, but depend on all the eigenvectors.

It turns out that when you select the $k$ largest singular values, and their corresponding singular vectors from $U$ and $V$, you get the rank-$k$ approximation to $X$ with the smallest error in the Frobenius norm. But more importantly, we can now treat the term and document vectors as a "semantic space". The row "term" vector $\hat{\mathbf{t}}_i^T$ then has $k$ entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts; they are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector $\hat{\mathbf{d}}_j$ is an approximation in this lower-dimensional space. We write this approximation as

$$X_k = U_k \Sigma_k V_k^T$$

You can now do the following:

- See how related documents $j$ and $q$ are in the low-dimensional space by comparing the vectors $\Sigma_k \hat{\mathbf{d}}_j$ and $\Sigma_k \hat{\mathbf{d}}_q$ (typically by cosine similarity); a short sketch follows this list.
- Compare terms $i$ and $p$ by comparing the vectors $\Sigma_k \hat{\mathbf{t}}_i$ and $\Sigma_k \hat{\mathbf{t}}_p$. Note that $\hat{\mathbf{t}}$ is now a column vector.
- Given a query, view it as a mini document and compare it to your documents in the low-dimensional space.
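The first two comparisons can be sketched in a few lines of NumPy; the toy matrix and the choice k = 2 are illustrative assumptions, not values from the source.

    # Sketch: rank-k semantic space and cosine comparison of documents/terms.
    import numpy as np

    X = np.array([[2., 0., 1., 0.],
                  [0., 1., 0., 2.],
                  [1., 1., 1., 0.],
                  [0., 2., 0., 1.]])                  # terms x documents (toy counts)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]          # keep the k largest singular values

    doc_vecs  = (np.diag(sk) @ Vtk).T                 # row j is Sigma_k * d_hat_j
    term_vecs = Uk @ np.diag(sk)                      # row i is Sigma_k * t_hat_i

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(doc_vecs[0], doc_vecs[2]))           # similarity of documents 1 and 3
    print(cosine(term_vecs[0], term_vecs[2]))         # similarity of terms 1 and 3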

To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:

$$\hat{\mathbf{d}}_j = \Sigma_k^{-1} U_k^T \mathbf{d}_j$$

Note here that the inverse of the diagonal matrix $\Sigma_k$ may be found by inverting each nonzero value within the matrix.

This means that if you have a query vector $\mathbf{q}$, you must do the translation

$$\hat{\mathbf{q}} = \Sigma_k^{-1} U_k^T \mathbf{q}$$

before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:

$$\mathbf{t}_i^T = \hat{\mathbf{t}}_i^T \Sigma_k V_k^T$$

$$\hat{\mathbf{t}}_i^T = \mathbf{t}_i^T V_k^{-T} \Sigma_k^{-1} = \mathbf{t}_i^T V_k \Sigma_k^{-1}$$

$$\hat{\mathbf{t}}_i = \Sigma_k^{-1} V_k^T \mathbf{t}_i$$
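Continuing the toy example above, the query translation is a single matrix-vector product; the query vector itself is made up for illustration.

    # Sketch: fold a raw query vector into the rank-k space and score it against documents.
    import numpy as np

    X = np.array([[2., 0., 1., 0.],
                  [0., 1., 0., 2.],
                  [1., 1., 1., 0.],
                  [0., 2., 0., 1.]])                  # same toy term-document matrix as above
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

    q = np.array([1., 0., 1., 0.])                    # raw query counts over the 4 terms (illustrative)
    q_hat = (Uk.T @ q) / sk                           # q_hat = Sigma_k^{-1} U_k^T q

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = [cosine(q_hat, Vtk[:, j]) for j in range(Vtk.shape[1])]  # compare with each d_hat_j
    print(sims)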

Applications

The new low-dimensional space typically can be used to:

- Compare the documents in the low-dimensional space (data clustering, document classification).
- Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval).
- Find relations between terms (synonymy and polysemy).
- Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval).
- Find the best similarity between small groups of terms in a semantic way (i.e. in the context of a knowledge corpus), for example in multiple-choice question answering models.[5]
- Expand the feature space of machine learning / text mining systems.[6]
- Analyze word association in a text corpus.[7]

Synonymy and polysemy are fundamental problems in natural language processing:

- Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
- Polysemy is the phenomenon where the same word has multiple meanings, so a search may retrieve irrelevant documents containing the desired word in the wrong sense. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Commercial applications

LSA has been used to assist in performing prior art searches for patents.[8]

Applications in human memory

The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words will be recalled one after another in free recall tasks using study lists of random common nouns. In these situations, the inter-response time between similar words is also much shorter than between dissimilar words. These findings are referred to as the Semantic Proximity Effect.[9]

When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[10]

Another model, termed Word Association Spaces (WAS), is also used in memory studies. It is built by collecting free association data from a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.[11]

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.[12] A fast, incremental, low-memory, large-matrix SVD algorithm has also been developed.[13] MATLAB[14] and Python[15] implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation while providing comparable prediction quality.[16]
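As a concrete starting point, the Python implementation referenced above (gensim) exposes LSA directly; the toy corpus and the choice of two latent dimensions below are illustrative assumptions.

    # Minimal sketch using gensim's LSI/LSA model (assumes the gensim package is installed).
    from gensim import corpora, models

    texts = [["human", "computer", "interaction"],
             ["graph", "minors", "trees"],
             ["graph", "minors", "survey"]]           # toy tokenized corpus (illustrative)

    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(text) for text in texts]

    lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)  # truncated SVD with k = 2
    print(lsi[corpus[0]])                             # document 0 expressed in the latent space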

Limitations

Some of LSA's drawbacks include:

- The resulting dimensions might be difficult to interpret. For instance, in

  {(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}

  the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to

  {(car), (bottle), (flower)} → {(1.3452 * car + 0.2828 * bottle), (flower)}

  will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. Though, the (1.3452 * car + 0.2828 * bottle) component could be justified because both bottles and cars have transparent and opaque parts, are man-made and with high probability contain logos/words on their surface; thus, in many ways these two concepts "share semantics." That is, within the language in question, there may not be a readily available word to assign, and explainability becomes an analysis task, as opposed to a simple word/class/concept assignment task.

Alternative methods

Semantic hashing

In semantic hashing[20] documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. The deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method.
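The lookup step can be illustrated independently of the neural network that produces the codes; the short binary codes below are made-up stand-ins, not the output of a trained model.

    # Toy sketch of the retrieval step only: documents indexed by short binary codes,
    # found by flipping a few bits of the query's code (all codes here are invented).
    from itertools import combinations

    n_bits = 6
    index = {0b101100: ["doc A"], 0b101101: ["doc B"], 0b011100: ["doc C"]}

    def neighbours(code, max_flips):
        yield code
        for r in range(1, max_flips + 1):
            for bits in combinations(range(n_bits), r):
                flipped = code
                for b in bits:
                    flipped ^= 1 << b                 # flip bit b
                yield flipped

    query_code = 0b101110                             # pretend encoder output for the query
    hits = [doc for c in neighbours(query_code, 2) for doc in index.get(c, [])]
    print(hits)                                       # documents within Hamming distance 2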

Latent semantic indexing

Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.[21]

LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri[22] in the early 1970s, to a contingency table built from word counts in documents.

Called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don't share a specific word or words with the search criteria.

Benefits of LSI

LSI helps overcome synonymy, one of the most problematic constraints of Boolean keyword queries and vector space models, by increasing recall. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems.[23] As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.

LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.[24] Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[25] LSI uses example documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.
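A simplified sketch of this categorization idea follows, with each category represented by the centroid of its example documents in the reduced space; the vectors and category names are invented placeholders, not the procedure of the cited experiments.

    # Simplified sketch: assign a new document to the category whose example-document
    # centroid (in the LSI space) it is most similar to.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    category_examples = {                             # reduced vectors of example documents (invented)
        "sports":  np.array([[0.9, 0.1], [0.8, 0.2]]),
        "finance": np.array([[0.1, 0.9], [0.2, 0.7]]),
    }
    centroids = {c: vecs.mean(axis=0) for c, vecs in category_examples.items()}

    new_doc = np.array([0.85, 0.15])                  # reduced vector of the document to categorize
    best = max(centroids, key=lambda c: cosine(new_doc, centroids[c]))
    print(best)                                       # -> "sports"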

Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.

Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.

LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[26]

LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[27] This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.

Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.

LSI has proven to be a useful solution to a number of conceptual matching problems.[28] [29] The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[30]

LSI timeline

Mathematics of LSI

LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a Singular Value Decomposition on the matrix, and using the matrix to identify the concepts contained in the text.

Term-document matrix

LSI begins by constructing a term-document matrix, $A$, to identify the occurrences of the $m$ unique terms within a collection of $n$ documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, $a_{ij}$, initially representing the number of times the associated term appears in the indicated document, $\mathrm{tf}_{ij}$. This matrix is usually very large and very sparse.

Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, $a_{ij}$, of $A$ to be the product of a local term weight, $l_{ij}$, which describes the relative frequency of a term in a document, and a global weight, $g_i$, which describes the relative frequency of the term within the entire collection of documents.

Some common local weighting functions[32] are defined in the following table.

Binary: $l_{ij} = 1$ if the term exists in the document, or else $0$

TermFrequency: $l_{ij} = \mathrm{tf}_{ij}$, the number of occurrences of term $i$ in document $j$

Log: $l_{ij} = \log(\mathrm{tf}_{ij} + 1)$

Augnorm: $l_{ij} = \dfrac{\left(\frac{\mathrm{tf}_{ij}}{\max_i(\mathrm{tf}_{ij})}\right) + 1}{2}$

Some common global weighting functions are defined in the following table.

Binary: $g_i = 1$

Normal: $g_i = \dfrac{1}{\sqrt{\sum_j \mathrm{tf}_{ij}^2}}$

GfIdf: $g_i = \mathrm{gf}_i / \mathrm{df}_i$, where $\mathrm{gf}_i$ is the total number of times term $i$ occurs in the whole collection, and $\mathrm{df}_i$ is the number of documents in which term $i$ occurs.

Idf (Inverse Document Frequency): $g_i = \log_2 \dfrac{n}{1 + \mathrm{df}_i}$

Entropy: $g_i = 1 + \sum_j \dfrac{p_{ij} \log p_{ij}}{\log n}$, where $p_{ij} = \dfrac{\mathrm{tf}_{ij}}{\mathrm{gf}_i}$

Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets.[33] In other words, each entry $a_{ij}$ of $A$ is computed as:

$$g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}$$

$$a_{ij} = g_i \, \log(\mathrm{tf}_{ij} + 1)$$
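A compact sketch of this log-entropy weighting applied to a small, invented count matrix (terms as rows, documents as columns):

    # Sketch: log-entropy weighting, a_ij = g_i * log(tf_ij + 1).
    import numpy as np

    tf = np.array([[1., 0., 3.],
                   [2., 2., 0.],
                   [0., 1., 1.]])                     # raw term frequencies (toy data)
    n = tf.shape[1]                                   # number of documents

    gf = tf.sum(axis=1, keepdims=True)                # gf_i: total occurrences of term i
    p = np.divide(tf, gf, out=np.zeros_like(tf), where=gf > 0)

    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)   # treat 0 * log(0) as 0
    g = 1.0 + plogp.sum(axis=1) / np.log(n)           # entropy global weights g_i

    A = g[:, None] * np.log(tf + 1.0)                 # weighted term-document matrix
    print(np.round(A, 3))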

Rank-reduced singular value decomposition

A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI.[34] It computes the term and document vector spaces by approximating the single term-frequency matrix, $A$, with the product of three other matrices: an $m$ by $r$ term-concept vector matrix $T$, an $r$ by $r$ singular values matrix $S$, and an $n$ by $r$ concept-document vector matrix, $D$, which satisfy the following relations:

$$A \approx T S D^T$$

$$T^T T = I_r \qquad D^T D = I_r$$

$$S_{1,1} \geq S_{2,2} \geq \ldots \geq S_{r,r} > 0 \qquad S_{i,j} = 0 \text{ where } i \neq j$$

In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A—a measure of its unique dimensions ≤ min(m,n). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors.

The SVD is then truncated to reduce the rank by keeping only the largest k ≪ r diagonal entries in the singular value matrix S, where k is typically on the order of 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes to m by k and n by k respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:

$$A \approx A_k = T_k S_k D_k^T$$

Efficient LSI algorithms only compute the first k singular values and term and document vectors as opposed to computing a full SVD and then truncating it.
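For example, sparse eigensolvers compute only the leading k singular triplets; below is a sketch using SciPy, where the matrix size, density, and k are arbitrary illustrative choices.

    # Sketch: compute only the top-k singular triplets of a sparse term-document matrix.
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    A = sparse_random(10000, 2000, density=0.001, format="csc", random_state=0)  # toy sparse matrix
    k = 100
    Tk, sk, DkT = svds(A, k=k)                        # k largest singular values and vectors
    order = np.argsort(sk)[::-1]                      # svds returns singular values in ascending order
    Tk, sk, DkT = Tk[:, order], sk[order], DkT[order, :]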

Note that this rank reduction is essentially the same as doing Principal Component Analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.

Querying and augmenting LSI vector spaces

The computed Tk and Dk matrices define the term and document vector spaces, which with the computed singular values, Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.

The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S DT equation into the equivalent D = AT T S−1 equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S−1. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
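A sketch of this fold-in computation, where Tk and sk are assumed to come from an existing LSI index and a_new is the new, already-weighted column of A (all values below are placeholders):

    # Sketch: fold a new weighted document column into an existing LSI space via T_k S_k^{-1}.
    import numpy as np

    def fold_in_document(a_new, Tk, sk):
        # d_new = a_new^T T_k S_k^{-1}: a new row for the document matrix D_k
        return (a_new @ Tk) / sk

    rng = np.random.default_rng(0)
    m, k = 5, 2
    Tk = rng.standard_normal((m, k))                  # placeholder term matrix from the original SVD
    sk = np.array([3.0, 1.5])                         # placeholder singular values
    a_new = rng.random(m)                             # placeholder weighted counts for the new document
    print(fold_in_document(a_new, Tk, sk))            # the new document's LSI-space coordinates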

A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.

The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called folding in. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed or an incremental update method (such as the one described in [13]) is needed.

Additional uses of LSI

It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.

LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[35] Below are some other ways in which LSI is being used:

LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[50]

Challenges to LSI

Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[51] However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.[52]

Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).[53] However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection.[54] Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[55] When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.

See also

Further reading

External links

Articles on LSA

Talks and demonstrations

Implementations

Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.

Notes and References

  1. Dumais, Susan T. (2005). "Latent Semantic Analysis". Annual Review of Information Science and Technology 38: 188–230. doi:10.1002/aris.1440380105.
  2. US Patent 4,839,853 (now expired). Archived 2017-12-02 at https://web.archive.org/web/20171202052723/http://patft.uspto.gov/netacgi/nph-Parser?patentnumber=4839853.
  3. "The Latent Semantic Indexing home page".
  4. Markovsky, I. (2012). Low-Rank Approximation: Algorithms, Implementation, Applications. Springer.
  5. Lifchitz, Alain; Jhean-Larose, Sandra; Denhière, Guy (2009). "Effect of tuned parameters on an LSA multiple choice questions answering model". Behavior Research Methods 41 (4): 1201–1209. doi:10.3758/BRM.41.4.1201. PMID 19897829. arXiv:0811.0146.
  6. Gálvez, Ramiro H.; Gravano, Agustín (2017). "Assessing the usefulness of online message board mining in automatic stock prediction systems". Journal of Computational Science 19. doi:10.1016/j.jocs.2017.01.001.
  7. Altszyler, E.; Ribeiro, S.; Sigman, M.; Fernández Slezak, D. (2017). "The interpretation of dream meaning: Resolving ambiguity using Latent Semantic Analysis in a small corpus of text". Consciousness and Cognition 56: 178–187. doi:10.1016/j.concog.2017.09.004. PMID 28943127. arXiv:1610.01520.
  8. Elman, Gerry J. (October 2007). "Automated Patent Examination Support - A proposal". Biotechnology Law Report 26 (5): 435–436. doi:10.1089/blr.2007.9896.
  9. Howard, Marc W.; Kahana, Michael J. (1999). "Contextual Variability and Serial Position Effects in Free Recall". APA PsycNet Direct.
  10. Zaromb, Franklin M.; et al. (2006). "Temporal Associations and Prior-List Intrusions in Free Recall".
  11. Nelson, Douglas. "The University of South Florida Word Association, Rhyme and Word Fragment Norms". Retrieved May 8, 2011.
  12. Gorrell, Geneviève; Webb, Brandyn (2005). "Generalized Hebbian Algorithm for Latent Semantic Analysis". Interspeech 2005. Archived 2008-12-21 at https://web.archive.org/web/20081221063926/http://www.dcs.shef.ac.uk/~genevieve/gorrell_webb.pdf.
  13. Brand, Matthew (2006). "Fast Low-Rank Modifications of the Thin Singular Value Decomposition". Linear Algebra and Its Applications 415: 20–30. doi:10.1016/j.laa.2005.07.021. Archived 2013-12-03 at https://web.archive.org/web/20131203010523/http://www.merl.com/reports/docs/TR2006-059.pdf.
  14. "MATLAB". Archived 2014-02-28 at https://web.archive.org/web/20140228014003/http://web.mit.edu/~wingated/www/resources.html.
  15. Python: http://radimrehurek.com/gensim
  16. Ding, Yaguang; Zhu, Guofeng; Cui, Chenyang; Zhou, Jian; Tao, Liang (2011). "A parallel implementation of Singular Value Decomposition based on Map-Reduce and PARPACK". Proceedings of 2011 International Conference on Computer Science and Network Technology: 739–741. doi:10.1109/ICCSNT.2011.6182070. ISBN 978-1-4577-1587-7.
  17. Deerwester, Scott; Dumais, Susan T.; Furnas, George W.; Landauer, Thomas K.; Harshman, Richard (1990). "Indexing by latent semantic analysis". Journal of the American Society for Information Science 41 (6): 391–407. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9.
  18. Abedi, Vida; Yeasin, Mohammed; Zand, Ramin (2014). "Empirical study using network of semantically related associations in bridging the knowledge gap". Journal of Translational Medicine 12 (1): 324. doi:10.1186/s12967-014-0324-9. PMID 25428570. PMC 4252998.
  19. Hofmann, Thomas (1999). "Probabilistic Latent Semantic Analysis". Uncertainty in Artificial Intelligence. arXiv:1301.6705.
  20. Salakhutdinov, Ruslan; Hinton, Geoffrey (2007). "Semantic hashing". RBM 500 (3): 500.
  21. Deerwester, S.; et al. (1988). "Improving Information Retrieval with Latent Semantic Indexing". Proceedings of the 51st Annual Meeting of the American Society for Information Science 25: 36–40.
  22. Benzécri, J.-P. (1973). L'Analyse des Données. Volume II. L'Analyse des Correspondences. Paris, France: Dunod.
  23. Furnas, G. W.; Landauer, T. K.; Gomez, L. M.; Dumais, S. T. (1987). "The vocabulary problem in human-system communication". Communications of the ACM 30 (11): 964–971. doi:10.1145/32206.32212.
  24. Landauer, T.; et al. (1998). "Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report". In M. I. Jordan, M. J. Kearns & S. A. Solla (eds.), Advances in Neural Information Processing Systems 10. Cambridge: MIT Press, pp. 45–51.
  25. Dumais, S.; Platt, J.; Heckerman, D.; Sahami, M. (1998). "Inductive learning algorithms and representations for text categorization". Proceedings of the Seventh International Conference on Information and Knowledge Management (CIKM '98): 148. doi:10.1145/288627.288651. ISBN 978-1581130614.
  26. Homayouni, R.; Heinrich, K.; Wei, L.; Berry, M. W. (2004). "Gene clustering by Latent Semantic Indexing of MEDLINE abstracts". Bioinformatics 21 (1): 104–115. doi:10.1093/bioinformatics/bth464. PMID 15308538.
  27. Price, R. J.; Zukas, A. E. (2005). "Application of Latent Semantic Indexing to Processing of Noisy Text". Intelligence and Security Informatics, Lecture Notes in Computer Science 3495: 602. doi:10.1007/11427995_68. ISBN 978-3-540-25999-2.
  28. Ding, C. (1999). "A Similarity-based Probability Model for Latent Semantic Indexing". Proceedings of the 22nd International ACM SIGIR Conference on Research and Development in Information Retrieval: 59–65.
  29. Bartell, B.; Cottrell, G.; Belew, R. (1992). "Latent Semantic Indexing is an Optimal Special Case of Multidimensional Scaling". Proceedings, ACM SIGIR Conference on Research and Development in Information Retrieval: 161–167.
  30. Graesser, A.; Karnavat, A. (2000). "Latent Semantic Analysis Captures Causal, Goal-oriented, and Taxonomic Structures". Proceedings of CogSci 2000: 184–189.
  31. Dumais, S.; Nielsen, J. (1992). "Automating the assignment of submitted manuscripts to reviewers". Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '92): 233–244. doi:10.1145/133160.133205. ISBN 978-0897915236.
  32. Berry, M. W.; Browne, M. (2005). Understanding Search Engines: Mathematical Modeling and Text Retrieval. Philadelphia: Society for Industrial and Applied Mathematics.
  33. Landauer, T.; et al. (2007). Handbook of Latent Semantic Analysis. Lawrence Erlbaum Associates.
  34. Berry, Michael W.; Dumais, Susan T.; O'Brien, Gavin W. (1995). "Using Linear Algebra for Intelligent Information Retrieval". SIAM Review 37 (4): 573–595.
  35. Dumais, S. (2004). "Latent Semantic Analysis". ARIST Review of Information Science and Technology, vol. 38, Chapter 4.
  36. "Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery". The Sedona Conference, 2007: 189–223.
  37. Foltz, P. W.; Dumais, S. T. (1992). "Personalized Information Delivery: An analysis of information filtering methods". Communications of the ACM 34 (12): 51–60.
  38. Gong, Y.; Liu, X. (2001). "Creating Generic Text Summaries". Proceedings, Sixth International Conference on Document Analysis and Recognition: 903–907.
  39. Bradford, R. (2005). "Efficient Discovery of New Information in Large Text Databases". Proceedings, IEEE International Conference on Intelligence and Security Informatics, Atlanta, Georgia, LNCS Vol. 3495, Springer: 374–380.
  40. Bradford, R. B. (2006). "Application of Latent Semantic Indexing in Generating Graphs of Terrorist Networks". Intelligence and Security Informatics, Lecture Notes in Computer Science 3975: 674–675. doi:10.1007/11760146_84. ISBN 978-3-540-34478-0.
  41. Yarowsky, D.; Florian, R. (1999). "Taking the Load off the Conference Chairs: Towards a Digital Paper-routing Assistant". Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very-Large Corpora: 220–230.
  42. Caron, J. (May 2000). "Applying LSA to Online Customer Support: A Trial Study". Unpublished Master's Thesis.
  43. Soboroff, I.; et al. (1997). "Visualizing Document Authorship Using N-grams and Latent Semantic Indexing". Workshop on New Paradigms in Information Visualization and Manipulation: 43–48.
  44. Monay, F.; Gatica-Perez, D. (2003). "On Image Auto-annotation with Latent Space Models". Proceedings of the 11th ACM International Conference on Multimedia, Berkeley, CA: 275–278.
  45. Maletic, J.; Marcus, A. (2000). "Using latent semantic analysis to identify similarities in source code to support program understanding". Proceedings of the 12th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2000): 46–53. doi:10.1109/TAI.2000.889845. ISBN 978-0-7695-0909-9.
  46. Gee, K. (2003). "Using Latent Semantic Indexing to Filter Spam". Proceedings, 2003 ACM Symposium on Applied Computing, Melbourne, Florida: 460–464.
  47. Landauer, T.; Laham, D.; Derr, M. (2004). "From Paragraph to Graph: Latent Semantic Analysis for Information Visualization". Proceedings of the National Academy of Sciences 101: 5214–5219.
  48. Foltz, Peter W.; Laham, Darrell; Landauer, Thomas K. (1999). "Automated Essay Scoring: Applications to Educational Technology". Proceedings of EdMedia.
  49. Gordon, M.; Dumais, S. (1998). "Using Latent Semantic Indexing for Literature Based Discovery". Journal of the American Society for Information Science 49 (8): 674–685.
  50. "There Has to be a Better Way to Search" (2008). White Paper, Fios, Inc.
  51. Karypis, G.; Han, E. "Fast Supervised Dimensionality Reduction Algorithm with Applications to Document Categorization and Retrieval". Proceedings of CIKM-00, 9th ACM Conference on Information and Knowledge Management.
  52. Řehůřek, Radim (2011). "Subspace Tracking for Latent Semantic Analysis". Advances in Information Retrieval, Lecture Notes in Computer Science 6611: 289–300. doi:10.1007/978-3-642-20161-5_29. ISBN 978-3-642-20160-8.
  53. Bradford, R. (2008). "An Empirical Study of Required Dimensionality for Large-scale Latent Semantic Indexing Applications". Proceedings of the 17th ACM Conference on Information and Knowledge Management, Napa Valley, California, USA: 153–162.
  54. Landauer, Thomas K.; Dumais, Susan T. (2008). "Latent Semantic Analysis". Scholarpedia 3 (11): 4356.
  55. Landauer, T. K.; Foltz, P. W.; Laham, D. (1998). "Introduction to Latent Semantic Analysis". Discourse Processes 25: 259–284.