In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.
LVQ can be understood as a special case of an artificial neural network, more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.[1]
An LVQ system is represented by prototypes $W = (w(1), \ldots, w(n))$ defined in the feature space of observed data. During winner-take-all training, the prototype closest to a given data point under the chosen distance measure is determined and then adapted: it is moved towards the point if it classifies the point correctly, and away from it otherwise.
An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain. LVQ systems can be applied to multi-class classification problems in a natural way.
A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system, see e.g. (Schneider, Biehl, and Hammer, 2009)[2] and references therein.
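One concrete choice in such relevance-learning schemes is a squared Euclidean distance weighted by a non-negative relevance factor per feature, adapted during training alongside the prototypes. The following is a minimal sketch of that distance in Python; the function name and the normalization convention are illustrative assumptions, not a fixed API.

```python
import numpy as np

def relevance_distance(x, w, lam):
    """Squared Euclidean distance with per-feature relevance weights.

    x, w : feature vector and prototype vector of equal length
    lam  : non-negative relevance vector, typically normalized so its
           entries sum to 1; relevance-learning LVQ variants adapt lam
           together with the prototypes, so that discriminative
           features end up with larger weights.
    """
    diff = x - w
    return float(np.sum(lam * diff ** 2))
```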
LVQ has also been applied to tasks such as the classification of text documents.
Below follows an informal description.
The algorithm consists of three basic steps. The algorithm's input is:
- $M$: the number of prototypes (neurons), in the simplest case equal to the number of classes;
- $\vec{w_i}$: the weight vector of each prototype, for $i = 0, 1, \ldots, M-1$;
- $c_i$: the class label associated with each prototype $\vec{w_i}$;
- $\eta$: the learning rate;
- $L$: a list of labeled training samples.
The algorithm's flow is:
1. Take the next input vector $\vec{x}$ with known label $y$ from $L$ and find the closest prototype $\vec{w_m}$, i.e. $d(\vec{x}, \vec{w_m}) = \min_i d(\vec{x}, \vec{w_i})$, where $d$ is the distance measure used (e.g. the Euclidean distance).
2. Update $\vec{w_m}$: move it closer to $\vec{x}$ if they carry the same label, and further away otherwise:
   $\vec{w_m} \gets \vec{w_m} + \eta \cdot (\vec{x} - \vec{w_m})$ if $c_m = y$ (move closer),
   $\vec{w_m} \gets \vec{w_m} - \eta \cdot (\vec{x} - \vec{w_m})$ if $c_m \neq y$ (move away).
3. While samples remain in $L$, go to step 1.
Note: $\vec{w_i}$ and $\vec{x}$ are vectors in the feature space.
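The procedure above can be made concrete in a few lines of code. Below is a minimal sketch of LVQ1 in Python with NumPy, assuming a Euclidean distance, a fixed learning rate, and repeated passes over $L$ (the description above makes a single pass); the function names and the choice to supply initial prototypes externally (e.g. class means or random training samples) are illustrative assumptions.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, eta=0.1, epochs=10):
    """Adapt prototypes with the LVQ1 update rule described above.

    X            : (N, D) array of training vectors (the list L)
    y            : (N,) array of their class labels
    prototypes   : (M, D) array of initial prototype vectors w_i
    proto_labels : (M,) array of prototype class labels c_i
    eta          : learning rate (the eta above)
    """
    W = np.asarray(prototypes, dtype=float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            # Step 1: find the winner, the prototype closest to x.
            m = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            # Step 2: attract the winner if the labels match (c_m = y),
            # repel it otherwise (c_m != y).
            if proto_labels[m] == label:
                W[m] += eta * (x - W[m])
            else:
                W[m] -= eta * (x - W[m])
    return W

def classify(x, prototypes, proto_labels):
    """Assign x the label of its nearest prototype (winner-take-all)."""
    m = int(np.argmin(np.linalg.norm(np.asarray(prototypes) - x, axis=1)))
    return proto_labels[m]
```

With one prototype per class initialized at the class mean, this behaves like a nearest-centroid classifier refined by the update rule; in practice the learning rate $\eta$ is usually decreased over the course of training so that the prototypes settle.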