Manifold regularization

In machine learning, manifold regularization is a technique for using the shape of a dataset to constrain the functions that should be learned on that dataset. In many machine learning problems, the data to be learned do not cover the entire input space. For example, a facial recognition system may not need to classify any possible image, but only the subset of images that contain faces. The technique of manifold learning assumes that the relevant subset of data comes from a manifold, a mathematical structure with useful properties. The technique also assumes that the function to be learned is smooth: data with different labels are not likely to be close together, and so the labeling function should not change quickly in areas where there are likely to be many data points. Because of this assumption, a manifold regularization algorithm can use unlabeled data to inform where the learned function is allowed to change quickly and where it is not, using an extension of the technique of Tikhonov regularization. Manifold regularization algorithms can extend supervised learning algorithms in semi-supervised learning and transductive learning settings, where unlabeled data are available. The technique has been used in applications including medical imaging, geographical imaging, and object recognition.

Manifold regularizer

Motivation

Manifold regularization is a type of regularization, a family of techniques that reduces overfitting and ensures that a problem is well-posed by penalizing complex solutions. In particular, manifold regularization extends the technique of Tikhonov regularization as applied to reproducing kernel Hilbert spaces (RKHSs). Under standard Tikhonov regularization on RKHSs, a learning algorithm attempts to learn a function $f$ from among a hypothesis space of functions $\mathcal{H}$. The hypothesis space is an RKHS, meaning that it is associated with a kernel $K$, and so every candidate function $f$ has a norm $\left\|f\right\|_K$, which represents the complexity of the candidate function in the hypothesis space. When the algorithm considers a candidate function, it takes its norm into account in order to penalize complex functions.

Formally, given a set of labeled training data $(x_1, y_1), \ldots, (x_\ell, y_\ell)$ with $x_i \in X$, $y_i \in Y$, and a loss function $V$, a learning algorithm using Tikhonov regularization will attempt to solve the expression

$$\underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} V(f(x_i), y_i) + \gamma \left\|f\right\|_K^2$$

where $\gamma$ is a hyperparameter that controls how much the algorithm will prefer simpler functions over functions that fit the data better.

Manifold regularization adds a second regularization term, the intrinsic regularizer, to the ambient regularizer used in standard Tikhonov regularization. Under the manifold assumption in machine learning, the data in question do not come from the entire input space $X$, but instead from a nonlinear manifold $M \subset X$. The geometry of this manifold, the intrinsic space, is used to determine the regularization norm.[1]
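
For a function of the form $f(x) = \sum_j \alpha_j K(x_j, x)$, the RKHS norm satisfies $\left\|f\right\|_K^2 = \alpha^T K \alpha$, so the Tikhonov objective can be evaluated numerically. The following is a minimal Python sketch of the quantity being minimized; the helper name and the default squared-error choice of $V$ are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

def tikhonov_objective(alpha, K, y, gamma, loss=lambda pred, target: (pred - target) ** 2):
    """Evaluate (1/l) * sum_i V(f(x_i), y_i) + gamma * ||f||_K^2 for
    f(x) = sum_j alpha_j K(x_j, x), using ||f||_K^2 = alpha^T K alpha."""
    predictions = K @ alpha           # f evaluated at the training points
    empirical_risk = loss(predictions, y).mean()
    complexity = alpha @ K @ alpha    # squared RKHS norm of the candidate
    return empirical_risk + gamma * complexity
```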

Laplacian norm

There are many possible choices for the intrinsic regularizer $\left\|f\right\|_I$. Many natural choices involve the gradient on the manifold $\nabla_M$, which can provide a measure of how smooth a target function is. A smooth function should change slowly where the input data are dense; that is, the gradient $\nabla_M f(x)$ should be small where the marginal probability density $\mathcal{P}_X(x)$, the probability density of a randomly drawn data point appearing at $x$, is large. This gives one appropriate choice for the intrinsic regularizer:

$$\left\|f\right\|_I^2 = \int_{x \in M} \left\|\nabla_M f(x)\right\|^2 \, d\mathcal{P}_X(x)$$

In practice, this norm cannot be computed directly because the marginal distribution $\mathcal{P}_X$ is unknown, but it can be estimated from the provided data.
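
Because $\mathcal{P}_X$ enters only through an expectation, the intrinsic norm can be approximated by averaging $\left\|\nabla_M f(x)\right\|^2$ over the observed data points. A minimal sketch, assuming access to a hypothetical `grad_f` helper that returns the (manifold) gradient of $f$ at a point, which is not something defined in the text:

```python
import numpy as np

def intrinsic_norm_sq_estimate(samples, grad_f):
    """Monte Carlo estimate of ||f||_I^2 = E_{x ~ P_X} [ ||grad_M f(x)||^2 ],
    treating the observed inputs as samples from the unknown marginal P_X."""
    grads = np.array([grad_f(x) for x in samples])  # one gradient per data point
    return np.mean(np.sum(grads ** 2, axis=1))      # average squared gradient norm
```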

Graph-based approach to the Laplacian norm

When the distances between input points are interpreted as a graph, then the Laplacian matrix of the graph can help to estimate the marginal distribution. Suppose that the input data include $\ell$ labeled examples (pairs of an input $x$ and a label $y$) and $u$ unlabeled examples (inputs without associated labels). Define $W$ to be a matrix of edge weights for a graph, where $W_{ij}$ is a distance measure between the data points $x_i$ and $x_j$. Define $D$ to be a diagonal matrix with $D_{ii} = \sum_{j=1}^{\ell+u} W_{ij}$ and $L$ to be the Laplacian matrix $D - W$. Then, as the number of data points $\ell + u$ increases, $L$ converges to the Laplace–Beltrami operator $\Delta_M$, which is the divergence of the gradient $\nabla_M$.[2] [3] Then, if $\mathbf{f}$ is the vector of the values of $f$ at the data points, $\mathbf{f} = [f(x_1), \ldots, f(x_{\ell+u})]^T$, the intrinsic norm can be estimated:

$$\left\|f\right\|_I^2 = \frac{1}{(\ell+u)^2} \mathbf{f}^T L \mathbf{f}$$

As the number of data points $\ell + u$ increases, this empirical definition of $\left\|f\right\|_I^2$ converges to the definition when $\mathcal{P}_X$ is known.
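
The following Python sketch builds $W$, $D$, and $L$ and evaluates the empirical intrinsic norm. The Gaussian (heat-kernel) edge weights are a common choice but an assumption here, since the text leaves the weighting scheme open:

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Build edge weights W, the degree matrix D, and the Laplacian L = D - W
    from all labeled and unlabeled inputs X of shape (l + u, d)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))  # assumed Gaussian (heat-kernel) weights
    np.fill_diagonal(W, 0.0)                  # no self-edges
    D = np.diag(W.sum(axis=1))
    return D - W

def intrinsic_norm_sq(f_values, L):
    """Empirical intrinsic norm (1/(l+u)^2) * f^T L f."""
    n = len(f_values)
    return (f_values @ L @ f_values) / n ** 2
```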

Solving the regularization problem with the graph-based approach

Using the weights $\gamma_A$ and $\gamma_I$ for the ambient and intrinsic regularizers, the final expression to be solved becomes:

$$\underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} V(f(x_i), y_i) + \gamma_A \left\|f\right\|_K^2 + \frac{\gamma_I}{(\ell+u)^2} \mathbf{f}^T L \mathbf{f}$$

As with other kernel methods, $\mathcal{H}$ may be an infinite-dimensional space, so if the regularization expression cannot be solved explicitly, it is impossible to search the entire space for a solution. Instead, a representer theorem shows that under certain conditions on the choice of the norm $\left\|f\right\|_I$, the optimal solution $f^*$ must be a linear combination of the kernel centered at each of the input points: for some weights $\alpha_i$,

$$f^*(x) = \sum_{i=1}^{\ell+u} \alpha_i K(x_i, x)$$

Using this result, it is possible to search for the optimal solution $f^*$ by searching the finite-dimensional space defined by the possible choices of $\alpha_i$.
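
In code, the representer theorem means a candidate solution is fully described by the coefficient vector $\alpha$; evaluating it at a new point only requires kernel values against the $\ell + u$ training inputs. A minimal sketch, with a Gaussian kernel assumed purely for concreteness:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Assumed kernel; any positive-definite kernel K could be substituted."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def f_star(x, alpha, X_train, kernel=gaussian_kernel):
    """Evaluate f*(x) = sum_i alpha_i K(x_i, x) over the l + u training inputs."""
    return sum(a_i * kernel(x_i, x) for a_i, x_i in zip(alpha, X_train))
```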

Functional approach to the Laplacian norm

The idea behind the graph Laplacian is to use neighboring points to estimate the Laplacian. This approach is akin to local averaging methods, which are known to scale poorly in high-dimensional problems; indeed, the graph Laplacian is known to suffer from the curse of dimensionality. Fortunately, it is possible to leverage the expected smoothness of the function to be estimated through more advanced functional analysis. This method consists of estimating the Laplacian operator through derivatives of the kernel, written $\partial_{1,j} K(x_i, x)$, where $\partial_{1,j}$ denotes the partial derivative with respect to the $j$-th coordinate of the first variable.[4] This second approach to the Laplacian norm can be related to meshfree methods, which contrast with the finite difference method for PDEs.
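
For many kernels these derivatives are available in closed form. As an illustration (an assumption made for concreteness, not the construction of the cited paper), for a Gaussian kernel $K(x, x') = \exp(-\|x - x'\|^2 / 2\sigma^2)$ the derivative with respect to the $j$-th coordinate of the first argument is $\partial_{1,j} K(x, x') = -\frac{x_j - x'_j}{\sigma^2} K(x, x')$:

```python
import numpy as np

def gaussian_kernel(x, x_prime, sigma=1.0):
    return np.exp(-np.sum((x - x_prime) ** 2) / (2 * sigma ** 2))

def partial_1_j(x, x_prime, j, sigma=1.0):
    """Partial derivative of the Gaussian kernel with respect to the j-th
    coordinate of its first argument: -(x_j - x'_j) / sigma^2 * K(x, x')."""
    return -(x[j] - x_prime[j]) / sigma ** 2 * gaussian_kernel(x, x_prime, sigma)
```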

Applications

Manifold regularization can extend a variety of algorithms that can be expressed using Tikhonov regularization, by choosing an appropriate loss function $V$ and hypothesis space $\mathcal{H}$. Two commonly used examples are the families of support vector machines and regularized least squares algorithms. (Regularized least squares includes the ridge regression algorithm; the related algorithms of LASSO and elastic net regularization can be expressed as support vector machines.[5] [6]) The extended versions of these algorithms are called Laplacian Regularized Least Squares (abbreviated LapRLS) and Laplacian Support Vector Machines (LapSVM), respectively.

Laplacian Regularized Least Squares (LapRLS)

Regularized least squares (RLS) is a family of regression algorithms: algorithms that predict a value $y = f(x)$ for its inputs $x$, with the goal that the predicted values should be close to the true labels for the data. In particular, RLS is designed to minimize the mean squared error between the predicted values and the true labels, subject to regularization. Ridge regression is one form of RLS; in general, RLS is the same as ridge regression combined with the kernel method. The problem statement for RLS results from choosing the loss function $V$ in Tikhonov regularization to be the mean squared error:

$$f^* = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} (f(x_i) - y_i)^2 + \gamma \left\|f\right\|_K^2$$

Thanks to the representer theorem, the solution can be written as a weighted sum of the kernel evaluated at the data points:

$$f^*(x) = \sum_{i=1}^{\ell} \alpha_i^* K(x_i, x)$$

and solving for $\alpha^*$ gives:

$$\alpha^* = (K + \gamma \ell I)^{-1} Y$$

where $K$ is defined to be the kernel matrix, with $K_{ij} = K(x_i, x_j)$, and $Y$ is the vector of data labels.
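
The closed-form coefficients can be computed directly; a minimal sketch, assuming a precomputed kernel matrix and using a linear solve rather than an explicit inverse:

```python
import numpy as np

def rls_fit(K, y, gamma):
    """Solve (K + gamma * l * I) alpha = Y for the RLS coefficients,
    where K is the l x l kernel matrix over the labeled points."""
    l = K.shape[0]
    return np.linalg.solve(K + gamma * l * np.eye(l), y)
```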

Adding a Laplacian term for manifold regularization gives the Laplacian RLS statement:

$$f^* = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} (f(x_i) - y_i)^2 + \gamma_A \left\|f\right\|_K^2 + \frac{\gamma_I}{(\ell+u)^2} \mathbf{f}^T L \mathbf{f}$$

The representer theorem for manifold regularization again gives

$$f^*(x) = \sum_{i=1}^{\ell+u} \alpha_i^* K(x_i, x)$$

and this yields an expression for the vector $\alpha^*$. Letting $K$ be the kernel matrix as above, $Y$ be the vector of data labels, and $J$ be the $(\ell+u) \times (\ell+u)$ block matrix $\begin{bmatrix} I_\ell & 0 \\ 0 & 0_u \end{bmatrix}$:

$$\alpha^* = \underset{\alpha \in \mathbb{R}^{\ell+u}}{\operatorname{arg\,min}} \; \frac{1}{\ell} (Y - JK\alpha)^T (Y - JK\alpha) + \gamma_A \alpha^T K \alpha + \frac{\gamma_I}{(\ell+u)^2} \alpha^T K L K \alpha$$

with a solution of

$$\alpha^* = \left( JK + \gamma_A \ell I + \frac{\gamma_I \ell}{(\ell+u)^2} LK \right)^{-1} Y$$
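
A direct implementation of this solution; a minimal sketch assuming the label vector is padded with zeros for the unlabeled points, as the $(\ell+u)$-dimensional linear system implies:

```python
import numpy as np

def laprls_fit(K, L, y_labeled, gamma_A, gamma_I):
    """Compute alpha* = (J K + gamma_A l I + (gamma_I l / (l+u)^2) L K)^{-1} Y.
    K and L are (l+u) x (l+u); y_labeled holds the l known labels."""
    y = np.asarray(y_labeled, dtype=float)
    n = K.shape[0]                       # n = l + u
    l = len(y)
    J = np.zeros((n, n))
    J[:l, :l] = np.eye(l)                # block matrix [[I_l, 0], [0, 0_u]]
    Y = np.concatenate([y, np.zeros(n - l)])  # labels padded with zeros
    A = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n ** 2) * (L @ K)
    return np.linalg.solve(A, Y)
```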

LapRLS has been applied to problems including sensor networks,[7] medical imaging,[8] [9] object detection,[10] spectroscopy,[11] document classification,[12] drug-protein interactions,[13] and compressing images and videos.[14]

Laplacian Support Vector Machines (LapSVM)

Support vector machines (SVMs) are a family of algorithms often used for classifying data into two or more groups, or classes. Intuitively, an SVM draws a boundary between classes so that the closest labeled examples to the boundary are as far away as possible. This can be expressed directly as a quadratic program, but it is also equivalent to Tikhonov regularization with the hinge loss function, $V(f(x), y) = \max(0, 1 - yf(x))$:

$$f^* = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} \max(0, 1 - y_i f(x_i)) + \gamma \left\|f\right\|_K^2$$
[15] [16]
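
For a kernel expansion, the hinge-loss objective can be evaluated in the same way as the squared-loss one; a brief sketch (the function name is illustrative, not a standard API):

```python
import numpy as np

def svm_objective(alpha, K, y, gamma):
    """Evaluate (1/l) * sum_i max(0, 1 - y_i f(x_i)) + gamma * ||f||_K^2
    with f(x) = sum_j alpha_j K(x_j, x) and ||f||_K^2 = alpha^T K alpha."""
    margins = y * (K @ alpha)                      # y_i * f(x_i)
    hinge = np.maximum(0.0, 1.0 - margins).mean()  # average hinge loss
    return hinge + gamma * (alpha @ K @ alpha)
```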

Adding the intrinsic regularization term to this expression gives the LapSVM problem statement:

$$f^* = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} \max(0, 1 - y_i f(x_i)) + \gamma_A \left\|f\right\|_K^2 + \frac{\gamma_I}{(\ell+u)^2} \mathbf{f}^T L \mathbf{f}$$

Again, the representer theorem allows the solution to be expressed in terms of the kernel evaluated at the data points:

$$f^*(x) = \sum_{i=1}^{\ell+u} \alpha_i^* K(x_i, x)$$

$\alpha$ can be found by writing the problem as a quadratic program and solving the dual problem. Again letting $K$ be the kernel matrix and $J$ be the block matrix $\begin{bmatrix} I_\ell & 0 \\ 0 & 0_u \end{bmatrix}$, the solution can be shown to be

$$\alpha = \left( 2\gamma_A I + 2\frac{\gamma_I}{(\ell+u)^2} LK \right)^{-1} J^T Y \beta^*$$

where $\beta^*$ is the solution to the dual problem

$$\begin{aligned} \beta^* = \max_{\beta \in \mathbb{R}^\ell} \quad & \sum_{i=1}^{\ell} \beta_i - \frac{1}{2} \beta^T Q \beta \\ \text{subject to} \quad & \sum_{i=1}^{\ell} \beta_i y_i = 0 \\ & 0 \le \beta_i \le \frac{1}{\ell}, \quad i = 1, \ldots, \ell \end{aligned}$$

and $Q$ is defined by

$$Q = YJK \left( 2\gamma_A I + 2\frac{\gamma_I}{(\ell+u)^2} LK \right)^{-1} J^T Y$$
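
One way this could be solved numerically is with an off-the-shelf constrained optimizer. For the matrix products above to have consistent dimensions, the sketch below treats $Y$ as the diagonal matrix of the $\ell$ labels and $J$ as the $\ell \times (\ell+u)$ matrix selecting the labeled coordinates (the nonzero block of the block matrix above); this reading, and the use of SciPy's SLSQP solver, are assumptions made for the example rather than part of the original derivation:

```python
import numpy as np
from scipy.optimize import minimize

def lapsvm_fit(K, L, y_labeled, gamma_A, gamma_I):
    """Solve the LapSVM dual for beta*, then recover the expansion coefficients alpha.
    K and L are (l+u) x (l+u); y_labeled holds the l labels in {-1, +1}."""
    y = np.asarray(y_labeled, dtype=float)
    n, l = K.shape[0], len(y)
    J = np.zeros((l, n))
    J[:, :l] = np.eye(l)                          # selects the labeled coordinates
    Y = np.diag(y)
    M = 2 * gamma_A * np.eye(n) + (2 * gamma_I / n ** 2) * (L @ K)
    Minv_JTY = np.linalg.solve(M, J.T @ Y)        # M^{-1} J^T Y, reused below
    Q = Y @ J @ K @ Minv_JTY
    Q = 0.5 * (Q + Q.T)                           # symmetrize for numerical stability

    # Dual: maximize sum_i beta_i - 0.5 beta^T Q beta
    # subject to sum_i beta_i y_i = 0 and 0 <= beta_i <= 1/l.
    objective = lambda b: 0.5 * b @ Q @ b - b.sum()
    gradient = lambda b: Q @ b - np.ones(l)
    constraint = {"type": "eq", "fun": lambda b: b @ y}
    result = minimize(objective, np.zeros(l), jac=gradient,
                      bounds=[(0.0, 1.0 / l)] * l, constraints=[constraint],
                      method="SLSQP")
    beta = result.x
    return Minv_JTY @ beta                        # alpha = M^{-1} J^T Y beta*
```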

LapSVM has been applied to problems including geographical imaging,[17] [18] [19] medical imaging,[20] [21] [22] face recognition,[23] machine maintenance,[24] and brain–computer interfaces.[25]

Limitations

The intrinsic norm $\left\|f\right\|_I$ can be very close to the ambient norm $\left\|f\right\|_K$: for example, if the data consist of two classes that lie on perpendicular lines, the intrinsic norm will be equal to the ambient norm. In this case, unlabeled data have no effect on the solution learned by manifold regularization, even if the data fit the algorithm's assumption that the separator should be smooth. Approaches related to co-training have been proposed to address this limitation.[27]

If the dataset contains many examples, the kernel matrix $K$ becomes very large, and a manifold regularization algorithm may become prohibitively slow to compute. Online algorithms and sparse approximations of the manifold may help in this case.[28]

Notes and References

  1. Belkin, Mikhail; Niyogi, Partha; Sindhwani, Vikas (2006). "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples". The Journal of Machine Learning Research. 7: 2399–2434.
  2. Hein, Matthias; Audibert, Jean-Yves; von Luxburg, Ulrike (2005). "From graphs to manifolds – weak and strong pointwise consistency of graph Laplacians". Learning Theory. Lecture Notes in Computer Science. Vol. 3559. Springer. pp. 470–485. doi:10.1007/11503415_32. ISBN 978-3-540-26556-6. CiteSeerX 10.1.1.103.82.
  3. Belkin, Mikhail; Niyogi, Partha (2005). "Towards a theoretical foundation for Laplacian-based manifold methods". Learning Theory. Lecture Notes in Computer Science. Vol. 3559. Springer. pp. 486–500. doi:10.1007/11503415_33. ISBN 978-3-540-26556-6. CiteSeerX 10.1.1.127.795.
  4. Cabannes, Vivien; Pillaud-Vivien, Loucas; Bach, Francis; Rudi, Alessandro (2021). "Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning". arXiv:2009.04324 [stat.ML].
  5. Jaggi, Martin (2014). "An Equivalence between the Lasso and Support Vector Machines". In Suykens, Johan; Signoretto, Marco; Argyriou, Andreas (eds.). Chapman and Hall/CRC.
  6. Zhou, Quan; Chen, Wenlin; Song, Shiji; Gardner, Jacob; Weinberger, Kilian; Chen, Yixin. "A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing". Association for the Advancement of Artificial Intelligence.
  7. Pan, Jeffrey Junfeng; Yang, Qiang; Chang, Hong; Yeung, Dit-Yan (2006). "A manifold regularization approach to calibration reduction for sensor-network based tracking". Proceedings of the National Conference on Artificial Intelligence. Vol. 21. Menlo Park, CA; Cambridge, MA; London: AAAI Press; MIT Press. p. 988.
  8. Zhang, Daoqiang; Shen, Dinggang (2011). "Semi-supervised multimodal classification of Alzheimer's disease". Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on. IEEE. pp. 1628–1631. doi:10.1109/ISBI.2011.5872715.
  9. Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang (2014). "Interactive Prostate Segmentation Based on Adaptive Feature Selection and Manifold Regularization". Machine Learning in Medical Imaging. Lecture Notes in Computer Science. Vol. 8679. Springer. pp. 264–271. doi:10.1007/978-3-319-10581-9_33. ISBN 978-3-319-10580-2.
  10. Pillai, Sudeep. "Semi-supervised Object Detector Learning from Minimal Labels".
  11. Wan, Songjing; Wu, Di; Liu, Kangsheng (2012). "Semi-Supervised Machine Learning Algorithm in Near Infrared Spectral Calibration: A Case Study on Diesel Fuels". Advanced Science Letters. 11 (1): 416–419. doi:10.1166/asl.2012.3044.
  12. Wang, Ziqiang; Sun, Xia; Zhang, Lijie; Qian, Xu (2013). "Document Classification based on Optimal Laprls". Journal of Software. 8 (4): 1011–1018. doi:10.4304/jsw.8.4.1011-1018.
  13. Xia, Zheng; Wu, Ling-Yun; Zhou, Xiaobo; Wong, Stephen T.C. (2010). "Semi-supervised drug-protein interaction prediction from heterogeneous biological spaces". BMC Systems Biology. 4 (Suppl 2): S6. CiteSeerX 10.1.1.349.7173. doi:10.1186/1752-0509-4-S2-S6. PMID 20840733. PMC 2982693.
  14. Cheng, Li; Vishwanathan, S. V. N. (2007). "Learning to compress images and videos". Proceedings of the 24th International Conference on Machine Learning. ACM. pp. 161–168.
  15. Lin, Yi; Wahba, Grace; Zhang, Hao; Lee, Yoonkyung (2002). "Statistical properties and adaptive tuning of support vector machines". Machine Learning. 48 (1–3): 115–136. doi:10.1023/A:1013951620650.
  16. Wahba, Grace; et al. (1999). "Support vector machines, reproducing kernel Hilbert spaces and the randomized GACV". Advances in Kernel Methods – Support Vector Learning. 6: 69–87. CiteSeerX 10.1.1.53.2114.
  17. Kim, Wonkook; Crawford, Melba M. (2010). "Adaptive classification for hyperspectral image data using manifold regularization kernel machines". IEEE Transactions on Geoscience and Remote Sensing. 48 (11): 4110–4121. doi:10.1109/TGRS.2010.2076287.
  18. Camps-Valls, Gustavo; Tuia, Devis; Bruzzone, Lorenzo; Atli Benediktsson, Jon (2014). "Advances in hyperspectral image classification: Earth monitoring with statistical learning methods". IEEE Signal Processing Magazine. 31 (1): 45–54. arXiv:1310.5107. Bibcode:2014ISPM...31...45C. doi:10.1109/msp.2013.2279179.
  19. Gómez-Chova, Luis; Camps-Valls, Gustavo; Muñoz-Marí, Jordi; Calpe, Javier (2007). "Semi-supervised cloud screening with Laplacian SVM". Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International. IEEE. pp. 1521–1524. doi:10.1109/IGARSS.2007.4423098.
  20. Cheng, Bo; Zhang, Daoqiang; Shen, Dinggang (2012). "Domain transfer learning for MCI conversion prediction". Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012. Lecture Notes in Computer Science. Vol. 7510 (Pt 1). Springer. pp. 82–90. doi:10.1007/978-3-642-33415-3_11. ISBN 978-3-642-33414-6. PMID 23285538. PMC 3761352.
  21. Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Pesce, Lorenzo L. (2010). "Enhancement of breast CADx with unlabeled data". Medical Physics. 37 (8): 4155–4172. Bibcode:2010MedPh..37.4155J. doi:10.1118/1.3455704. PMID 20879576. PMC 2921421.
  22. Wu, Jiang; Diao, Yuan-Bo; Li, Meng-Long; Fang, Ya-Ping; Ma, Dai-Chuan (2009). "A semi-supervised learning based method: Laplacian support vector machine used in diabetes disease diagnosis". Interdisciplinary Sciences: Computational Life Sciences. 1 (2): 151–155. doi:10.1007/s12539-009-0016-2. PMID 20640829.
  23. Wang, Ziqiang; Zhou, Zhiqiang; Sun, Xia; Qian, Xu; Sun, Lijun (2012). "Enhanced LapSVM Algorithm for Face Recognition". International Journal of Advancements in Computing Technology. 4 (17).
  24. Zhao, Xiukuan; Li, Min; Xu, Jinwu; Song, Gangbing (2011). "An effective procedure exploiting unlabeled data to build monitoring system". Expert Systems with Applications. 38 (8): 10199–10204. doi:10.1016/j.eswa.2011.02.078.
  25. Zhong, Ji-Ying; Lei, Xu; Yao, D. (2009). "Semi-supervised learning based on manifold in BCI". Journal of Electronics Science and Technology of China. 7 (1): 22–26.
  26. Zhu, Xiaojin (2005). "Semi-supervised learning literature survey". CiteSeerX 10.1.1.99.9681.
  27. Sindhwani, Vikas; Rosenberg, David S. (2008). "An RKHS for multi-view learning and manifold co-regularization". Proceedings of the 25th International Conference on Machine Learning. ACM. pp. 976–983.
  28. Goldberg, Andrew; Li, Ming; Zhu, Xiaojin (2008). "Online Manifold Regularization: A New Learning Setting and Empirical Study". Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. Vol. 5211. pp. 393–407. doi:10.1007/978-3-540-87479-9_44. ISBN 978-3-540-87478-2.