Modes of variation

In statistics, modes of variation[1] are a continuously indexed set of vectors or functions, centered at a mean, that are used to depict the variation in a population or sample. Typically, the variation in the data is decomposed into eigencomponents, ordered by descending eigenvalues, whose directions are given by the corresponding eigenvectors or eigenfunctions. Modes of variation visualize this decomposition and provide an efficient description of the variation around the mean. In both principal component analysis (PCA) and functional principal component analysis (FPCA), modes of variation play an important role in visualizing and describing the variation contributed by each eigencomponent.[2] In real-world applications, the eigencomponents and their associated modes of variation help to interpret complex data, especially in exploratory data analysis (EDA).

Formulation

Modes of variation are a natural extension of PCA and FPCA.

If a random vector $X = (X_1, X_2, \ldots, X_p)^T$ has mean vector $\boldsymbol{\mu} \in \mathbb{R}^p$ and covariance matrix $\Sigma_{p \times p}$ with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$ and corresponding orthonormal eigenvectors $e_1, e_2, \ldots, e_p$, then, by the eigendecomposition of a real symmetric matrix, the covariance matrix can be decomposed as

$$\Sigma = Q \Lambda Q^T,$$

where $Q$ is an orthogonal matrix whose columns are the eigenvectors of $\Sigma$, and $\Lambda$ is a diagonal matrix whose entries are the eigenvalues of $\Sigma$.
By the Karhunen–Loève expansion for random vectors, the centered vector can be expressed in the eigenbasis as

$$X - \boldsymbol{\mu} = \sum_{k=1}^p \xi_k e_k,$$

where $\xi_k = e_k^T (X - \boldsymbol{\mu})$ is the principal component[3] associated with the $k$-th eigenvector $e_k$, with the properties

$$\operatorname{E}(\xi_k) = 0, \quad \operatorname{Var}(\xi_k) = \lambda_k, \quad \text{and} \quad \operatorname{E}(\xi_k \xi_l) = 0 \ \text{for}\ l \neq k.$$

Then the $k$-th mode of variation of $X$ is the set of vectors, indexed by $\alpha$,

$$m_{k,\alpha} = \boldsymbol{\mu} \pm \alpha \sqrt{\lambda_k}\, e_k, \quad \alpha \in [-A, A],$$

where $A$ is typically selected as 2 or 3.
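The vector construction above can be carried out directly with numpy. The following sketch uses a hypothetical 3-dimensional mean and covariance (not from the source) and forms $m_{k,\alpha} = \boldsymbol{\mu} \pm \alpha\sqrt{\lambda_k}\,e_k$:

```python
import numpy as np

# A hypothetical 3-dimensional example: mean vector and covariance matrix.
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

# Eigendecomposition Sigma = Q Lambda Q^T.  np.linalg.eigh returns the
# eigenvalues in ascending order, so reverse to get lambda_1 >= ... >= lambda_p.
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]
lam = eigvals[order]            # lambda_1, ..., lambda_p (descending)
E = eigvecs[:, order]           # columns are e_1, ..., e_p

def mode_of_variation(k, alpha):
    """k-th mode of variation m_{k,alpha} = mu + alpha*sqrt(lambda_k)*e_k.

    k is 0-based here (k = 0 is the first mode)."""
    return mu + alpha * np.sqrt(lam[k]) * E[:, k]

# The first mode, traced over alpha in [-2, 2].
first_mode = [mode_of_variation(0, a) for a in np.linspace(-2.0, 2.0, 5)]
```

At $\alpha = 0$ every mode passes through the mean vector, and the spread of the vectors along $e_k$ grows with $\sqrt{\lambda_k}$.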

Similarly, for a square-integrable random function $X(t),\ t \in \mathcal{T} \subset \mathbb{R}^p$, where typically $p = 1$ and $\mathcal{T}$ is an interval, denote the mean function by $\mu(t) = \operatorname{E}(X(t))$ and the covariance function by

$$G(s,t) = \operatorname{Cov}(X(s), X(t)) = \sum_{k=1}^\infty \lambda_k \varphi_k(s) \varphi_k(t),$$

where $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$ are the eigenvalues and $\{\varphi_1, \varphi_2, \ldots\}$ are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator

$$G: L^2(\mathcal{T}) \to L^2(\mathcal{T}), \quad G(f)(t) = \int_{\mathcal{T}} G(s,t) f(s)\, ds.$$

By the Karhunen–Loève theorem, the centered function can be expressed in the eigenbasis as

$$X(t) - \mu(t) = \sum_{k=1}^\infty \xi_k \varphi_k(t),$$

where

$$\xi_k = \int_{\mathcal{T}} (X(t) - \mu(t)) \varphi_k(t)\, dt$$

is the $k$-th principal component, with the properties

$$\operatorname{E}(\xi_k) = 0, \quad \operatorname{Var}(\xi_k) = \lambda_k, \quad \text{and} \quad \operatorname{E}(\xi_k \xi_l) = 0 \ \text{for}\ l \neq k.$$

Then the $k$-th mode of variation of $X(t)$ is the set of functions, indexed by $\alpha$,

$$m_{k,\alpha}(t) = \mu(t) \pm \alpha \sqrt{\lambda_k}\, \varphi_k(t), \quad t \in \mathcal{T},\ \alpha \in [-A, A],$$

that are viewed simultaneously over the range of $\alpha$, usually for $A = 2$ or $3$.
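A concrete functional example (not drawn from the source) is standard Brownian motion on $[0, 1]$, whose Karhunen–Loève eigenpairs are known in closed form: $\lambda_k = ((k - 1/2)\pi)^{-2}$ and $\varphi_k(t) = \sqrt{2}\sin((k - 1/2)\pi t)$. Its modes of variation can therefore be written down directly; a minimal numpy sketch evaluating $m_{k,\alpha}(t)$ on a grid:

```python
import numpy as np

# Known Karhunen-Loeve eigenpairs of standard Brownian motion on [0, 1]:
#   lambda_k = 1 / ((k - 1/2)^2 * pi^2),  phi_k(t) = sqrt(2) * sin((k - 1/2)*pi*t).
t = np.linspace(0.0, 1.0, 201)
mu = np.zeros_like(t)           # Brownian motion has mean function 0

def eigenpair(k):
    """Return (lambda_k, phi_k evaluated on the grid), for k = 1, 2, ..."""
    freq = (k - 0.5) * np.pi
    return 1.0 / freq**2, np.sqrt(2.0) * np.sin(freq * t)

def mode_of_variation(k, alpha):
    """m_{k,alpha}(t) = mu(t) + alpha * sqrt(lambda_k) * phi_k(t) on the grid."""
    lam, phi = eigenpair(k)
    return mu + alpha * np.sqrt(lam) * phi

# First mode of variation, viewed simultaneously for alpha = -2, ..., 2.
first_mode = np.stack([mode_of_variation(1, a) for a in (-2, -1, 0, 1, 2)])
```

Plotting the rows of `first_mode` reproduces the usual "fan" around the mean function that such mode-of-variation displays show.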

Estimation

The formulation above is derived from properties of the population; in real-world applications these quantities are unknown and must be estimated from data. The key step is to estimate the mean and covariance.

Suppose the data $x_1, x_2, \ldots, x_n$ represent $n$ independent drawings from a $p$-dimensional population $X$ with mean vector $\boldsymbol{\mu}$ and covariance matrix $\Sigma$. These data yield the sample mean vector $\overline{x}$ and the sample covariance matrix $S$ with eigenvalue–eigenvector pairs $(\hat{\lambda}_1, \hat{e}_1), (\hat{\lambda}_2, \hat{e}_2), \ldots, (\hat{\lambda}_p, \hat{e}_p)$. Then the $k$-th mode of variation of $X$ can be estimated by

$$\hat{m}_{k,\alpha} = \overline{x} \pm \alpha \sqrt{\hat{\lambda}_k}\, \hat{e}_k, \quad \alpha \in [-A, A].$$
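A minimal sketch of this plug-in estimator with numpy, using simulated draws from a hypothetical 3-dimensional population (the data and parameters are illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate n independent draws from a hypothetical 3-dimensional population.
mu_true = np.array([1.0, 0.0, -1.0])
A = rng.normal(size=(3, 3))
Sigma_true = A @ A.T                     # a valid (positive semi-definite) covariance
x = rng.multivariate_normal(mu_true, Sigma_true, size=500)

# Sample mean vector and sample covariance matrix.
xbar = x.mean(axis=0)
S = np.cov(x, rowvar=False)

# Eigenvalue-eigenvector pairs of S, in descending order of eigenvalue.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
lam_hat = eigvals[order]
e_hat = eigvecs[:, order]

def mode_hat(k, alpha):
    """Estimated k-th mode: xbar + alpha * sqrt(lam_hat_k) * e_hat_k (k is 0-based)."""
    return xbar + alpha * np.sqrt(lam_hat[k]) * e_hat[:, k]
```

The estimated modes differ from the population ones only in that $\overline{x}$, $\hat{\lambda}_k$, and $\hat{e}_k$ replace $\boldsymbol{\mu}$, $\lambda_k$, and $e_k$.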

Consider $n$ realizations $X_1(t), X_2(t), \ldots, X_n(t)$ of a square-integrable random function $X(t),\ t \in \mathcal{T}$, with mean function $\mu(t) = \operatorname{E}(X(t))$ and covariance function $G(s,t) = \operatorname{Cov}(X(s), X(t))$. Functional principal component analysis provides detailed methods for estimating $\mu(t)$ and $G(s,t)$, often involving pointwise estimation followed by smoothing or interpolation. Substituting these estimates for the unknown quantities, the $k$-th mode of variation of $X(t)$ can be estimated by

$$\hat{m}_{k,\alpha}(t) = \hat{\mu}(t) \pm \alpha \sqrt{\hat{\lambda}_k}\, \hat{\varphi}_k(t), \quad t \in \mathcal{T},\ \alpha \in [-A, A].$$
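For densely observed curves on a common grid, one simple way to sketch this estimator is to discretize the covariance operator: eigenvectors of $\hat{G}\,\Delta t$ approximate the eigenfunctions after rescaling by $1/\sqrt{\Delta t}$. The example below simulates hypothetical curves from a two-component Karhunen–Loève model (all names and parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n curves on a common grid from a two-component KL model:
#   X_i(t) = mu(t) + xi_{i1} phi_1(t) + xi_{i2} phi_2(t).
n, m = 200, 101
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
mu_true = np.sin(2.0 * np.pi * t)
phi1 = np.sqrt(2.0) * np.sin(np.pi * t)
phi2 = np.sqrt(2.0) * np.cos(np.pi * t)
scores = rng.normal(size=(n, 2)) * np.sqrt([2.0, 0.5])   # Var(xi_k) = lambda_k
X = mu_true + scores[:, :1] * phi1 + scores[:, 1:] * phi2

# Pointwise estimates of the mean and covariance functions.
mu_hat = X.mean(axis=0)
G_hat = np.cov(X, rowvar=False)

# Discretized covariance operator: eigenvectors of G_hat * dt approximate the
# eigenfunctions; rescale so that sum(phi_hat**2) * dt = 1.
eigvals, eigvecs = np.linalg.eigh(G_hat * dt)
order = np.argsort(eigvals)[::-1]
lam_hat = eigvals[order]
phi_hat = eigvecs[:, order] / np.sqrt(dt)

def mode_hat(k, alpha):
    """Estimated k-th mode: mu_hat(t) + alpha * sqrt(lam_hat_k) * phi_hat_k(t)."""
    return mu_hat + alpha * np.sqrt(lam_hat[k]) * phi_hat[:, k]
```

With enough curves, $\hat{\lambda}_1$ should be close to the true first eigenvalue and $\hat{\varphi}_1$ close to $\varphi_1$ up to sign, so the estimated modes track the population ones.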

Applications

Modes of variation are useful for visualizing and describing the variation patterns in the data, sorted by the eigenvalues. In real-world applications, the modes of variation associated with eigencomponents make it possible to interpret complex data, such as the evolution of function-valued traits[4] and other infinite-dimensional data.[5] To illustrate how modes of variation work in practice, two examples are shown in the graphs to the right, which display the first two modes of variation. The solid curve represents the sample mean function, and the dashed, dot-dashed, and dotted curves correspond to modes of variation with $\alpha = \pm 1, \pm 2,$ and $\pm 3$, respectively.

The first graph displays the first two modes of variation of female mortality data from 41 countries in 2003.[6] The object of interest is the log hazard function of mortality between ages 0 and 100 years. The first mode of variation suggests that the variation of female mortality is smaller for ages around 0 or 100 and larger for ages around 25. An intuitive interpretation is that mortality around age 25 is driven largely by accidental death, while mortality around ages 0 and 100 is related to congenital disease or natural death.

Compared to the female mortality data, the modes of variation of the male mortality data show higher mortality after around age 20, possibly related to the fact that life expectancy is higher for women than for men.

Notes and References

  1. Castro, P. E.; Lawton, W. H.; Sylvestre, E. A. (November 1986). "Principal Modes of Variation for Processes with Continuous Sample Curves". Technometrics. 28 (4): 329. doi:10.2307/1268982. JSTOR 1268982. ISSN 0040-1706.
  2. Wang, Jane-Ling; Chiou, Jeng-Min; Müller, Hans-Georg (June 2016). "Functional Data Analysis". Annual Review of Statistics and Its Application. 3 (1): 257–295. doi:10.1146/annurev-statistics-041715-033624. ISSN 2326-8298.
  3. Kleffe, Jürgen (January 1973). "Principal components of random variables with values in a separable Hilbert space". Mathematische Operationsforschung und Statistik. 4 (5): 391–406. doi:10.1080/02331887308801137. ISSN 0047-6277.
  4. Kirkpatrick, Mark; Heckman, Nancy (August 1989). "A quantitative genetic model for growth, shape, reaction norms, and other infinite-dimensional characters". Journal of Mathematical Biology. 27 (4): 429–450. doi:10.1007/bf00290638. PMID 2769086. S2CID 46336613. ISSN 0303-6812.
  5. Jones, M. C.; Rice, John A. (May 1992). "Displaying the Important Features of Large Collections of Similar Curves". The American Statistician. 46 (2): 140–145. doi:10.1080/00031305.1992.10475870. ISSN 0003-1305.
  6. "Human Mortality Database". www.mortality.org. Retrieved 2020-03-12.