Functional data analysis

Functional data analysis (FDA) is a branch of statistics that analyses data providing information about curves, surfaces or anything else varying over a continuum. In its most general form, under an FDA framework, each sample element of functional data is considered to be a random function. The physical continuum over which these functions are defined is often time, but may also be spatial location, wavelength, probability, etc. Intrinsically, functional data are infinite dimensional. The high intrinsic dimensionality of these data brings challenges for theory as well as computation, where these challenges vary with how the functional data were sampled. However, the high or infinite dimensional structure of the data is a rich source of information and there are many interesting challenges for research and data analysis.

History

Functional data analysis has roots going back to work by Grenander and Karhunen in the 1940s and 1950s.[1] [2] [3] [4] They considered the decomposition of square-integrable continuous-time stochastic processes into eigencomponents, now known as the Karhunen–Loève decomposition. A rigorous analysis of functional principal component analysis was carried out in the 1970s by Kleffe, Dauxois and Pousse, including results on the asymptotic distribution of the eigenvalues.[5] [6] More recently, in the 1990s and 2000s, the field has focused more on applications and on understanding the effects of dense and sparse observation schemes. The term "functional data analysis" was coined by James O. Ramsay.[7]

Mathematical formalism

Random functions can be viewed as random elements taking values in a Hilbert space, or as a stochastic process. The former is mathematically convenient, whereas the latter is somewhat more suitable from an applied perspective. These two approaches coincide if the random functions are continuous and a condition called mean square continuity is satisfied.[8]

Hilbertian random variables

In the Hilbert space viewpoint, one considers an $H$-valued random element $X$, where $H$ is a separable Hilbert space such as the space of square-integrable functions $L^2[0,1]$. Under the integrability condition that $\mathbb{E}\|X\|_{L^2}^2 = \mathbb{E}\left(\int_0^1 |X(t)|^2 \, dt\right) < \infty$, one can define the mean of $X$ as the unique element $\mu \in H$ satisfying

$$\mathbb{E}\langle X, h\rangle = \langle \mu, h\rangle, \quad h \in H.$$

This formulation is the Pettis integral, but the mean can also be defined as the Bochner integral $\mu = \mathbb{E}X$. Under the integrability condition that $\mathbb{E}\|X\|_{L^2}^2$ is finite, the covariance operator of $X$ is a linear operator $\mathcal{C} \colon H \to H$ that is uniquely defined by the relation

$$\mathcal{C}h = \mathbb{E}[\langle h, X - \mu\rangle (X - \mu)], \quad h \in H,$$

or, in tensor form, $\mathcal{C} = \mathbb{E}[(X - \mu) \otimes (X - \mu)]$. The spectral theorem allows $X$ to be decomposed as the Karhunen–Loève decomposition

$$X = \mu + \sum_{i=1}^\infty \langle X - \mu, \varphi_i\rangle \varphi_i,$$

where $\varphi_i$ are the eigenvectors of $\mathcal{C}$ corresponding to its nonnegative eigenvalues, taken in non-increasing order. Truncating this infinite series to a finite order underpins functional principal component analysis.
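As a simple worked illustration of these definitions (constructed here for orientation, not taken from the cited sources), let $f \in L^2[0,1]$ with $\|f\|_{L^2} = 1$ and let $X(t) = A f(t)$ for a scalar random amplitude $A \sim \mathcal{N}(1, \sigma^2)$. Then

$$\mu = \mathbb{E}[A]\, f = f, \qquad \mathcal{C}h = \mathbb{E}\left[(A-1)^2\right] \langle h, f\rangle f = \sigma^2 \langle h, f\rangle f,$$

so $\mathcal{C}$ is a rank-one operator with the single eigenpair $(\sigma^2, f)$, and the Karhunen–Loève decomposition terminates after one term: $X = f + \langle X - \mu, f\rangle f = f + (A-1)f$.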

Stochastic processes

The Hilbertian point of view is mathematically convenient, but abstract; the above considerations do not necessarily even view $X$ as a function at all, since common choices of $H$ like $L^2[0,1]$ and Sobolev spaces consist of equivalence classes, not functions. The stochastic process perspective views $X$ as a collection of random variables $\{X(t)\}_{t \in [0,1]}$ indexed by the unit interval (or, more generally, an interval $\mathcal{T}$). The mean and covariance functions are defined in a pointwise manner as

$$\mu(t) = \mathbb{E}X(t), \quad \Sigma(s,t) = \mathrm{Cov}(X(s), X(t)), \quad s, t \in [0,1]$$

(provided $\mathbb{E}[X(t)^2] < \infty$ for all $t \in [0,1]$).

Under mean square continuity, $\mu$ and $\Sigma$ are continuous functions, and the covariance function $\Sigma$ then defines a covariance operator $\mathcal{C} \colon H \to H$ given by

$$(\mathcal{C}f)(t) = \int_0^1 \Sigma(s,t) f(s) \, ds.$$

The spectral theorem applies to $\mathcal{C}$, yielding eigenpairs $(\lambda_j, \varphi_j)$, so that in tensor product notation

$$\mathcal{C} = \sum_{j=1}^\infty \lambda_j \varphi_j \otimes \varphi_j.$$

Moreover, since $\mathcal{C}f$ is continuous for all $f \in H$, all the $\varphi_j$ are continuous. Mercer's theorem then states that

$$\sup_{s,t \in [0,1]} \left| \Sigma(s,t) - \sum_{j=1}^K \lambda_j \varphi_j(s) \varphi_j(t) \right| \to 0, \quad K \to \infty.$$

Finally, under the extra assumption that $X$ has continuous sample paths, namely that with probability one the random function $X \colon [0,1] \to \mathbb{R}$ is continuous, the Karhunen–Loève expansion above holds for $X$ and the Hilbert space machinery can be subsequently applied. Continuity of sample paths can be shown using the Kolmogorov continuity theorem.
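A classical closed-form example (a standard textbook illustration added here, not part of the cited material) is standard Brownian motion on $[0,1]$, for which $\Sigma(s,t) = \min(s,t)$. Solving the eigenequation $\int_0^1 \min(s,t)\, \varphi(s) \, ds = \lambda \varphi(t)$ gives

$$\lambda_j = \frac{1}{(j - \tfrac{1}{2})^2 \pi^2}, \qquad \varphi_j(t) = \sqrt{2}\, \sin\!\left((j - \tfrac{1}{2})\pi t\right), \qquad j = 1, 2, \ldots,$$

so the Karhunen–Loève expansion takes the explicit form $X(t) = \sum_{j=1}^\infty \sqrt{\lambda_j}\, Z_j \varphi_j(t)$ with i.i.d. standard normal $Z_j$. Brownian motion has continuous (though nowhere differentiable) sample paths, so the pathwise considerations above apply.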

Functional data designs

Functional data are considered as realizations of a stochastic process $X(t)$, $t \in [0,1]$, that is an $L^2$ process on a bounded and closed interval $[0,1]$, with mean function $\mu(t) = \mathbb{E}(X(t))$ and covariance function $\Sigma(s,t) = \mathrm{Cov}(X(s), X(t))$. The realization of the process for the $i$-th subject is $X_i(\cdot)$, and the sample is assumed to consist of $n$ independent subjects. The sampling schedule may vary across subjects and is denoted $T_{i1}, \ldots, T_{iN_i}$ for the $i$-th subject. The corresponding $i$-th observation is denoted $\mathbf{X}_i = (X_{i1}, \ldots, X_{iN_i})$, where $X_{ij} = X_i(T_{ij})$. In addition, the measurement of $X_{ij}$ is assumed to carry random noise $\epsilon_{ij}$ with $\mathbb{E}(\epsilon_{ij}) = 0$ and $\mathrm{Var}(\epsilon_{ij}) = \sigma_{ij}^2$, independent across $i$ and $j$.

1. Fully observed functions without noise at arbitrarily dense grid

Measurements $Y_{it} = X_i(t)$ available for all $t \in \mathcal{I}$, $i = 1, \ldots, n$. Often unrealistic but mathematically convenient.

Real-life example: Tecator spectral data.[7]

2. Densely sampled functions with noisy measurements (dense design)

Measurements $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$, where the $T_{ij}$ are recorded on a regular grid $T_{i1}, \ldots, T_{iN_i}$ with $N_i \to \infty$; this applies to typical functional data.

Real-life example: Berkeley Growth Study data and stock data.

3. Sparsely sampled functions with noisy measurements (longitudinal data)

Measurements $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$, where the $T_{ij}$ are random times and their number $N_i$ per subject is random and finite.

Real-life example: CD4 count data for AIDS patients.[9]
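The dense and sparse designs can be mimicked in simulation. The following minimal sketch (illustrative only; the underlying process, noise level, and grid sizes are arbitrary choices rather than anything from the cited studies) generates noisy dense observations on a regular grid and sparse observations at random times for the same kind of underlying trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory(a, b):
    # A simple two-component random trajectory X_i(t) (an arbitrary choice).
    return lambda t: a * np.sin(2 * np.pi * t) + b * np.cos(2 * np.pi * t)

n, sigma = 5, 0.1  # number of subjects and noise standard deviation

# Dense design: a shared regular grid with N_i = 101 points per subject.
t_dense = np.linspace(0, 1, 101)
dense = []
# Sparse design: N_i random and small, T_ij uniform on [0, 1].
sparse = []
for i in range(n):
    X_i = trajectory(rng.normal(0, 1), rng.normal(0, 0.5))
    dense.append(X_i(t_dense) + rng.normal(0, sigma, t_dense.size))
    N_i = rng.integers(3, 8)                 # random, finite N_i
    T_i = np.sort(rng.uniform(0, 1, N_i))    # random observation times
    sparse.append((T_i, X_i(T_i) + rng.normal(0, sigma, N_i)))

print(np.array(dense).shape)        # (5, 101): dense, noisy curves
print([len(T) for T, _ in sparse])  # irregular, subject-specific schedules
```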

Functional principal component analysis

Functional principal component analysis (FPCA) is the most prevalent tool in FDA, partly because FPCA facilitates dimension reduction of the inherently infinite-dimensional functional data to a finite-dimensional random vector of scores. More specifically, dimension reduction is achieved by expanding the underlying observed random trajectories $X_i(t)$ in a functional basis consisting of the eigenfunctions of the covariance operator of $X$. Consider the covariance operator $\mathcal{C} \colon L^2[0,1] \to L^2[0,1]$ defined above, which is a compact operator on a Hilbert space.

By Mercer's theorem, the kernel of $\mathcal{C}$, i.e., the covariance function $\Sigma(\cdot,\cdot)$, has the spectral decomposition

$$\Sigma(s,t) = \sum_{k=1}^\infty \lambda_k \varphi_k(s) \varphi_k(t),$$

where the series converges absolutely and uniformly, the $\lambda_k$ are real-valued nonnegative eigenvalues in descending order, and the $\varphi_k(t)$ are the corresponding orthonormal eigenfunctions. By the Karhunen–Loève theorem, the FPCA expansion of an underlying random trajectory is

$$X_i(t) = \mu(t) + \sum_{k=1}^\infty A_{ik} \varphi_k(t),$$

where

$$A_{ik} = \int_0^1 (X_i(t) - \mu(t)) \varphi_k(t) \, dt$$

are the functional principal components (FPCs), sometimes referred to as scores. The Karhunen–Loève expansion facilitates dimension reduction in the sense that the partial sum converges uniformly, i.e.,

$$\sup_{t \in [0,1]} \mathbb{E}\left[\left(X_i(t) - \mu(t) - \sum_{k=1}^K A_{ik} \varphi_k(t)\right)^2\right] \to 0 \quad \text{as } K \to \infty,$$

and thus a partial sum with large enough $K$ yields a good approximation to the infinite sum. Thereby, the information in $X_i$ is reduced from infinite-dimensional to a $K$-dimensional vector $A_i = (A_{i1}, \ldots, A_{iK})$ with the approximated process

$$X_i^{(K)}(t) = \mu(t) + \sum_{k=1}^K A_{ik} \varphi_k(t).$$

Other popular bases include spline, Fourier series and wavelet bases. Important applications of FPCA include the modes of variation and functional principal component regression.
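On a dense common grid, FPCA is often approximated by an eigendecomposition of the discretized sample covariance matrix. The sketch below is a minimal illustration under that discretization (simulated rank-two curves; not a production estimator such as those in the cited packages): it estimates the mean, eigenvalues, eigenfunctions, and scores, and forms the rank-$K$ truncation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated sample: n curves on a common grid of m points.
n, m = 200, 101
t = np.linspace(0, 1, m)
dt = t[1] - t[0]
# Underlying process: two orthogonal components with decaying variances.
X = (rng.normal(0, 2.0, (n, 1)) * np.sqrt(2) * np.sin(2 * np.pi * t)
     + rng.normal(0, 1.0, (n, 1)) * np.sqrt(2) * np.cos(2 * np.pi * t))

mu = X.mean(axis=0)                      # pointwise mean function
Xc = X - mu                              # centered trajectories
Sigma = Xc.T @ Xc / n                    # discretized covariance Sigma(s, t)

# Eigendecomposition; the dt rescaling approximates the L^2 inner product.
evals, evecs = np.linalg.eigh(Sigma * dt)
order = np.argsort(evals)[::-1]
lam = evals[order]                       # eigenvalues, non-increasing
phi = evecs[:, order].T / np.sqrt(dt)    # eigenfunctions, L^2-normalized

K = 2
A = Xc @ phi[:K].T * dt                  # scores A_ik = <X_i - mu, phi_k>
X_K = mu + A @ phi[:K]                   # rank-K Karhunen-Loeve truncation
print(lam[:3])                           # variances of the leading FPCs
print(np.max(np.abs(X - X_K)))           # tiny error: the sample is rank two
```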

Functional linear regression models

Functional linear models can be viewed as an extension of the traditional multivariate linear models that associate vector responses with vector covariates. The traditional linear model with scalar response $Y \in \mathbb{R}$ and vector covariate $X \in \mathbb{R}^p$ can be expressed as

$$Y = \beta_0 + \langle X, \beta\rangle + \varepsilon,$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product in Euclidean space, $\beta_0 \in \mathbb{R}$ and $\beta \in \mathbb{R}^p$ denote the regression coefficients, and $\varepsilon$ is a zero-mean, finite-variance random error (noise). Functional linear models can be divided into two types based on the responses.

Functional regression models with scalar response

Replacing the vector covariate $X$ and the coefficient vector $\beta$ in the model above by a centered functional covariate $X^c(t) = X(t) - \mu(t)$ and a coefficient function $\beta = \beta(t)$ for $t \in [0,1]$, and replacing the inner product in Euclidean space by that in the Hilbert space $L^2$, one arrives at the functional linear model

$$Y = \beta_0 + \int_0^1 X^c(t) \beta(t) \, dt + \varepsilon.$$

The simple functional linear model can be extended to multiple functional covariates $\{X_j\}_{j=1}^p$, also including additional vector covariates $Z = (Z_1, \ldots, Z_q)$ with $Z_1 = 1$, by

$$Y = \langle Z, \theta\rangle + \sum_{j=1}^p \int_0^1 X_j^c(t) \beta_j(t) \, dt + \varepsilon,$$

where $\theta \in \mathbb{R}^q$ is the regression coefficient for $Z$, the domain of $X_j$ is $[0,1]$, $X_j^c$ is the centered functional covariate given by $X_j^c(t) = X_j(t) - \mu_j(t)$, and $\beta_j$ is the regression coefficient function for $X_j^c$, for $j = 1, \ldots, p$. These models have been studied extensively.[10] [11] [12]
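A common estimation route for the scalar-response model is to truncate to the first $K$ FPCs, regress $Y$ on the estimated scores, and map the fitted coefficients back to an estimate of $\beta(t)$. The following sketch assumes a dense common grid and simulated data with a known $\beta$ (both arbitrary choices for illustration, not one of the cited estimators).

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 300, 101
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Functional covariate built from two known components (arbitrary choice).
phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * t),
                 np.sqrt(2) * np.cos(2 * np.pi * t)])
A = rng.normal(0, 1, (n, 2)) * np.array([2.0, 1.0])   # true scores
X = A @ phi

beta_true = 1.5 * phi[0] - 0.5 * phi[1]               # true beta(t)
Y = X @ beta_true * dt + rng.normal(0, 0.1, n)        # Y = <X, beta> + noise

# FPCA step: eigenbasis of the sample covariance (see the FPCA sketch above).
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / n * dt)
order = np.argsort(evals)[::-1]
phi_hat = evecs[:, order[:2]].T / np.sqrt(dt)         # K = 2 eigenfunctions

# Regress Y on the estimated scores; b_k are basis coefficients of beta.
S = Xc @ phi_hat.T * dt                               # estimated scores
b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), S]), Y, rcond=None)
beta_hat = b[1:] @ phi_hat                            # estimated beta(t)
print(np.max(np.abs(beta_hat - beta_true)))           # small estimation error
```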

Functional regression models with functional response

Consider a functional response $Y(s)$ on $[0,1]$ and multiple functional covariates $X_j(t)$, $t \in [0,1]$, $j = 1, \ldots, p$. Two major models have been considered in this setup.[13] [7] One of the two, generally referred to as the functional linear model (FLM), can be written as

$$Y(s) = \alpha_0(s) + \sum_{j=1}^p \int_0^1 \alpha_j(s,t) X_j^c(t) \, dt + \varepsilon(s),$$

where $\alpha_0(s)$ is the functional intercept; for $j = 1, \ldots, p$, $X_j^c(t) = X_j(t) - \mu_j(t)$ is a centered functional covariate on $[0,1]$ and $\alpha_j(s,t)$ is the corresponding functional slope with the same domain; and $\varepsilon(s)$ is usually a random process with mean zero and finite variance.[13] In this case, at any given time $s \in [0,1]$, the value of $Y$, i.e., $Y(s)$, depends on the entire trajectories $\{X_j(t) : t \in [0,1]\}_{j=1}^p$. This model has been studied extensively.[14] [15] [16] [17]

Function-on-scalar regression

In particular, taking $X_j(\cdot)$ to be a constant function yields a special case of the model above,

$$Y(s) = \alpha_0(s) + \sum_{j=1}^p X_j \alpha_j(s) + \varepsilon(s), \quad s \in [0,1],$$

which is a functional linear model with functional responses and scalar covariates.

Concurrent regression models

This model is given by

$$Y(s) = \beta_0(s) + \sum_{j=1}^p \beta_j(s) X_j(s) + \varepsilon(s),$$

where $X_1, \ldots, X_p$ are functional covariates on $[0,1]$, $\beta_0, \beta_1, \ldots, \beta_p$ are the coefficient functions defined on the same interval, and $\varepsilon(s)$ is usually assumed to be a random process with mean zero and finite variance.[13] This model assumes that the value of $Y(s)$ depends only on the current values $\{X_j(s)\}_{j=1}^p$ and not on the history $\{X_j(t) : t \le s\}_{j=1}^p$ or on future values. Hence, it is a "concurrent regression model", also referred to as a "varying-coefficient" model. Various estimation methods have been proposed.[18] [19] [20] [21] [22] [23]
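Because the concurrent model links $Y(s)$ to the covariate values at the same time $s$, a simple baseline estimator on a dense common grid is pointwise least squares at each grid point, optionally followed by smoothing the fitted coefficient curves. The sketch below illustrates that baseline on simulated data (it is not one of the cited estimation methods).

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 400, 51
s = np.linspace(0, 1, m)

# Two functional covariates and time-varying coefficients (arbitrary choices).
X1 = rng.normal(0, 1, (n, m))
X2 = rng.normal(0, 1, (n, m))
beta0, beta1, beta2 = np.sin(np.pi * s), 1 + s, np.cos(np.pi * s)
Y = beta0 + beta1 * X1 + beta2 * X2 + rng.normal(0, 0.2, (n, m))

# Pointwise least squares: one small regression per grid point s_k.
est = np.empty((3, m))
for k in range(m):
    D = np.column_stack([np.ones(n), X1[:, k], X2[:, k]])
    est[:, k], *_ = np.linalg.lstsq(D, Y[:, k], rcond=None)

print(np.max(np.abs(est[1] - beta1)))  # pointwise estimates track beta1(s)
```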

Functional nonlinear regression models

Direct nonlinear extensions of the classical functional linear regression models (FLMs) still involve a linear predictor, but combine it with a nonlinear link function, analogous to the way the generalized linear model extends the conventional linear model. Developments towards fully nonparametric regression models for functional data encounter problems such as the curse of dimensionality. In order to bypass the "curse" and the metric selection problem, one is led to nonlinear functional regression models that are subject to some structural constraints but do not overly restrict flexibility. Desirable models retain polynomial rates of convergence while being more flexible than, say, functional linear models. Such models are particularly useful when diagnostics for the functional linear model indicate lack of fit, which is often encountered in real-life situations. In particular, functional polynomial models, functional single and multiple index models, and functional additive models are three special cases of functional nonlinear regression models.

Functional polynomial regression models

Functional polynomial regression models may be viewed as a natural extension of the functional linear models (FLMs) with scalar responses, analogous to extending the linear regression model to the polynomial regression model. For a scalar response $Y$ and a functional covariate $X(\cdot)$ with domain $[0,1]$ and corresponding centered predictor process $X^c$, the simplest and most prominent member of the family of functional polynomial regression models is the quadratic functional regression[24] given by

$$\mathbb{E}(Y \mid X) = \alpha + \int_0^1 \beta(t) X^c(t) \, dt + \int_0^1 \int_0^1 \gamma(s,t) X^c(s) X^c(t) \, ds \, dt,$$

where $X^c(\cdot) = X(\cdot) - \mathbb{E}(X(\cdot))$ is the centered functional covariate, $\alpha$ is a scalar coefficient, and $\beta(\cdot)$ and $\gamma(\cdot,\cdot)$ are coefficient functions with domains $[0,1]$ and $[0,1] \times [0,1]$, respectively. In addition to the parameter function $\beta$ that this functional quadratic regression model shares with the FLM, it also features a parameter surface $\gamma$. By analogy to FLMs with scalar responses, estimation of functional polynomial models can be obtained by expanding both the centered covariate $X^c$ and the coefficient functions $\beta$ and $\gamma$ in an orthonormal basis.[25]
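To make the basis-expansion step concrete (a standard reduction, sketched here in the eigenbasis of the covariance operator): writing $X^c = \sum_k A_k \varphi_k$, $\beta = \sum_k b_k \varphi_k$ and $\gamma(s,t) = \sum_{j,k} g_{jk} \varphi_j(s) \varphi_k(t)$, orthonormality of the $\varphi_k$ reduces the quadratic functional regression to

$$\mathbb{E}(Y \mid X) = \alpha + \sum_{k} b_k A_k + \sum_{j,k} g_{jk} A_j A_k,$$

so that, after truncation at $K$ components, the model becomes an ordinary polynomial (quadratic) regression in the score vector $(A_1, \ldots, A_K)$.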

Functional single and multiple index models

A functional multiple index model is given below, with symbols having their usual meanings as formerly described:

$$\mathbb{E}(Y \mid X) = g\left(\int_0^1 X^c(t) \beta_1(t) \, dt, \ldots, \int_0^1 X^c(t) \beta_p(t) \, dt \right).$$

Here $g$ represents an (unknown) general smooth function defined on a $p$-dimensional domain. The case $p = 1$ yields a functional single index model, while multiple index models correspond to the case $p > 1$. However, for $p > 1$, this model is problematic due to the curse of dimensionality. With $p > 1$ and relatively small sample sizes, the estimator given by this model often has large variance.[26] [27]

Functional additive models (FAMs)

For a given orthonormal basis $\{\phi_k\}_{k=1}^\infty$ of $L^2[0,1]$, we can expand $X^c(t) = \sum_{k=1}^\infty x_k \phi_k(t)$ on the domain $[0,1]$.

A functional linear model with scalar responses (see above) can thus be written as

$$\mathbb{E}(Y \mid X) = \mathbb{E}(Y) + \sum_{k=1}^\infty \beta_k x_k.$$

One form of FAMs is obtained by replacing the linear function of $x_k$ in the above expression (i.e., $\beta_k x_k$) by a general smooth function $f_k$, analogous to the extension of multiple linear regression models to additive models, and is expressed as

$$\mathbb{E}(Y \mid X) = \mathbb{E}(Y) + \sum_{k=1}^\infty f_k(x_k),$$

where $f_k$ satisfies $\mathbb{E}(f_k(x_k)) = 0$ for $k \in \mathbb{N}$.[13] This constraint on the general smooth functions $f_k$ ensures identifiability, in the sense that the estimates of these additive component functions do not interfere with that of the intercept term $\mathbb{E}(Y)$. Another form of FAM is the continuously additive model,[28] expressed as

$$\mathbb{E}(Y \mid X) = \mathbb{E}(Y) + \int_0^1 g(t, X(t)) \, dt$$

for a bivariate smooth additive surface $g \colon [0,1] \times \mathbb{R} \to \mathbb{R}$ which is required to satisfy $\mathbb{E}[g(t, X(t))] = 0$ for all $t \in [0,1]$, in order to ensure identifiability.

Generalized functional linear model

An obvious and direct extension of FLMs with scalar responses (see above) is to add a link function, leading to a generalized functional linear model (GFLM),[29] in analogy to the generalized linear model (GLM). The three components of the GFLM are:

1. Linear predictor $\eta = \beta_0 + \int_0^1 X^c(t) \beta(t) \, dt$ [systematic component];
2. Variance function $\mathrm{Var}(Y \mid X) = V(\mu)$, where $\mu = \mathbb{E}(Y \mid X)$ is the conditional mean [random component];
3. Link function $g$ connecting the conditional mean $\mu$ and the linear predictor $\eta$ through $\mu = g(\eta)$ [link function].
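For a binary response with the logit link, the GFLM reduces, after FPCA truncation, to an ordinary logistic regression on the score vector. The sketch below assumes that reduction, simulated data, and $K = 2$ (illustrative choices throughout); it uses scikit-learn's LogisticRegression for the GLM step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

n, m = 500, 101
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * t),
                 np.sqrt(2) * np.cos(2 * np.pi * t)])
A = rng.normal(0, 1, (n, 2)) * np.array([2.0, 1.0])
X = A @ phi                                   # functional covariate

beta = phi[0] - phi[1]                        # true coefficient function
eta = -0.5 + X @ beta * dt                    # linear predictor eta_i
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))   # logit link: mu = g(eta)

# Dimension reduction: scores on the leading eigenfunctions (K = 2).
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / n * dt)
phi_hat = evecs[:, np.argsort(evals)[::-1][:2]].T / np.sqrt(dt)
S = Xc @ phi_hat.T * dt

clf = LogisticRegression().fit(S, y)          # GLM step on the scores
beta_hat = clf.coef_[0] @ phi_hat             # back to a coefficient function
print(clf.intercept_, np.max(np.abs(beta_hat - beta)))  # roughly recovers beta
```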

Clustering and classification of functional data

For vector-valued multivariate data, k-means partitioning methods and hierarchical clustering are two main approaches. These classical clustering concepts have been extended to functional data. For clustering of functional data, k-means clustering methods are more popular than hierarchical clustering methods. For k-means clustering on functional data, mean functions are usually regarded as the cluster centers; covariance structures have also been taken into consideration.[30] Besides k-means-type clustering, clustering based on mixture models[31] is also widely used for vector-valued multivariate data and has been extended to functional data clustering.[32] [33] [34] [35] [36] Furthermore, Bayesian hierarchical clustering also plays an important role in the development of model-based functional clustering.[37] [38] [39] [40]
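A common concrete recipe for k-means functional clustering is to project the curves onto a few leading eigenfunctions and run standard k-means on the resulting score vectors. The sketch below illustrates this reduction on simulated two-group data (the group structure, noise level, and $K$ are arbitrary choices, not any of the cited procedures).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

n, m = 100, 101
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Two groups of curves with different mean functions (arbitrary choice).
labels_true = rng.integers(0, 2, n)
means = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
X = means[labels_true] + rng.normal(0, 0.3, (n, m))

# Project onto the leading eigenfunctions of the sample covariance (K = 2).
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / n * dt)
phi_hat = evecs[:, np.argsort(evals)[::-1][:2]].T / np.sqrt(dt)
S = Xc @ phi_hat.T * dt                  # FPC score vectors

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(S)
# Cluster labels agree with the truth up to label permutation.
agreement = max(np.mean(km.labels_ == labels_true),
                np.mean(km.labels_ != labels_true))
print(agreement)
```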

Functional classification assigns a group membership to a new data object based either on functional regression or on functional discriminant analysis. Functional data classification methods based on functional regression models use class labels as responses and the observed functional data and other covariates as predictors. For regression-based functional classification models, functional generalized linear models, or more specifically functional binary regression such as functional logistic regression for binary responses, are commonly used classification approaches. More generally, the generalized functional linear regression model based on the FPCA approach is used.[41] Functional linear discriminant analysis (FLDA) has also been considered as a classification method for functional data.[42] [43] [44] [45] [46] Functional data classification involving density ratios has also been proposed.[47] A study of the asymptotic behavior of the proposed classifiers in the large sample limit shows that, under certain conditions, the misclassification rate converges to zero, a phenomenon referred to as "perfect classification".[48]
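As a minimal baseline for orientation (a nearest-centroid rule in the $L^2$ distance; far simpler than the cited FLDA and density-ratio classifiers), functional classification can be sketched as follows.

```python
import numpy as np

rng = np.random.default_rng(8)

m = 101
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

def sample_curves(n, mean):
    # Noisy curves scattered around a class-specific mean function.
    return mean + rng.normal(0, 0.3, (n, m))

# Training curves from two classes with different mean functions.
mean0, mean1 = np.sin(2 * np.pi * t), np.sin(2 * np.pi * t) + 0.5 * t
train0, train1 = sample_curves(50, mean0), sample_curves(50, mean1)
centroids = np.vstack([train0.mean(axis=0), train1.mean(axis=0)])

def classify(curve):
    # Assign to the class whose mean curve is closest in L^2 distance.
    d2 = ((curve - centroids) ** 2).sum(axis=1) * dt
    return int(np.argmin(d2))

test = sample_curves(20, mean1)                   # all from class 1
print(np.mean([classify(c) == 1 for c in test]))  # high accuracy
```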

Time warping

Motivations

In addition to amplitude variation,[49] time variation may also be present in functional data. Time variation occurs when the subject-specific timing of certain events of interest varies among subjects. One classical example is the Berkeley Growth Study data,[50] where the amplitude variation is the growth rate and the time variation explains the difference in children's biological ages at which the pubertal and pre-pubertal growth spurts occur. In the presence of time variation, the cross-sectional mean function may not be an efficient estimate, as peaks and troughs are located randomly, and meaningful signals may thus be distorted or hidden.

Time warping, also known as curve registration,[51] curve alignment or time synchronization, aims to identify and separate amplitude variation and time variation. If both time and amplitude variation are present, then the observed functional data $Y_i$ can be modeled as $Y_i(t) = X_i[h_i^{-1}(t)]$, $t \in [0,1]$, where $X_i \overset{iid}{\sim} X$ is a latent amplitude function and $h_i \overset{iid}{\sim} h$ is a latent time warping function that corresponds to a cumulative distribution function. The time warping functions $h$ are assumed to be invertible and to satisfy $\mathbb{E}(h^{-1}(t)) = t$.

The simplest family of warping functions used to specify phase variation consists of linear transformations, that is, $h(t) = \delta + \gamma t$, which warp the time of an underlying template function by a subject-specific shift and scale. A more general class of warping functions comprises diffeomorphisms of the domain to itself, that is, loosely speaking, a class of invertible functions that map the compact domain to itself such that both the function and its inverse are smooth. The set of linear transformations is contained in the set of diffeomorphisms.[52] One challenge in time warping is the identifiability of amplitude and phase variation; specific assumptions are required to break this non-identifiability.

Methods

Earlier approaches include dynamic time warping (DTW), used for applications such as speech recognition.[53] Another traditional method for time warping is landmark registration,[54] [55] which aligns special features such as peak locations to an average location. Other relevant warping methods include pairwise warping,[56] registration using the $\mathcal{L}^2$ distance, and elastic warping.[57]

Dynamic time warping

The template function is determined through an iterative process: starting from the cross-sectional mean, registration is performed and the cross-sectional mean of the warped curves is recalculated, with convergence expected after a few iterations. DTW minimizes a cost function through dynamic programming. Problems of non-smooth warps or greedy computation in DTW can be resolved by adding a regularization term to the cost function.

Landmark registration

Landmark registration (or feature alignment) assumes that well-expressed features are present in all sample curves and uses the locations of such features as a gold standard. Special features such as peak or trough locations in the functions or their derivatives are aligned to their average locations on the template function. The warping function is then introduced through a smooth transformation from the average location to the subject-specific locations. A problem with landmark registration is that the features may be missing or hard to identify due to noise in the data.
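A bare-bones version of landmark registration with a single landmark (the peak location) can be built from piecewise-linear warps that carry each subject's peak time to the average peak time. The sketch below is illustrative only (one landmark, linear interpolation) and not the smooth-transformation estimators of the cited literature.

```python
import numpy as np

rng = np.random.default_rng(6)

n, m = 6, 201
t = np.linspace(0, 1, m)

# Curves sharing a peak whose timing varies across subjects (time variation).
peaks = rng.uniform(0.35, 0.65, n)
Y = np.exp(-((t[None, :] - peaks[:, None]) / 0.08) ** 2)

# Landmark: subject-specific peak location and its average.
peak_hat = t[np.argmax(Y, axis=1)]
peak_bar = peak_hat.mean()

aligned = np.empty_like(Y)
for i in range(n):
    # Piecewise-linear warp h_i with h_i(0) = 0, h_i(peak_bar) = peak_hat[i],
    # h_i(1) = 1; evaluate each curve at the warped times.
    h_i = np.interp(t, [0, peak_bar, 1], [0, peak_hat[i], 1])
    aligned[i] = np.interp(h_i, t, Y[i])

# After registration all peaks sit at the average landmark location.
print(t[np.argmax(aligned, axis=1)], peak_bar)
```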

Extensions

So far we have considered scalar-valued stochastic processes, $\{X(t)\}_{t \in [0,1]}$, defined on a one-dimensional time domain.

Multidimensional domain of $X(\cdot)$

The domain of $X(\cdot)$ can be in $\mathbb{R}^p$; for example, the data could be a sample of random surfaces.[58] [59]

Multivariate stochastic process

The range set of the stochastic process may be extended from $\mathbb{R}$ to $\mathbb{R}^p$[60] [61] [62] and further to nonlinear manifolds,[63] Hilbert spaces,[64] and eventually to metric spaces.

Python packages

There are Python packages for working with functional data: representing it, performing exploratory analysis and preprocessing, and carrying out tasks such as inference, classification, regression, and clustering of functional data.
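For instance, the scikit-fda package (skfda) offers a grid-based functional data container and FPCA. The sketch below follows its documented interface, but the location of the FPCA class has moved between skfda versions, so treat the exact import paths and signatures as assumptions to check against the installed version.

```python
import numpy as np
import skfda
from skfda.preprocessing.dim_reduction import FPCA  # path may vary by version

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 101)
data = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (20, 101))

fd = skfda.FDataGrid(data_matrix=data, grid_points=t)  # discretized curves
scores = FPCA(n_components=2).fit_transform(fd)        # leading FPC scores
print(scores.shape)  # (20, 2)
```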

R packages

Some packages can handle functional data under both dense and longitudinal designs.

Notes and References

1. Grenander, U. (1950). "Stochastic processes and statistical inference". Arkiv för Matematik. 1(3): 195–277. doi:10.1007/BF02590638.
2. Rice, JA; Silverman, BW (1991). "Estimating the mean and covariance structure nonparametrically when the data are curves". Journal of the Royal Statistical Society. 53(1): 233–243. doi:10.1111/j.2517-6161.1991.tb01821.x.
3. Müller, HG (2016). "Peter Hall, functional data analysis and random objects". Annals of Statistics. 44(5): 1867–1887. doi:10.1214/16-AOS1492.
4. Karhunen, K (1946). Zur Spektraltheorie stochastischer Prozesse. Annales Academiae Scientiarum Fennicae.
5. Kleffe, J (1973). "Principal components of random variables with values in a separable Hilbert space". Mathematische Operationsforschung und Statistik. 4(5): 391–406. doi:10.1080/02331887308801137.
6. Dauxois, J; Pousse, A; Romain, Y (1982). "Asymptotic theory for the principal component analysis of a vector random function: Some applications to statistical inference". Journal of Multivariate Analysis. 12(1): 136–154. doi:10.1016/0047-259X(82)90088-4.
7. Ramsay, J; Silverman, BW (2005). Functional Data Analysis (2nd ed.). Springer.
8. Hsing, T; Eubank, R (2015). Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators. Wiley Series in Probability and Statistics. Wiley.
9. Shi, M; Weiss, RE; Taylor, JMG (1996). "An analysis of paediatric CD4 counts for acquired immune deficiency syndrome using flexible random curves". Journal of the Royal Statistical Society, Series C (Applied Statistics). 45(2): 151–163.
10. Hilgert, N; Mas, A; Verzelen, N (2013). "Minimax adaptive tests for the functional linear model". Annals of Statistics. 41(2): 838–869. doi:10.1214/13-AOS1093. arXiv:1206.1194.
11. Kong, D; Xue, K; Yao, F; Zhang, HH (2016). "Partially functional linear regression in high dimensions". Biometrika. 103(1): 147–159. doi:10.1093/biomet/asv062.
12. Horváth, L; Kokoszka, P (2012). Inference for Functional Data with Applications. Springer Series in Statistics. Springer-Verlag.
13. Wang, JL; Chiou, JM; Müller, HG (2016). "Functional data analysis". Annual Review of Statistics and Its Application. 3(1): 257–295. doi:10.1146/annurev-statistics-041715-033624.
14. Ramsay, JO; Dalzell, CJ (1991). "Some tools for functional data analysis". Journal of the Royal Statistical Society, Series B (Methodological). 53(3): 539–561. doi:10.1111/j.2517-6161.1991.tb01844.x.
15. Malfait, N; Ramsay, JO (2003). "The historical functional linear model". The Canadian Journal of Statistics. 31(2): 115–128. doi:10.2307/3316063.
16. He, G; Müller, HG; Wang, JL (2003). "Functional canonical analysis for square integrable stochastic processes". Journal of Multivariate Analysis. 85(1): 54–77. doi:10.1016/S0047-259X(02)00056-8.
17. He, G; Müller, HG; Wang, JL; Yang, WJ (2010). "Functional linear regression via canonical analysis". Bernoulli. 16(3): 705–729. doi:10.3150/09-BEJ228. arXiv:1102.5212.
18. Fan, J; Zhang, W (1999). "Statistical estimation in varying coefficient models". The Annals of Statistics. 27(5): 1491–1518. doi:10.1214/aos/1017939139.
19. Wu, CO; Yu, KF (2002). "Nonparametric varying-coefficient models for the analysis of longitudinal data". International Statistical Review. 70(3): 373–393. doi:10.1111/j.1751-5823.2002.tb00176.x.
20. Huang, JZ; Wu, CO; Zhou, L (2002). "Varying-coefficient models and basis function approximations for the analysis of repeated measurements". Biometrika. 89(1): 111–128. doi:10.1093/biomet/89.1.111.
21. Huang, JZ; Wu, CO; Zhou, L (2004). "Polynomial spline estimation and inference for varying coefficient models with longitudinal data". Statistica Sinica. 14(3): 763–788.
22. Şentürk, D; Müller, HG (2010). "Functional varying coefficient models for longitudinal data". Journal of the American Statistical Association. 105(491): 1256–1264. doi:10.1198/jasa.2010.tm09228.
23. Eggermont, PPB; Eubank, RL; LaRiccia, VN (2010). "Convergence rates for smoothing spline estimators in varying coefficient models". Journal of Statistical Planning and Inference. 140(2): 369–381. doi:10.1016/j.jspi.2009.06.017.
24. Yao, F; Müller, HG (2010). "Functional quadratic regression". Biometrika. 97(1): 49–64.
25. Horváth, L; Reeder, R (2013). "A test of significance in functional quadratic regression". Bernoulli. 19(5A): 2120–2151. doi:10.3150/12-BEJ446. arXiv:1105.0014.
26. Chen, D; Hall, P; Müller, HG (2011). "Single and multiple index functional regression models with nonparametric link". The Annals of Statistics. 39(3): 1720–1747.
27. Jiang, CR; Wang, JL (2011). "Functional single index models for longitudinal data". The Annals of Statistics. 39(1): 362–388.
28. Müller, HG; Wu, Y; Yao, F (2013). "Continuously additive models for nonlinear functional regression". Biometrika. 100(3): 607–622. doi:10.1093/biomet/ast004.
29. Müller, HG; Stadtmüller, U (2005). "Generalized functional linear models". The Annals of Statistics. 33(2): 774–805. doi:10.1214/009053604000001156. arXiv:math/0505638.
30. Chiou, JM; Li, PL (2007). "Functional clustering and identifying substructures of longitudinal data". Journal of the Royal Statistical Society, Series B (Statistical Methodology). 69(4): 679–699. doi:10.1111/j.1467-9868.2007.00605.x.
31. Banfield, JD; Raftery, AE (1993). "Model-based Gaussian and non-Gaussian clustering". Biometrics. 49(3): 803–821. doi:10.2307/2532201.
32. James, GM; Sugar, CA (2003). "Clustering for sparsely sampled functional data". Journal of the American Statistical Association. 98(462): 397–408. doi:10.1198/016214503000189.
33. Jacques, J; Preda, C (2013). "Funclust: A curves clustering method using functional random variables density approximation". Neurocomputing. 112: 164–171. doi:10.1016/j.neucom.2012.11.042.
34. Jacques, J; Preda, C (2014). "Model-based clustering for multivariate functional data". Computational Statistics & Data Analysis. 71: 92–106. doi:10.1016/j.csda.2012.12.004.
35. Coffey, N; Hinde, J; Holian, E (2014). "Clustering longitudinal profiles using P-splines and mixed effects models applied to time-course gene expression data". Computational Statistics & Data Analysis. 71: 14–29. doi:10.1016/j.csda.2013.04.001.
36. Heinzl, F; Tutz, G (2014). "Clustering in linear-mixed models with a group fused lasso penalty". Biometrical Journal. 56(1): 44–68. doi:10.1002/bimj.201200111.
37. Angelini, C; De Canditiis, D; Pensky, M (2012). "Clustering time-course microarray data using functional Bayesian infinite mixture model". Journal of Applied Statistics. 39(1): 129–149. doi:10.1080/02664763.2011.578620.
38. Rodríguez, A; Dunson, DB; Gelfand, AE (2009). "Bayesian nonparametric functional data analysis through density estimation". Biometrika. 96(1): 149–162. doi:10.1093/biomet/asn054.
39. Petrone, S; Guindani, M; Gelfand, AE (2009). "Hybrid Dirichlet mixture models for functional data". Journal of the Royal Statistical Society. 71(4): 755–782. doi:10.1111/j.1467-9868.2009.00708.x.
40. Heinzl, F; Tutz, G (2013). "Clustering in linear mixed models with approximate Dirichlet process mixtures using EM algorithm". Statistical Modelling. 13(1): 41–67. doi:10.1177/1471082X12471372.
41. Leng, X; Müller, HG (2006). "Classification using functional data analysis for temporal gene expression data". Bioinformatics. 22(1): 68–76. doi:10.1093/bioinformatics/bti742.
42. James, GM; Hastie, TJ (2001). "Functional linear discriminant analysis for irregularly sampled curves". Journal of the Royal Statistical Society. 63(3): 533–550. doi:10.1111/1467-9868.00297.
43. Hall, P; Poskitt, DS; Presnell, B (2001). "A functional data-analytic approach to signal discrimination". Technometrics. 43(1): 1–9. doi:10.1198/00401700152404273.
44. Ferraty, F; Vieu, P (2003). "Curves discrimination: a nonparametric functional approach". Computational Statistics & Data Analysis. 44(1–2): 161–173. doi:10.1016/S0167-9473(03)00032-X.
45. Chang, C; Chen, Y; Ogden, RT (2014). "Functional data classification: a wavelet approach". Computational Statistics. 29(6): 1497–1513. doi:10.1007/s00180-014-0503-4.
46. Zhu, H; Brown, PJ; Morris, JS (2012). "Robust classification of functional and quantitative image data using functional mixed models". Biometrics. 68(4): 1260–1268. doi:10.1111/j.1541-0420.2012.01765.x.
47. Dai, X; Müller, HG; Yao, F (2017). "Optimal Bayes classifiers for functional data and density ratios". Biometrika. 104(3): 545–560. arXiv:1605.03707.
48. Delaigle, A; Hall, P (2012). "Achieving near perfect classification for functional data". Journal of the Royal Statistical Society, Series B (Statistical Methodology). 74(2): 267–286. doi:10.1111/j.1467-9868.2011.01003.x.
49. Wang, JL; Chiou, JM; Müller, HG (2016). "Functional data analysis". Annual Review of Statistics and Its Application. 3(1): 257–295. doi:10.1146/annurev-statistics-041715-033624.
50. Gasser, T; Müller, HG; Kohler, W; Molinari, L; Prader, A (1984). "Nonparametric regression analysis of growth curves". The Annals of Statistics. 12(1): 210–229.
51. Ramsay, JO; Li, X (1998). "Curve registration". Journal of the Royal Statistical Society, Series B. 60(2): 351–363. doi:10.1111/1467-9868.00129.
52. Marron, JS; Ramsay, JO; Sangalli, LM; Srivastava, A (2015). "Functional data analysis of amplitude and phase variation". Statistical Science. 30(4): 468–484. doi:10.1214/15-STS524. arXiv:1512.03216.
53. Sakoe, H; Chiba, S (1978). "Dynamic programming algorithm optimization for spoken word recognition". IEEE Transactions on Acoustics, Speech, and Signal Processing. 26: 43–49. doi:10.1109/TASSP.1978.1163055.
54. Kneip, A; Gasser, T (1992). "Statistical tools to analyze data representing a sample of curves". Annals of Statistics. 20(3): 1266–1305. doi:10.1214/aos/1176348769.
55. Gasser, T; Kneip, A (1995). "Searching for structure in curve samples". Journal of the American Statistical Association. 90(432): 1179–1188.
56. Tang, R; Müller, HG (2008). "Pairwise curve synchronization for functional data". Biometrika. 95(4): 875–889. doi:10.1093/biomet/asn047.
57. Anirudh, R; Turaga, P; Su, J; Srivastava, A (2015). "Elastic functional coding of human actions: From vector-fields to latent variables". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 3147–3155.
58. Dubey, P; Müller, HG (2021). "Modeling time-varying random objects and dynamic networks". Journal of the American Statistical Association. 117(540): 2252–2267. doi:10.1080/01621459.2021.1917416. arXiv:2104.04628.
59. Pigoli, D; Hadjipantelis, PZ; Coleman, JS; Aston, JAD (2017). "The statistical analysis of acoustic phonetic data: exploring differences between spoken Romance languages". Journal of the Royal Statistical Society, Series C (Applied Statistics). 67(5): 1130–1145.
60. Happ, C; Greven, S (2018). "Multivariate functional principal component analysis for data observed on different (dimensional) domains". Journal of the American Statistical Association. 113(522): 649–659. doi:10.1080/01621459.2016.1273115. arXiv:1509.02029.
61. Chiou, JM; Yang, YF; Chen, YT (2014). "Multivariate functional principal component analysis: a normalization approach". Statistica Sinica. 24: 1571–1596.
62. Carroll, C; Müller, HG; Kneip, A (2021). "Cross-component registration for multivariate functional data, with application to growth curves". Biometrics. 77(3): 839–851. doi:10.1111/biom.13340. arXiv:1811.01429.
63. Dai, X; Müller, HG (2018). "Principal component analysis for functional data on Riemannian manifolds and spheres". The Annals of Statistics. 46(6B): 3334–3361. doi:10.1214/17-AOS1660. arXiv:1705.06226.
64. Chen, K; Delicado, P; Müller, HG (2017). "Modelling function-valued stochastic processes, with applications to fertility dynamics". Journal of the Royal Statistical Society, Series B (Statistical Methodology). 79(1): 177–196. doi:10.1111/rssb.12160.
65. Yao, F; Müller, HG; Wang, JL (2005). "Functional data analysis for sparse longitudinal data". Journal of the American Statistical Association. 100(470): 577–590. doi:10.1198/016214504000001745.