Smoothing spline explained

Smoothing splines are function estimates, $\hat f(x)$, obtained from a set of noisy observations $y_i$ of the target $f(x_i)$, in order to balance a measure of goodness of fit of $\hat f(x_i)$ to $y_i$ with a derivative-based measure of the smoothness of $\hat f(x)$. They provide a means for smoothing noisy $(x_i, y_i)$ data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including the case where $x$ is a vector quantity.

Cubic spline definition

Let $\{x_i, Y_i : i = 1, \dots, n\}$ be a set of observations, modeled by the relation $Y_i = f(x_i) + \epsilon_i$, where the $\epsilon_i$ are independent, zero-mean random variables. The cubic smoothing spline estimate $\hat f$ of the function $f$ is defined to be the unique minimizer, in the Sobolev space $W^2_2$ on a compact interval, of[1] [2]

$$\sum_{i=1}^n \{Y_i - \hat f(x_i)\}^2 + \lambda \int \hat f''(x)^2 \, dx.$$
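
To make the criterion concrete, the following minimal sketch (my own code, not from the original text; the function name penalized_criterion is hypothetical, and the integral is approximated on a fine grid by finite differences) evaluates the penalized sum of squares for any candidate fit. The smoothing spline is the function that minimizes this quantity.

```python
import numpy as np

def penalized_criterion(x, y, f_hat, lam, n_grid=1000):
    """Evaluate the penalized sum of squares: residual sum of squares plus
    lam times the integrated squared second derivative of the candidate fit.
    The integral is approximated on a fine grid with finite differences."""
    resid = y - f_hat(x)
    fit_term = np.sum(resid ** 2)
    grid = np.linspace(np.min(x), np.max(x), n_grid)
    vals = f_hat(grid)
    second_deriv = np.gradient(np.gradient(vals, grid), grid)  # approximate f''
    roughness = np.trapz(second_deriv ** 2, grid)              # approximate integral
    return fit_term + lam * roughness
```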

Remarks:

  * $\lambda \ge 0$ is a smoothing parameter, controlling the trade-off between fidelity to the data and roughness of the function estimate. This is often estimated by generalized cross-validation,[3] or by restricted marginal likelihood (REML), which exploits the link between spline smoothing and Bayesian estimation (the smoothing penalty can be viewed as being induced by a prior on $f$).[4]
  * The integral is often evaluated over the whole real line, although it is also possible to restrict the range to that of the $x_i$.
  * As $\lambda \to 0$ (no smoothing), the smoothing spline converges to the interpolating spline.
  * As $\lambda \to \infty$ (infinite smoothing), the roughness penalty becomes paramount and the estimate converges to a linear least squares estimate.
  * In early literature, with equally spaced $x_i$, second- or third-order differences were used in the penalty rather than derivatives.[5]
  * The sum-of-squares term corresponds to a penalized likelihood under a Gaussian assumption on the $\epsilon_i$.

Derivation of the cubic smoothing spline

It is useful to think of fitting a smoothing spline in two steps:

  1. First, derive the values $\hat f(x_i)$, $i = 1, \ldots, n$.
  2. From these values, derive $\hat f(x)$ for all $x$.

Now, treat the second step first.

Given the vector $\hat m = (\hat f(x_1), \ldots, \hat f(x_n))^T$ of fitted values, the sum-of-squares part of the spline criterion is fixed. It remains only to minimize $\int \hat f''(x)^2 \, dx$, and the minimizer is a natural cubic spline that interpolates the points $(x_i, \hat f(x_i))$. This interpolating spline is a linear operator, and can be written in the form

$$\hat f(x) = \sum_{i=1}^n \hat f(x_i) f_i(x)$$

where $f_i(x)$ are a set of spline basis functions. As a result, the roughness penalty has the form

$$\int \hat f''(x)^2 \, dx = \hat m^T A \hat m,$$

where the elements of $A$ are $\int f_i''(x) f_j''(x) \, dx$. The basis functions, and hence the matrix $A$, depend on the configuration of the predictor variables $x_i$, but not on the responses $Y_i$ or $\hat m$.

$A$ is an $n \times n$ matrix given by $A = \Delta^T W^{-1} \Delta$.

$\Delta$ is an $(n-2) \times n$ matrix of second differences with elements:

$$\Delta_{ii} = 1/h_i, \quad \Delta_{i,i+1} = -1/h_i - 1/h_{i+1}, \quad \Delta_{i,i+2} = 1/h_{i+1}$$

$W$ is an $(n-2) \times (n-2)$ symmetric tridiagonal matrix with elements:

$$W_{i-1,i} = W_{i,i-1} = h_i/6, \quad W_{ii} = (h_i + h_{i+1})/3$$

and $h_i = \xi_{i+1} - \xi_i$, the distances between successive knots (or $x$ values).
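
As a concrete illustration of these formulas, the sketch below (my own code, with the hypothetical name second_difference_penalty, assuming the knots are the sorted, distinct $x$ values) builds $\Delta$, $W$ and $A = \Delta^T W^{-1} \Delta$ with NumPy:

```python
import numpy as np

def second_difference_penalty(x):
    """Build Delta ((n-2) x n), W ((n-2) x (n-2)) and A = Delta^T W^{-1} Delta,
    following the formulas above; assumes x is sorted with distinct entries."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    h = np.diff(x)                           # h_i = x_{i+1} - x_i
    Delta = np.zeros((n - 2, n))
    W = np.zeros((n - 2, n - 2))
    for i in range(n - 2):
        Delta[i, i] = 1.0 / h[i]
        Delta[i, i + 1] = -1.0 / h[i] - 1.0 / h[i + 1]
        Delta[i, i + 2] = 1.0 / h[i + 1]
        W[i, i] = (h[i] + h[i + 1]) / 3.0
        if i > 0:
            W[i - 1, i] = W[i, i - 1] = h[i] / 6.0
    A = Delta.T @ np.linalg.solve(W, Delta)  # Delta^T W^{-1} Delta, without an explicit inverse
    return Delta, W, A
```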

Now back to the first step. The penalized sum of squares can be written as

$$\{Y - \hat m\}^T \{Y - \hat m\} + \lambda \hat m^T A \hat m,$$

where $Y = (Y_1, \ldots, Y_n)^T$.

Minimizing over $\hat m$ by differentiating with respect to $\hat m$ and setting the derivative to zero gives[6]

$$-2\{Y - \hat m\} + 2\lambda A \hat m = 0$$

and hence

$$\hat m = (I + \lambda A)^{-1} Y.$$
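
Putting the pieces together, a minimal sketch of the fit (and of the generalized cross-validation score mentioned in the Remarks) might look like the following, reusing the hypothetical second_difference_penalty helper above:

```python
import numpy as np

def fit_smoothing_spline_values(x, y, lam):
    """Fitted values m_hat = (I + lam * A)^{-1} y at the data points."""
    _, _, A = second_difference_penalty(x)
    n = len(x)
    return np.linalg.solve(np.eye(n) + lam * A, np.asarray(y, dtype=float))

def gcv_score(x, y, lam):
    """Generalized cross-validation score n * RSS / (n - trace(S))^2,
    where S = (I + lam * A)^{-1} is the hat matrix of this linear smoother."""
    y = np.asarray(y, dtype=float)
    _, _, A = second_difference_penalty(x)
    n = len(x)
    S = np.linalg.inv(np.eye(n) + lam * A)
    resid = y - S @ y
    return n * np.sum(resid ** 2) / (n - np.trace(S)) ** 2

# Example usage on synthetic data:
# rng = np.random.default_rng(0)
# x = np.sort(rng.uniform(0.0, 1.0, 50))
# y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=50)
# m_hat = fit_smoothing_spline_values(x, y, lam=1e-3)
```

Note that production implementations exploit the banded structure of these matrices to compute the fit in $O(n)$ operations; the dense code above is only meant to mirror the formulas.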

De Boor's approach

De Boor's approach exploits the same idea, of finding a balance between having a smooth curve and being close to the given data.[7]

$$p \sum_{i=1}^n \left( \frac{Y_i - \hat f(x_i)}{\delta_i} \right)^2 + (1 - p) \int \left( \hat f^{(m)}(x) \right)^2 \, dx$$

where $p$ is a parameter called the smoothing factor, which belongs to the interval $[0,1]$, and $\delta_i$, $i = 1, \ldots, n$, are the quantities controlling the extent of smoothing (they represent the weight $\delta_i^{-2}$ of each point $Y_i$). In practice, since cubic splines are mostly used, $m$ is usually $2$. The solution for $m = 2$ was proposed by Christian Reinsch in 1967. For $m = 2$, as $p$ approaches $1$, $\hat f$ converges to the "natural" spline interpolant of the given data. As $p$ approaches $0$, $\hat f$ converges to a straight line (the smoothest curve). Since finding a suitable value of $p$ is a task of trial and error, a redundant constant $S$ was introduced for convenience.[8]

$S$ is used to numerically determine the value of $p$ so that the function $\hat f$ meets the following condition:

$$\sum_{i=1}^n \left( \frac{Y_i - \hat f(x_i)}{\delta_i} \right)^2 \le S.$$

The algorithm described by de Boor starts with $p = 0$ and increases $p$ until the condition is met. If $\delta_i$ is an estimate of the standard deviation of $Y_i$, the constant $S$ is recommended to be chosen in the interval $\left[n - \sqrt{2n},\; n + \sqrt{2n}\right]$. Setting $S = 0$ yields the "natural" spline interpolant; increasing $S$ produces a smoother curve that lies farther from the given data.
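
A crude sketch of this trial-and-error search is given below (my own code, assuming unit weights $\delta_i = 1$ and reusing the hypothetical fit_smoothing_spline_values helper from above; for fixed $p > 0$, minimizing $p \cdot \text{RSS} + (1-p) \cdot \text{roughness}$ is equivalent to the earlier criterion with $\lambda = (1-p)/p$). De Boor's actual algorithm uses a more refined numerical search for $p$.

```python
import numpy as np

def choose_p(x, y, S, step=0.01):
    """Increase p from (near) 0 until the residual sum of squares
    drops to S or below; returns the chosen p and the fitted values."""
    y = np.asarray(y, dtype=float)
    for p in np.arange(step, 1.0, step):
        lam = (1.0 - p) / p                      # equivalent lambda for this p
        m_hat = fit_smoothing_spline_values(x, y, lam)
        if np.sum((y - m_hat) ** 2) <= S:
            return p, m_hat
    # p -> 1 corresponds to (near) interpolation of the data
    return 1.0, fit_smoothing_spline_values(x, y, 0.0)
```

In practice a library routine such as scipy.interpolate.UnivariateSpline(x, y, s=S), which wraps FITPACK smoothing routines built around a condition of the same form on the residual sum of squares, would typically be used instead of hand-rolled code like this.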

Multidimensional splines

There are two main classes of method for generalizing from smoothing with respect to a scalar $x$ to smoothing with respect to a vector $x$. The first approach simply generalizes the spline smoothing penalty to the multidimensional setting. For example, when trying to estimate $f(x,z)$ we might use the thin plate spline penalty and find the $\hat f(x,z)$ minimizing

$$\sum_{i=1}^n \{y_i - \hat f(x_i, z_i)\}^2 + \lambda \int \left[ \left( \frac{\partial^2 \hat f}{\partial x^2} \right)^2 + 2 \left( \frac{\partial^2 \hat f}{\partial x \partial z} \right)^2 + \left( \frac{\partial^2 \hat f}{\partial z^2} \right)^2 \right] \mathrm{d}x \, \mathrm{d}z.$$

The thin plate spline approach can be generalized to smoothing with respect to more than two dimensions and to other orders of differentiation in the penalty.[1] As the dimension increases there are some restrictions on the smallest order of differential that can be used,[1] although Duchon's original paper[9] gives slightly more complicated penalties that can avoid this restriction.
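
As a quick practical sketch (not part of the original text), SciPy's RBFInterpolator offers a thin plate spline kernel whose smoothing argument plays a role analogous to $\lambda$ above; the example below assumes a recent SciPy (1.7 or later) and uses synthetic data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic observations at scattered (x, z) locations.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(200, 2))                 # columns: x and z
values = np.sin(2 * np.pi * points[:, 0]) * points[:, 1] \
         + rng.normal(scale=0.1, size=200)

# Smoothed thin plate spline surface; larger `smoothing` gives a smoother fit.
tps = RBFInterpolator(points, values, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the fitted surface on a regular grid of (x, z) points.
grid_axes = np.linspace(0.0, 1.0, 50)
grid = np.stack(np.meshgrid(grid_axes, grid_axes), axis=-1).reshape(-1, 2)
f_hat = tps(grid)
```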

The thin plate splines are isotropic, meaning that if we rotate the $x,z$ coordinate system the estimate will not change, but also that we are assuming the same level of smoothing is appropriate in all directions. This is often considered reasonable when smoothing with respect to spatial location, but in many other cases isotropy is not an appropriate assumption and can lead to sensitivity to apparently arbitrary choices of measurement units. For example, when smoothing with respect to distance and time, an isotropic smoother will give different results if distance is measured in metres and time in seconds than if the units are changed to centimetres and hours.

The second class of generalizations to multi-dimensional smoothing deals directly with this scale invariance issue using tensor product spline constructions.[10] [11] [12] Such splines have smoothing penalties with multiple smoothing parameters, which is the price that must be paid for not assuming that the same degree of smoothness is appropriate in all directions.

Related methods

See also: Curve fitting. Smoothing splines are related to, but distinct from, other spline-based smoothers such as regression splines (fitted by least squares using a reduced set of knots, with no roughness penalty) and penalized splines (P-splines), which combine a reduced basis with a roughness penalty.[13] [14]

Source code

Source code for spline smoothing can be found in the examples from Carl de Boor's book A Practical Guide to Splines. The examples are in the Fortran programming language. The updated sources are also available on Carl de Boor's official site http://pages.cs.wisc.edu/~deboor/.


Notes and References

  1. Green, P. J.; Silverman, B. W. (1994). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. Chapman and Hall.
  2. Hastie, T. J.; Tibshirani, R. J. (1990). Generalized Additive Models. Chapman and Hall. ISBN 978-0-412-34390-2.
  3. Craven, P.; Wahba, G. (1979). "Smoothing noisy data with spline functions". Numerische Mathematik. 31 (4): 377–403. doi:10.1007/bf01404567.
  4. Kimeldorf, G. S.; Wahba, G. (1970). "A Correspondence between Bayesian Estimation on Stochastic Processes and Smoothing by Splines". The Annals of Mathematical Statistics. 41 (2): 495–502. doi:10.1214/aoms/1177697089.
  5. Whittaker, E. T. (1922). "On a new method of graduation". Proceedings of the Edinburgh Mathematical Society. 41: 63–75.
  6. Rodriguez, German (Spring 2001). "Smoothing and Non-Parametric Regression", section 2.3.1 Computation, p. 12. Retrieved 28 April 2024.
  7. De Boor, C. (2001). A Practical Guide to Splines (Revised Edition). Springer. pp. 207–214. ISBN 978-0-387-90356-9.
  8. Reinsch, Christian H. (1967). "Smoothing by Spline Functions". Numerische Mathematik. 10 (3): 177–183. doi:10.1007/BF02162161.
  9. Duchon, J. (1977). "Splines minimizing rotation invariant semi-norms in Sobolev spaces". In: W. Schempp and K. Zeller (eds.), Constructive Theory of Functions of Several Variables, Oberwolfach 1976. Lecture Notes in Math., Vol. 571. Springer, Berlin. pp. 85–100.
  10. Wahba, Grace. Spline Models for Observational Data. SIAM.
  11. Gu, Chong (2013). Smoothing Spline ANOVA Models (2nd ed.). Springer.
  12. Wood, S. N. (2017). Generalized Additive Models: An Introduction with R (2nd ed.). Chapman & Hall/CRC. ISBN 978-1-58488-474-3.
  13. Eilers, P. H. C.; Marx, B. (1996). "Flexible smoothing with B-splines and penalties". Statistical Science. 11 (2): 89–121.
  14. Ruppert, David; Wand, M. P.; Carroll, R. J. (2003). Semiparametric Regression. Cambridge University Press. ISBN 978-0-521-78050-6.