In statistics, multivariate adaptive regression splines (MARS) is a form of regression analysis introduced by Jerome H. Friedman in 1991.[1] It is a non-parametric regression technique and can be seen as an extension of linear models that automatically models nonlinearities and interactions between variables.
The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".[2] [3]
This section introduces MARS using a few examples. We start with a set of data: a matrix of input variables x, and a vector of the observed responses y, with a response for each row in x. For example, the data could be:
x | y
---|---
10.5 | 16.4
10.7 | 18.8
10.8 | 19.7
... | ...
20.6 | 77.0
Here there is only one independent variable, so the x matrix is just a single column. Given these measurements, we would like to build a model which predicts the expected y for a given x.
A linear model for the above data is
\widehat{y} = -37 + 5.1x
The hat on the \widehat{y} indicates that \widehat{y} is estimated from the data. The figure on the right shows a plot of this function: a line giving the predicted \widehat{y} versus x, with the original values of y shown as red dots.
The data at the extremes of x indicate that the relationship between y and x may be non-linear (look at the red dots relative to the regression line at low and high values of x). We thus turn to MARS to automatically build a model taking into account non-linearities. MARS software constructs a model from the given x and y as follows:
\begin{align} \widehat{y}=& 25\\ &{}+6.1max(0,x-13)\\ &{}-3.1max(0,13-x) \end{align}
The figure on the right shows a plot of this function: the predicted \widehat{y} versus x, with the original values of y once again shown as red dots.
MARS has automatically produced a kink in the predicted y to take into account non-linearity. The kink is produced by hinge functions. The hinge functions are the expressions starting with max, where max(a,b) is a if a > b and b otherwise. Hinge functions are described in more detail below.
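As an illustrative sketch (not the output of any MARS implementation; the helper names hinge and predict_simple_model are chosen here only for exposition), the fitted expression above can be transcribed directly into Python:

```python
import numpy as np

def hinge(z):
    """Elementwise max(0, z): the MARS hinge function."""
    return np.maximum(0.0, z)

def predict_simple_model(x):
    """The fitted model y-hat = 25 + 6.1*max(0, x-13) - 3.1*max(0, 13-x)."""
    x = np.asarray(x, dtype=float)
    return 25.0 + 6.1 * hinge(x - 13.0) - 3.1 * hinge(13.0 - x)

# Below the knot at x = 13 only the mirrored term max(0, 13-x) is active;
# above it only max(0, x-13) is active, which produces the kink at x = 13.
print(predict_simple_model([10.5, 13.0, 20.6]))  # [17.25, 25.0, 71.36]
```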
In this simple example, we can easily see from the plot that y has a non-linear relationship with x (and might perhaps guess that y varies with the square of x). However, in general there will be multiple independent variables, and the relationship between y and these variables will be unclear and not easily visible by plotting. We can use MARS to discover that non-linear relationship.
An example MARS expression with multiple variables is
\begin{align} ozone=& 5.2\\ &{}+0.93max(0,temp-58)\\ &{}-0.64max(0,temp-68)\\ &{}-0.046max(0,234-ibt)\\ &{}-0.016max(0,wind-7)max(0,200-vis) \end{align}
This expression models air pollution (the ozone level) as a function of the temperature and a few other variables. Note that the last term in the formula (on the last line) incorporates an interaction between wind and vis. The figure on the right plots the predicted ozone as wind and vis vary, with the other variables fixed at their median values.
To obtain the above expression, the MARS model building procedure automatically selects which variables to use (some variables are important, others not), the positions of the kinks in the hinge functions, and how the hinge functions are combined.
MARS builds models of the form
\widehat{f}(x) = \sum_{i=1}^{k} c_i B_i(x).
The model is a weighted sum of basis functions B_i(x), where each c_i is a constant coefficient. For example, each line in the ozone formula above is one basis function multiplied by its coefficient. Each basis function B_i(x) takes one of the following three forms:
1) a constant 1. There is just one such term, the intercept. In the ozone formula above, the intercept term is 5.2.
2) a hinge function. A hinge function has the form max(0, x − constant) or max(0, constant − x); MARS automatically selects the variables and the knot values for these hinge functions. Examples are the middle three lines of the ozone formula.
3) a product of two or more hinge functions. These basis functions can model interaction between two or more variables. An example is the last line of the ozone formula (a short code sketch follows this list).
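All three forms can be seen in a minimal sketch that transcribes the ozone formula above; the function names hinge and predict_ozone are illustrative only, and the variable names follow the formula:

```python
import numpy as np

def hinge(z):
    return np.maximum(0.0, z)

def predict_ozone(temp, ibt, wind, vis):
    """Weighted sum of basis functions, transcribed from the ozone formula above."""
    return (5.2                                             # form 1: the intercept (constant term)
            + 0.93 * hinge(temp - 58)                       # form 2: single hinge functions
            - 0.64 * hinge(temp - 68)
            - 0.046 * hinge(234 - ibt)
            - 0.016 * hinge(wind - 7) * hinge(200 - vis))   # form 3: product of two hinges (interaction)

print(predict_ozone(temp=75.0, ibt=120.0, wind=10.0, vis=150.0))
```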
A key part of MARS models is the hinge function, which takes the form max(0, x − c) or max(0, c − x), where c is a constant called the knot.
A hinge function is zero for part of its range, so it can be used to partition the data into disjoint regions, each of which can be treated independently. Thus, for example, the mirrored pair of hinge functions in the expression
6.1 max(0, x − 13) − 3.1 max(0, 13 − x)
creates the piecewise linear graph shown for the simple MARS model in the first example above.
One might assume that only piecewise linear functions can be formed from hinge functions, but hinge functions can be multiplied together to form non-linear functions.
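As a toy illustration of this point (unrelated to the example data above), the product of two hinge functions on the same variable grows quadratically, not linearly, wherever both factors are positive:

```python
import numpy as np

def hinge(z):
    return np.maximum(0.0, z)

x = np.array([0.0, 1.5, 2.5, 3.0, 4.0])
# Both factors are positive only for x > 2, and there the product behaves like
# (x - 1)*(x - 2): a curved rather than piecewise-linear shape.
print(hinge(x - 1) * hinge(x - 2))   # [0.  0.  0.75  2.  6.]
```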
Hinge functions are also called ramp, hockey stick, or rectifier functions. Instead of the max notation used in this article, hinge functions are often represented by [\pm(x_i - c)]_+, where [\cdot]_+ means take the positive part.
See also: Stepwise regression.
MARS builds a model in two phases: the forward and the backward pass. This two-stage approach is the same as that used by recursive partitioning trees.
MARS starts with a model which consists of just the intercept term (which is the mean of the response values).
MARS then repeatedly adds basis functions in pairs to the model. At each step it finds the pair of basis functions that gives the maximum reduction in sum-of-squares residual error (it is a greedy algorithm). The two basis functions in the pair are identical except that a different side of a mirrored hinge function is used for each function. Each new basis function consists of a term already in the model (which could perhaps be the intercept term) multiplied by a new hinge function. A hinge function is defined by a variable and a knot, so to add a new basis function, MARS must search over all combinations of the following:
1) existing terms (called parent terms in this context)
2) all variables (to select one for the new basis function)
3) all values of each variable (for the knot of the new hinge function).
To calculate the coefficient of each term, MARS applies a linear regression over the terms.
This process of adding terms continues until the change in residual error is too small to continue or until the maximum number of terms is reached. The maximum number of terms is specified by the user before model building starts.
The search at each step is usually done in a brute-force fashion, but a key aspect of MARS is that because of the nature of hinge functions, the search can be done quickly using a fast least-squares update technique. Brute-force search can be sped up by using a heuristic that reduces the number of parent terms considered at each step ("Fast MARS"[4]).
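The sketch below shows the structure of this search under simplifying assumptions: a single response, candidate knots taken only at observed data values, no limit on the degree of interaction, no Fast MARS heuristic, and a full (slow) least-squares fit at each candidate instead of a fast update. The function names and the stopping rule (a fixed maximum number of terms only) are illustrative, not taken from any particular implementation.

```python
import numpy as np

def hinge(z):
    """Elementwise max(0, z): the MARS hinge function."""
    return np.maximum(0.0, z)

def forward_pass(X, y, max_terms=11):
    """Greedy forward pass: repeatedly add the mirrored pair of basis functions
    parent * max(0, x_v - t) and parent * max(0, t - x_v) that most reduces the
    residual sum of squares, then refit all coefficients by linear regression."""
    n, p = X.shape
    basis = [np.ones(n)]            # basis-function columns; start with the intercept
    labels = ["1"]
    while len(basis) + 2 <= max_terms:
        best = None
        for parent in range(len(basis)):              # 1) existing (parent) terms
            for v in range(p):                        # 2) every variable
                for t in np.unique(X[:, v]):          # 3) every observed value as a candidate knot
                    cand = basis + [basis[parent] * hinge(X[:, v] - t),
                                    basis[parent] * hinge(t - X[:, v])]
                    A = np.column_stack(cand)
                    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                    rss = float(np.sum((y - A @ coef) ** 2))
                    if best is None or rss < best[0]:
                        best = (rss, parent, v, t)
        _, parent, v, t = best
        basis += [basis[parent] * hinge(X[:, v] - t),
                  basis[parent] * hinge(t - X[:, v])]
        labels += [f"{labels[parent]}*max(0,x{v}-{t:g})",
                   f"{labels[parent]}*max(0,{t:g}-x{v})"]
    A = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # final coefficients via linear regression
    return labels, coef
```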
The forward pass usually overfits the model. To build a model with better generalization ability, the backward pass prunes the model, deleting the least effective term at each step until it finds the best submodel. Model subsets are compared using the Generalized cross validation (GCV) criterion described below.
The backward pass has an advantage over the forward pass: at any step it can choose any term to delete, whereas the forward pass at each step can only see the next pair of terms.
The forward pass adds terms in pairs, but the backward pass typically discards one side of the pair, so terms are often not seen in pairs in the final model. A paired hinge can be seen in the equation for \widehat{y} in the first MARS example above; there are no complete pairs retained in the ozone example.
The backward pass compares the performance of different models using Generalized Cross-Validation (GCV), a minor variant on the Akaike information criterion that approximates the leave-one-out cross-validation score in the special case where errors are Gaussian, or where the squared error loss function is used. GCV was introduced by Craven and Wahba and extended by Friedman for MARS; lower values of GCV indicate better models. The formula for the GCV is
GCV = RSS / (N · (1 − (effective number of parameters) / N)²)
where RSS is the residual sum-of-squares measured on the training data and N is the number of observations (the number of rows in the x matrix).
The effective number of parameters is defined as
(effective number of parameters) = (number of MARS terms) + (penalty) · ((number of MARS terms) − 1) / 2
where penalty is typically 2 (giving results equivalent to the Akaike information criterion) but can be increased by the user if they so desire.
Note that
(number of MARS terms − 1) / 2
is the number of hinge-function knots, so the formula penalizes the addition of knots. Thus the GCV formula adjusts (i.e. increases) the training RSS to penalize more complex models. We penalize flexibility because models that are too flexible will model the specific realization of noise in the data instead of just the systematic structure of the data.
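These formulas translate directly into code. The function name gcv and the example numbers below are illustrative only; the default penalty of 2 corresponds to the AIC-equivalent setting mentioned above:

```python
def gcv(rss, n_obs, n_terms, penalty=2.0):
    """Generalized cross-validation score: training RSS inflated by model complexity."""
    effective_params = n_terms + penalty * (n_terms - 1) / 2.0
    return rss / (n_obs * (1.0 - effective_params / n_obs) ** 2)

# The backward pass keeps the submodel with the lowest GCV, so a larger model
# must lower the RSS enough to pay for its extra effective parameters.
print(gcv(rss=120.0, n_obs=50, n_terms=5))   # ~3.57
print(gcv(rss=150.0, n_obs=50, n_terms=3))   # ~3.70
```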
One constraint has already been mentioned: the user can specify the maximum number of terms in the forward pass.
A further constraint can be placed on the forward pass by specifying a maximum allowable degree of interaction. Typically only one or two degrees of interaction are allowed, but higher degrees can be used when the data warrants it. The maximum degree of interaction in the first MARS example above is one (i.e. no interactions or an additive model); in the ozone example it is two.
Other constraints on the forward pass are possible. For example, the user can specify that interactions are allowed only for certain input variables. Such constraints could make sense because of knowledge of the process that generated the data.
No regression modeling technique is best for all situations. The guidelines below are intended to give an idea of the pros and cons of MARS, but there will be exceptions to the guidelines. It is useful to compare MARS to recursive partitioning and this is done below. (Recursive partitioning is also commonly called regression trees, decision trees, or CART; see the recursive partitioning article for details.)
The earth, mda, and polspline implementations do not allow missing values in predictors, but free implementations of regression trees (such as rpart and party) do allow missing values using a technique called surrogate splits.

Several free and commercial software packages are available for fitting MARS-type models.
Free R packages include:
- the earth function in the [https://cran.r-project.org/web/packages/earth/index.html earth] package
- the mars function in the [https://cran.r-project.org/web/packages/mda/index.html mda] package
- the polymars function in the [https://cran.r-project.org/web/packages/polspline/index.html polspline] package (not Friedman's MARS)
- the bass function in the [https://cran.r-project.org/web/packages/BASS/index.html BASS] package, for Bayesian MARS.