Quantile regression is a type of regression analysis used in statistics and econometrics. Whereas the method of least squares estimates the conditional mean of the response variable across values of the predictor variables, quantile regression estimates the conditional median (or other quantiles) of the response variable. Quantile regression is thus an extension of linear regression that is used when the assumptions of linear regression are not met, or when quantiles of the conditional distribution, rather than its mean, are of interest.
One advantage of quantile regression relative to ordinary least squares regression is that the quantile regression estimates are more robust against outliers in the response measurements. However, the main attraction of quantile regression goes beyond robustness: it is advantageous whenever conditional quantile functions themselves are of interest. Different measures of central tendency and statistical dispersion can then be used to analyze the relationship between variables more comprehensively.[1]
In ecology, quantile regression has been proposed and used as a way to discover more useful predictive relationships between variables in cases where there is no relationship or only a weak relationship between the means of such variables. The need for and success of quantile regression in ecology has been attributed to the complexity of interactions between different factors leading to data with unequal variation of one variable for different ranges of another variable.[2]
Another application of quantile regression is in the areas of growth charts, where percentile curves are commonly used to screen for abnormal growth.[3] [4]
The idea of estimating a median regression slope, a major theorem about minimizing the sum of absolute deviations, and a geometric algorithm for constructing median regression were proposed in 1760 by Ruđer Josip Bošković, a Jesuit Catholic priest from Dubrovnik.[5] He was interested in the ellipticity of the Earth, building on Isaac Newton's suggestion that its rotation could cause it to bulge at the equator with a corresponding flattening at the poles.[6] He eventually produced the first geometric procedure for determining the equator of a rotating planet from three observations of a surface feature. More importantly for quantile regression, he developed the first formulation of the least absolute deviations criterion, preceding the method of least squares introduced by Legendre in 1805 by fifty years.[7]
Other thinkers built upon Bošković's idea, such as Pierre-Simon Laplace, who developed the so-called "méthode de situation". This led to Francis Edgeworth's plural median[8] (a geometric approach to median regression), which is recognized as a precursor of the simplex method. The works of Bošković, Laplace, and Edgeworth were recognized as a prelude to Roger Koenker's contributions to quantile regression.
Median regression computations for larger data sets are quite tedious compared to the least squares method, which is why the approach was historically unpopular among statisticians until the widespread adoption of computers in the latter part of the 20th century.
Quantile regression expresses the conditional quantiles of a dependent variable as a linear function of the explanatory variables. Crucial to the practicality of quantile regression is that the quantiles can be expressed as the solution of a minimization problem, as we will show in this section before discussing conditional quantiles in the next section.
See main article: Quantile function. Let $Y$ be a real-valued random variable with cumulative distribution function $F_Y(y)=P(Y\le y)$. The $\tau$th quantile of $Y$ is given by
$$q_Y(\tau)=F_Y^{-1}(\tau)=\inf\left\{y : F_Y(y)\ge\tau\right\},$$
where $\tau\in(0,1)$.
Define the loss function as $\rho_\tau(m)=m\,(\tau-\mathbb{I}(m<0))$, where $\mathbb{I}$ is the indicator function. A specific quantile can be found by minimizing the expected loss of $Y-u$ with respect to $u$:
$$q_Y(\tau)=\underset{u}{\operatorname{arg\,min}}\,E\big(\rho_\tau(Y-u)\big)
=\underset{u}{\operatorname{arg\,min}}\left\{(\tau-1)\int_{-\infty}^{u}(y-u)\,dF_Y(y)+\tau\int_{u}^{\infty}(y-u)\,dF_Y(y)\right\}.$$
This can be shown by computing the derivative of the expected loss with respect to $u$, setting it to zero, and letting $q_\tau$ be the solution of
$$0=(1-\tau)\int_{-\infty}^{q_\tau}dF_Y(y)-\tau\int_{q_\tau}^{\infty}dF_Y(y).$$
This equation reduces to
$$0=F_Y(q_\tau)-\tau,$$
so that
$$F_Y(q_\tau)=\tau.$$
Hence $q_\tau$ is the $\tau$th quantile of the random variable $Y$.
Let $Y$ be a discrete random variable that takes the values $y_i=i$ for $i=1,2,\dots,9$, each with probability 1/9. The task is to find the median of $Y$, so $\tau=0.5$ is chosen. Then the expected loss of $Y-u$ is
$$L(u)=E\big(\rho_\tau(Y-u)\big)=\frac{\tau-1}{9}\sum_{y_i<u}(y_i-u)+\frac{\tau}{9}\sum_{y_i\ge u}(y_i-u)
=\frac{0.5}{9}\left(-\sum_{y_i<u}(y_i-u)+\sum_{y_i\ge u}(y_i-u)\right).$$
Since $0.5/9$ is a constant, it can be taken out of the expected loss function (this is only true when $\tau=0.5$). Then, at $u=3$,
$$L(3)\propto\sum_{i=1}^{2}-(i-3)+\sum_{i=3}^{9}(i-3)=[(2+1)+(0+1+2+\dots+6)]=24.$$
Suppose that $u$ is increased by one unit. Then the expected loss changes by $(3)-(6)=-3$ upon moving $u$ to 4, because three points lie below the new value and six lie at or above it. If, instead, $u=5$, the expected loss is
$$L(5)\propto\sum_{i=1}^{4}i+\sum_{i=0}^{4}i=20,$$
and any further change in $u$ increases the expected loss. Thus $u=5$ is the median. The table below gives the expected loss (divided by the constant $0.5/9$) for different values of $u$.
u | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |
Expected loss | 36 | 29 | 24 | 21 | 20 | 21 | 24 | 29 | 36 |
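The table can be reproduced numerically. The short sketch below (assuming only NumPy is available) evaluates the expected loss, rescaled by the constant $0.5/9$ as in the text, at each candidate value of $u$:

```python
import numpy as np

y = np.arange(1, 10)   # y_i = i for i = 1, ..., 9, each with probability 1/9

def scaled_loss(u):
    # Expected loss L(u) divided by the constant 0.5/9 (valid because tau = 0.5 here).
    return -np.sum(y[y < u] - u) + np.sum(y[y >= u] - u)

print([int(scaled_loss(u)) for u in range(1, 10)])
# [36, 29, 24, 21, 20, 21, 24, 29, 36] -> minimized at u = 5, the median of Y
```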
Consider $\tau=0.5$ and let $q$ be an initial guess for $q_\tau$. The expected loss evaluated at $q$ is
$$L(q)=-0.5\int_{-\infty}^{q}(y-q)\,dF_Y(y)+0.5\int_{q}^{\infty}(y-q)\,dF_Y(y).$$
In order to minimize the expected loss, we move the value of $q$ a little and check whether the expected loss rises or falls. Suppose we increase $q$ by one unit. Then the change in the expected loss is proportional to
$$\int_{-\infty}^{q}1\,dF_Y(y)-\int_{q}^{\infty}1\,dF_Y(y).$$
The first term of the equation is $F_Y(q)$ and the second term is $1-F_Y(q)$. Therefore the change in the expected loss is negative if and only if $F_Y(q)<0.5$, that is, if and only if $q$ is smaller than the median. Similarly, if we reduce $q$ by one unit, the change in the expected loss is negative if and only if $q$ is larger than the median.
In order to minimize the expected loss function, we would therefore increase (decrease) $q$ whenever $q$ is smaller (larger) than the median, until $q$ reaches the median. The idea behind the minimization is to count the number of points (weighted with the density) that are larger or smaller than $q$ and then move $q$ to the point where $q$ is larger than $100\tau\%$ of the points.
The $\tau$th sample quantile can be obtained by solving the following minimization problem:
$$\hat{q}_\tau=\underset{q\in\mathbb{R}}{\operatorname{arg\,min}}\sum_{i=1}^{n}\rho_\tau(y_i-q)
=\underset{q\in\mathbb{R}}{\operatorname{arg\,min}}\left[(\tau-1)\sum_{y_i<q}(y_i-q)+\tau\sum_{y_i\ge q}(y_i-q)\right],$$
where the function $\rho_\tau$ is the tilted absolute value function.
The $\tau$th conditional quantile of $Y$ given $X$ is the $\tau$th quantile of the conditional probability distribution of $Y$ given $X$,
$$Q_{Y|X}(\tau)=\inf\left\{y : F_{Y|X}(y)\ge\tau\right\},$$
where $F_{Y|X}(y)$ is the conditional distribution function.
In quantile regression for the $\tau$th quantile we make the assumption that the $\tau$th conditional quantile is a linear function of the explanatory variables:
$$Q_{Y|X}(\tau)=X\beta_\tau.$$
Given the distribution function of $Y$, $\beta_\tau$ can be obtained by solving
$$\beta_\tau=\underset{\beta\in\mathbb{R}^{k}}{\operatorname{arg\,min}}\,E\big(\rho_\tau(Y-X\beta)\big).$$
Solving the sample analog gives the estimator of $\beta$:
$$\hat{\beta}_\tau=\underset{\beta\in\mathbb{R}^{k}}{\operatorname{arg\,min}}\sum_{i=1}^{n}\rho_\tau\big(Y_i-X_i\beta\big).$$
Note that when $\tau=0.5$, the loss function $\rho_\tau$ is proportional to the absolute value function, and thus median regression is the same as linear regression by least absolute deviations.
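As an illustration of the estimator above, the following sketch fits several conditional quantiles with the QuantReg class from statsmodels (one of the packages listed in the software section below); the heteroscedastic data-generating process is invented purely for the example:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
x = rng.uniform(0, 10, size=n)
# Heteroscedastic data: the spread of y grows with x, so the quantile slopes differ across tau.
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + 0.25 * x, size=n)

X = sm.add_constant(x)                   # design matrix with an intercept column
for tau in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(y, X).fit(q=tau)   # minimizes the tilted absolute value loss
    print(tau, fit.params)               # intercept and slope estimates at each quantile
```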
The mathematical forms arising from quantile regression are distinct from those arising in the method of least squares. The method of least squares leads to a consideration of problems in an inner product space, involving projection onto subspaces, and thus the problem of minimizing the squared errors can be reduced to a problem in numerical linear algebra. Quantile regression does not have this structure, and instead the minimization problem can be reformulated as a linear programming problem
$$\underset{\beta,\,u^{+},\,u^{-}\,\in\,\mathbb{R}^{k}\times\mathbb{R}_{+}^{2n}}{\min}\left\{\tau\,1_n^{\top}u^{+}+(1-\tau)\,1_n^{\top}u^{-}\;\Big|\;X\beta+u^{+}-u^{-}=Y\right\},$$
where $u_j^{+}=\max(u_j,0)$ and $u_j^{-}=-\min(u_j,0)$ are the positive and negative parts of the residual $u_j=Y_j-X_j\beta$.
Simplex methods or interior point methods can be applied to solve the linear programming problem.
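As a sketch of this reformulation, the program can also be handed to a generic LP solver such as scipy.optimize.linprog; the stacking of the decision vector as $(\beta,u^{+},u^{-})$ and the simulated data are choices made only for this illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, tau = 200, 2, 0.75
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])    # intercept plus one regressor
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)

# Decision vector z = (beta, u_plus, u_minus); minimize tau*1'u_plus + (1 - tau)*1'u_minus.
c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])                # X beta + u_plus - u_minus = y
bounds = [(None, None)] * k + [(0, None)] * (2 * n)         # beta free, u_plus and u_minus >= 0

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print(res.x[:k])   # quantile regression coefficients for tau = 0.75
```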
For $\tau\in(0,1)$, under some regularity conditions, $\hat{\beta}_\tau$ is asymptotically normal:
$$\sqrt{n}\big(\hat{\beta}_\tau-\beta_\tau\big)\ \overset{d}{\to}\ N\!\big(0,\ \tau(1-\tau)D^{-1}\Omega_x D^{-1}\big),$$
where $D=E\big(f_Y(X\beta)XX^{\prime}\big)$ and $\Omega_x=E\big(X^{\prime}X\big)$.
Direct estimation of the asymptotic variance-covariance matrix is not always satisfactory. Inference about quantile regression parameters can instead be made with regression rank-score tests or with bootstrap methods.[9]
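For instance, a pairs bootstrap (resampling observations with replacement and refitting) gives simple, if computationally heavier, standard errors. The sketch below uses statsmodels on simulated data and is only illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, tau = 500, 0.5
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + 0.25 * x, size=n)
X = sm.add_constant(x)

beta_hat = sm.QuantReg(y, X).fit(q=tau).params

# Pairs bootstrap: resample (x_i, y_i) rows with replacement and refit the quantile regression.
B = 200
draws = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    draws[b] = sm.QuantReg(y[idx], X[idx]).fit(q=tau).params

print("estimates:", beta_hat)
print("bootstrap standard errors:", draws.std(axis=0, ddof=1))
```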
See invariant estimator for background on invariance or see equivariance.
Scale equivariance: for any $a>0$ and $\tau\in[0,1]$,
$$\hat{\beta}(\tau;aY,X)=a\hat{\beta}(\tau;Y,X),\qquad \hat{\beta}(\tau;-aY,X)=-a\hat{\beta}(1-\tau;Y,X).$$
Shift equivariance: for any $\gamma\in\mathbb{R}^{k}$ and $\tau\in[0,1]$,
$$\hat{\beta}(\tau;Y+X\gamma,X)=\hat{\beta}(\tau;Y,X)+\gamma.$$
Equivariance to reparameterization of design: let $A$ be any $p\times p$ nonsingular matrix and $\tau\in[0,1]$. Then
$$\hat{\beta}(\tau;Y,XA)=A^{-1}\hat{\beta}(\tau;Y,X).$$
Invariance to monotone transformations: if $h$ is a nondecreasing function on $\mathbb{R}$, then
$$h\big(Q_{Y|X}(\tau)\big)\equiv Q_{h(Y)|X}(\tau),$$
that is, conditional quantiles commute with monotone transformations of the response.
Example (1): If $W=\exp(Y)$ and $Q_{Y|X}(\tau)=X\beta_\tau$, then $Q_{W|X}(\tau)=\exp(X\beta_\tau)$. Mean regression does not have the same property, since $\operatorname{E}(\ln(Y))\ne\ln(\operatorname{E}(Y))$.
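A quick numerical check of this equivariance, using unconditional sample quantiles of a simulated standard normal sample (the example is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
tau = 0.9

# Quantiles commute with the nondecreasing map exp: Q_{exp(Y)}(tau) = exp(Q_Y(tau)).
print(np.quantile(np.exp(y), tau), np.exp(np.quantile(y, tau)))   # approximately equal

# The mean does not: for a standard normal, E[exp(Y)] = exp(1/2), while exp(E[Y]) = 1.
print(np.exp(y).mean(), np.exp(y.mean()))
```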
The linear model $Q_{Y|X}(\tau)=X\beta_\tau$ may be misspecified: the true conditional quantile function may be $Q_{Y|X}(\tau)=f(X,\tau)$, where $f(\cdot,\tau)$ is possibly nonlinear. In that case the fitted linear quantile model $Q_{Y|X}(\tau)=X\beta_\tau$ can still be interpreted as the best linear approximation to $f(X,\tau)$, and $\beta_\tau$ can be interpreted as a weighted average of the derivatives $\nabla f(X,\tau)$. In particular, the hypothesis $H_0:\nabla f(x,\tau)=0$ for all $x$ can be examined by testing $H_0:\beta_\tau=0$ based on the estimator $\hat{\beta}_\tau$.
The goodness of fit for quantile regression for the $\tau$th quantile can be defined as
$$R^{1}(\tau)=1-\frac{\hat{V}_\tau}{\tilde{V}_\tau},$$
where $\hat{V}_\tau$ is the sum of weighted absolute deviations (the minimized objective) for the model including the covariates, and $\tilde{V}_\tau$ is the corresponding sum for a model that includes only an intercept.
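A sketch of this measure, computed from two statsmodels fits on simulated data (the pinball helper function and the data-generating process are assumptions of the example):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, tau = 500, 0.75
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + 0.25 * x, size=n)
X = sm.add_constant(x)

def pinball(u, tau):
    # Tilted absolute value (pinball) loss rho_tau(u).
    return u * (tau - (u < 0))

full = sm.QuantReg(y, X).fit(q=tau)                 # model with covariates
null = sm.QuantReg(y, np.ones((n, 1))).fit(q=tau)   # intercept-only model
V_hat = pinball(y - full.fittedvalues, tau).sum()
V_tilde = pinball(y - null.fittedvalues, tau).sum()

print("R^1(tau) =", 1 - V_hat / V_tilde)
```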
Because quantile regression does not normally assume a parametric likelihood for the conditional distributions of Y|X, Bayesian methods work with a working likelihood. A convenient choice is the asymmetric Laplace likelihood,[13] because the mode of the resulting posterior under a flat prior is the usual quantile regression estimate. The posterior inference, however, must be interpreted with care. Yang, Wang and He[14] provided a posterior variance adjustment for valid inference. In addition, Yang and He[15] showed that one can have asymptotically valid posterior inference if the working likelihood is chosen to be the empirical likelihood.
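A minimal sketch of the working-likelihood idea: a random-walk Metropolis sampler under the asymmetric Laplace likelihood with a flat prior and the scale fixed at 1. The simulated data and tuning constants are illustrative, and the sketch does not implement the posterior-variance adjustment mentioned above:

```python
import numpy as np

rng = np.random.default_rng(7)
n, tau = 300, 0.5
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + rng.standard_normal(n)

def log_working_likelihood(beta):
    # Asymmetric Laplace log-likelihood with scale fixed at 1: -sum rho_tau(y - X beta) + const.
    u = y - X @ beta
    return -np.sum(u * (tau - (u < 0)))

beta = np.zeros(2)
current = log_working_likelihood(beta)
draws = []
for _ in range(20_000):
    proposal = beta + 0.05 * rng.standard_normal(2)   # random-walk proposal
    cand = log_working_likelihood(proposal)
    if np.log(rng.uniform()) < cand - current:        # Metropolis accept/reject under a flat prior
        beta, current = proposal, cand
    draws.append(beta)

draws = np.array(draws[5_000:])                       # discard burn-in
print("posterior mean:", draws.mean(axis=0))          # close to the tau = 0.5 regression fit
```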
Beyond simple linear regression, several machine learning methods can be extended to quantile regression. A switch from the squared error to the tilted absolute value loss function (also known as the pinball loss[16]) allows gradient descent-based learning algorithms to learn a specified quantile instead of the mean. This means that all neural network and deep learning algorithms can be applied to quantile regression,[17] [18] which is then referred to as nonparametric quantile regression.[19] Tree-based learning algorithms are also available for quantile regression (see, e.g., Quantile Regression Forests,[20] a simple generalization of random forests).
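For example, scikit-learn's GradientBoostingRegressor accepts the pinball loss directly through loss="quantile"; the toy data below are only illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=(1000, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.2 + 0.05 * x[:, 0])

# One model per quantile; loss="quantile" is the pinball loss with quantile level alpha.
models = {tau: GradientBoostingRegressor(loss="quantile", alpha=tau).fit(x, y)
          for tau in (0.05, 0.5, 0.95)}

x_new = np.array([[2.0], [8.0]])
for tau, model in models.items():
    print(tau, model.predict(x_new))   # estimated conditional quantiles at x = 2 and x = 8
```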
If the response variable is subject to censoring, the conditional mean is not identifiable without additional distributional assumptions, but the conditional quantile is often identifiable. For recent work on censored quantile regression, see Portnoy[21] and Wang and Wang.[22]
Example (2): Let $Y^{c}=\max(0,Y)$. If $Q_{Y|X}(\tau)=X\beta_\tau$, then
$$Q_{Y^{c}|X}(\tau)=\max(0,X\beta_\tau),$$
by the equivariance of quantiles under the nondecreasing transformation $\max(0,\cdot)$.
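One way to estimate such a model is a Powell-type estimator that minimizes the pinball loss of $Y^{c}-\max(0,X\beta)$ directly. The sketch below (simulated data, Nelder-Mead with several starting values) is only illustrative and is distinct from the reweighting approach of Portnoy discussed next:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n, tau = 2000, 0.5
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y_latent = X @ np.array([-0.5, 1.0]) + rng.standard_normal(n)
y_cens = np.maximum(0.0, y_latent)               # only the censored response is observed

def objective(beta):
    # Censored pinball objective: mean of rho_tau(y^c - max(0, X beta)).
    u = y_cens - np.maximum(0.0, X @ beta)
    return np.mean(u * (tau - (u < 0)))

# The objective is nonconvex, so try several starting values and keep the best solution.
starts = [np.zeros(2), np.array([0.0, 1.0]), np.linalg.lstsq(X, y_cens, rcond=None)[0]]
best = min((minimize(objective, s, method="Nelder-Mead") for s in starts),
           key=lambda r: r.fun)
print(best.x)   # roughly recovers the latent median-regression coefficients (-0.5, 1.0)
```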
For random censoring on the response variables, the censored quantile regression of Portnoy (2003) provides consistent estimates of all identifiable quantile functions based on reweighting each censored point appropriately.
Censored quantile regression has close links to survival analysis.
The quantile regression loss needs to be adapted in the presence of heteroscedastic errors in order to be efficient.[25]
Numerous statistical software packages include implementations of quantile regression:
* Matlab, via the function quantreg[26]
* gretl, via the quantreg command.[27]
* R, via several packages, most notably quantreg by Roger Koenker,[28] but also gbm,[29] quantregForest,[30] qrnn[31] and qgam[32]
* Python, via Scikit-garden[33] and statsmodels[34]
* SAS, through proc quantreg (ver. 9.2)[35] and proc quantselect (ver. 9.3).[36]
* Stata, via the qreg command.[37] [38]
* Vowpal Wabbit, via --loss_function quantile.[39]
* Mathematica, via the package QuantileRegression.m[40] hosted at the MathematicaForPrediction project at GitHub.
* Wolfram Language, via the function QuantileRegression[41] hosted at the Wolfram Function Repository.