In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms;[1] often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.[2]
ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm. ARCH-type models are sometimes considered to be in the family of stochastic volatility models, although this is strictly incorrect since at time t the volatility is completely predetermined (deterministic) given previous values.[3]
To model a time series using an ARCH process, let \epsilon_t denote the error terms (return residuals, with respect to a mean process), i.e. the series terms. These \epsilon_t are split into a stochastic piece z_t and a time-dependent standard deviation \sigma_t characterizing the typical size of the terms, so that

\epsilon_t = \sigma_t z_t
The random variable z_t is a strong white noise process. The series \sigma_t^2 is modeled by

\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2,

where \alpha_0 > 0 and \alpha_i \ge 0 for i > 0.
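The recursion above is straightforward to simulate. The following is a minimal numpy sketch (the function name and parameter values are illustrative, not part of the original presentation) of an ARCH(1) process with Gaussian innovations:

```python
import numpy as np

def simulate_arch1(alpha0, alpha1, n, seed=0):
    """Simulate eps_t = sigma_t * z_t with sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)           # strong white noise z_t
    eps = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = alpha0 / (1.0 - alpha1)  # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

eps, sigma2 = simulate_arch1(alpha0=0.2, alpha1=0.5, n=10_000)
```

When \alpha_1 < 1 the unconditional variance is \alpha_0 / (1 - \alpha_1) = 0.4 here, which a long simulated sample approximately reproduces.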
An ARCH(q) model can be estimated using ordinary least squares. A method for testing whether the residuals \epsilon_t exhibit time-varying heteroskedasticity using the Lagrange multiplier test was proposed by Engle (1982). This procedure is as follows:

1. Estimate the best fitting autoregressive model AR(q):
   y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^{q} a_i y_{t-i} + \epsilon_t.
2. Obtain the squares of the error \hat\epsilon^2 and regress them on a constant and q lagged values:
   \hat\epsilon_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \hat\epsilon_{t-i}^2,
   where q is the length of ARCH lags.
3. The null hypothesis is that, in the absence of ARCH components, we have \alpha_i = 0 for all i = 1, \ldots, q. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated \alpha_i coefficients must be significant. In a sample of T residuals under the null hypothesis of no ARCH errors, the test statistic T'R^2 follows a \chi^2 distribution with q degrees of freedom, where T' is the number of equations in the model which fits the residuals vs the lags (i.e. T' = T - q). If T'R^2 is greater than the chi-square table value, we reject the null hypothesis and conclude there is an ARCH effect in the ARMA model. If T'R^2 is smaller than the chi-square table value, we do not reject the null hypothesis.
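The Lagrange multiplier procedure can be sketched directly with a least-squares regression. The helper below is an illustrative sketch, not a reference implementation (the `arch_lm_test` name and parameter values are assumptions):

```python
import numpy as np
from scipy.stats import chi2

def arch_lm_test(resid, q, level=0.05):
    """Regress squared residuals on a constant and q lags; return (T'R^2, critical value, reject)."""
    e2 = resid ** 2
    T = len(e2)
    y = e2[q:]
    X = np.column_stack([np.ones(T - q)] + [e2[q - i : T - i] for i in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1.0 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
    stat = (T - q) * r2                 # T' R^2 with T' = T - q
    crit = chi2.ppf(1.0 - level, df=q)  # chi-squared critical value, q degrees of freedom
    return stat, crit, stat > crit

rng = np.random.default_rng(42)
z = rng.standard_normal(2000)
stat_iid, crit, reject_iid = arch_lm_test(z, q=2)  # homoskedastic noise
eps = np.empty(2000)                               # ARCH(1) noise, alpha1 = 0.5
eps[0] = z[0]
for t in range(1, 2000):
    eps[t] = np.sqrt(0.2 + 0.5 * eps[t - 1] ** 2) * z[t]
stat_arch, _, reject_arch = arch_lm_test(eps, q=2)
```

The statistic is large for the simulated ARCH series and small for the homoskedastic one, as the test intends.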
If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.
In that case, the GARCH(p, q) model (where p is the order of the GARCH terms \sigma^2 and q is the order of the ARCH terms \epsilon^2), following the notation of the original paper, is given by

y_t = x_t' b + \epsilon_t

\epsilon_t \mid \psi_{t-1} \sim \mathcal{N}(0, \sigma_t^2)

\sigma_t^2 = \omega + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p \sigma_{t-p}^2 = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2
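The conditional-variance recursion can be written as a simple filter over a residual series. A minimal numpy sketch for the GARCH(1,1) case, assuming \alpha_1 + \beta_1 < 1 so the unconditional variance exists (function name and inputs are illustrative):

```python
import numpy as np

def garch11_filter(eps, omega, alpha, beta):
    """sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance as a start value
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

s2 = garch11_filter(np.array([0.1, -0.5, 0.2, 0.0]), omega=0.1, alpha=0.1, beta=0.8)
```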
Generally, when testing for heteroskedasticity in econometric models, the best test is the White test. However, when dealing with time series data, this means testing for ARCH and GARCH errors.
Exponentially weighted moving average (EWMA) is an alternative model in a separate class of exponential smoothing models. As an alternative to GARCH modelling it has some attractive properties such as a greater weight upon more recent observations, but also drawbacks such as an arbitrary decay factor that introduces subjectivity into the estimation.
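For comparison, a minimal EWMA variance recursion might look as follows; the decay factor `lam` is the arbitrary choice mentioned above (0.94 is a commonly cited value for daily data, but any choice is a modelling assumption):

```python
import numpy as np

def ewma_variance(eps, lam=0.94):
    """sigma_t^2 = (1 - lam) * eps_{t-1}^2 + lam * sigma_{t-1}^2."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = eps[0] ** 2                 # a common, but still arbitrary, start value
    for t in range(1, len(eps)):
        sigma2[t] = (1.0 - lam) * eps[t - 1] ** 2 + lam * sigma2[t - 1]
    return sigma2

s2 = ewma_variance(np.array([1.0, 2.0, 0.5]), lam=0.9)
```

Note that the recursion weights recent squared residuals more heavily the smaller `lam` is.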
The lag length p of a GARCH(p, q) process is established in three steps:

1. Estimate the best fitting AR(q) model
   y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^{q} a_i y_{t-i} + \epsilon_t.
2. Compute and plot the autocorrelations of \epsilon^2 by
   \rho(i) = \frac{\sum_{t=i+1}^{T} (\hat\epsilon_t^2 - \hat\sigma_t^2)(\hat\epsilon_{t-i}^2 - \hat\sigma_{t-i}^2)}{\sum_{t=1}^{T} (\hat\epsilon_t^2 - \hat\sigma_t^2)^2}
3. The asymptotic, that is for large samples, standard deviation of \rho(i) is 1/\sqrt{T}. Individual values that are larger than this indicate GARCH errors. To estimate the total number of lags, use the Ljung–Box test until the value of these is less than, say, 10% significant. The Ljung–Box Q-statistic follows a \chi^2 distribution with n degrees of freedom if the squared residuals \epsilon_t^2 are uncorrelated. The null hypothesis states that there are no ARCH or GARCH errors; rejecting the null thus means that such errors exist in the conditional variance.
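Step 2 can be sketched numerically. The helper below uses deviations of the squared residuals from their sample mean as a stand-in for \hat\epsilon_t^2 - \hat\sigma_t^2, which is an illustrative simplification rather than the exact quantity above:

```python
import numpy as np

def acf_squared(resid, nlags):
    """Sample autocorrelations of the squared residuals at lags 1..nlags."""
    d = resid ** 2 - np.mean(resid ** 2)    # demeaned squared residuals
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[i:] * d[:-i]) / denom for i in range(1, nlags + 1)])

rng = np.random.default_rng(1)
resid = rng.standard_normal(5000)           # no ARCH effects by construction
rho = acf_squared(resid, nlags=5)
band = 1.0 / np.sqrt(len(resid))            # asymptotic standard error of rho(i)
```

For this homoskedastic series every autocorrelation stays well inside the 1/\sqrt{T} band; values escaping the band would suggest GARCH errors.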
Nonlinear Asymmetric GARCH(1,1) (NAGARCH) is a model with the specification:

\sigma_t^2 = \omega + \alpha (\epsilon_{t-1} - \theta \sigma_{t-1})^2 + \beta \sigma_{t-1}^2,

where \alpha \ge 0, \beta \ge 0, \omega > 0 and \alpha(1 + \theta^2) + \beta < 1, which ensures the non-negativity and stationarity of the variance process. For stock returns, the parameter \theta is usually estimated to be positive; in this case, it reflects the leverage effect, whereby negative returns increase future volatility by a larger amount than positive returns of the same magnitude.
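The asymmetry can be seen in a single variance update. A minimal sketch (function name and parameter values are illustrative):

```python
import numpy as np

def nagarch_update(sigma2_prev, eps_prev, omega, alpha, beta, theta):
    """sigma_t^2 = omega + alpha * (eps_{t-1} - theta * sigma_{t-1})^2 + beta * sigma_{t-1}^2."""
    sigma_prev = np.sqrt(sigma2_prev)
    return omega + alpha * (eps_prev - theta * sigma_prev) ** 2 + beta * sigma2_prev

# With theta > 0, a negative shock lands further from zero inside the square,
# so it raises next-period variance more than a positive shock of equal size.
after_pos = nagarch_update(1.0, 0.5, omega=0.05, alpha=0.1, beta=0.8, theta=0.5)
after_neg = nagarch_update(1.0, -0.5, omega=0.05, alpha=0.1, beta=0.8, theta=0.5)
```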
This model should not be confused with the NARCH model and its NGARCH extension, introduced by Higgins and Bera in 1992.[6]
Integrated Generalized Autoregressive Conditional Heteroskedasticity (IGARCH) is a restricted version of the GARCH model, where the persistent parameters sum up to one, and imports a unit root in the GARCH process.[7] The condition for this is

\sum_{i=1}^{p} \beta_i + \sum_{i=1}^{q} \alpha_i = 1.
The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model by Nelson & Cao (1991) is another form of the GARCH model. Formally, an EGARCH(p, q):

\log \sigma_t^2 = \omega + \sum_{k=1}^{q} \beta_k g(Z_{t-k}) + \sum_{k=1}^{p} \alpha_k \log \sigma_{t-k}^2,

where g(Z_t) = \theta Z_t + \lambda (|Z_t| - E(|Z_t|)), \sigma_t^2 is the conditional variance, and \omega, \beta, \alpha, \theta and \lambda are coefficients. Z_t may be a standard normal variable or come from a generalized error distribution. The formulation for g(Z_t) allows the sign and the magnitude of Z_t to have separate effects on the volatility. This is particularly useful in an asset pricing context.

Since \log \sigma_t^2 may be negative, there are fewer restrictions on the parameters.
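An EGARCH(1,1) path can be simulated directly from the log-variance recursion. A minimal sketch with standard normal Z_t, for which E|Z| = \sqrt{2/\pi} (the function name and parameter values are illustrative):

```python
import numpy as np

def simulate_egarch11(omega, beta, alpha, theta, lam, n, seed=0):
    """log sigma_t^2 = omega + beta * g(Z_{t-1}) + alpha * log sigma_{t-1}^2,
    with g(Z) = theta * Z + lam * (|Z| - E|Z|) and standard normal Z."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    ez = np.sqrt(2.0 / np.pi)          # E|Z| for a standard normal Z
    log_s2 = np.empty(n)
    log_s2[0] = omega / (1.0 - alpha)  # stationary mean of log sigma^2 (E[g] = 0, |alpha| < 1)
    eps = np.empty(n)
    eps[0] = np.exp(0.5 * log_s2[0]) * z[0]
    for t in range(1, n):
        g = theta * z[t - 1] + lam * (np.abs(z[t - 1]) - ez)
        log_s2[t] = omega + beta * g + alpha * log_s2[t - 1]
        eps[t] = np.exp(0.5 * log_s2[t]) * z[t]
    return eps, np.exp(log_s2)

eps, s2 = simulate_egarch11(omega=-0.1, beta=0.2, alpha=0.95, theta=-0.1, lam=0.2, n=2000)
```

Because the recursion operates on \log \sigma_t^2, the implied variance is positive for any coefficient values, illustrating the point about fewer restrictions.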
The GARCH-in-mean (GARCH-M) model adds a heteroskedasticity term into the mean equation. It has the specification:

y_t = \beta x_t + \lambda \sigma_t + \epsilon_t

The residual \epsilon_t is defined as \epsilon_t = \sigma_t z_t.
The Quadratic GARCH (QGARCH) model by Sentana (1995) is used to model asymmetric effects of positive and negative shocks.

In the example of a GARCH(1,1) model, the residual process is

\epsilon_t = \sigma_t z_t,

where z_t is i.i.d. and

\sigma_t^2 = K + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2 + \phi \epsilon_{t-1}.
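The linear \phi \epsilon_{t-1} term is what produces the asymmetry; a one-step update makes this concrete (function name and parameter values are illustrative):

```python
def qgarch_update(sigma2_prev, eps_prev, K, alpha, beta, phi):
    """sigma_t^2 = K + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2 + phi * eps_{t-1}."""
    return K + alpha * eps_prev ** 2 + beta * sigma2_prev + phi * eps_prev

# With phi < 0 the linear term dampens positive shocks and amplifies negative ones.
after_pos = qgarch_update(1.0, 0.5, K=0.1, alpha=0.1, beta=0.8, phi=-0.2)
after_neg = qgarch_update(1.0, -0.5, K=0.1, alpha=0.1, beta=0.8, phi=-0.2)
```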
Similar to QGARCH, the Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model by Glosten, Jagannathan and Runkle (1993) also models asymmetry in the ARCH process. The suggestion is to model \epsilon_t = \sigma_t z_t, where z_t is i.i.d., and

\sigma_t^2 = K + \delta \sigma_{t-1}^2 + \alpha \epsilon_{t-1}^2 + \phi \epsilon_{t-1}^2 I_{t-1},

where I_{t-1} = 0 if \epsilon_{t-1} \ge 0, and I_{t-1} = 1 if \epsilon_{t-1} < 0.
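A simulated GJR-GARCH(1,1) path shows the indicator at work: variance following a negative shock is larger on average. The sketch below is illustrative (the function name, start value and parameters are assumptions; the start value uses the fact that for symmetric z_t the indicator is active half the time):

```python
import numpy as np

def simulate_gjr11(K, delta, alpha, phi, n, seed=0):
    """sigma_t^2 = K + delta*sigma_{t-1}^2 + alpha*eps_{t-1}^2 + phi*eps_{t-1}^2*I_{t-1},
    with I_{t-1} = 1 if eps_{t-1} < 0 and 0 otherwise."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    eps = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = K / (1.0 - delta - alpha - 0.5 * phi)  # unconditional variance, symmetric z
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        I = 1.0 if eps[t - 1] < 0 else 0.0
        sigma2[t] = K + delta * sigma2[t - 1] + (alpha + phi * I) * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

eps, s2 = simulate_gjr11(K=0.05, delta=0.8, alpha=0.05, phi=0.1, n=5000)
neg = eps[:-1] < 0  # which observations were preceded by a negative shock
```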
The Threshold GARCH (TGARCH) model by Zakoian (1994) is similar to GJR-GARCH. The specification is one on conditional standard deviation instead of conditional variance:

\sigma_t = K + \delta \sigma_{t-1} + \alpha_1^{+} \epsilon_{t-1}^{+} + \alpha_1^{-} \epsilon_{t-1}^{-},

where \epsilon_{t-1}^{+} = \epsilon_{t-1} if \epsilon_{t-1} > 0, and \epsilon_{t-1}^{+} = 0 if \epsilon_{t-1} \le 0. Likewise, \epsilon_{t-1}^{-} = \epsilon_{t-1} if \epsilon_{t-1} \le 0, and \epsilon_{t-1}^{-} = 0 if \epsilon_{t-1} > 0.
Hentschel's fGARCH model,[10] also known as Family GARCH, is an omnibus model that nests a variety of other popular symmetric and asymmetric GARCH models including APARCH, GJR, AVGARCH, NGARCH, etc.
In 2004, Claudia Klüppelberg, Alexander Lindner and Ross Maller proposed a continuous-time generalization of the discrete-time GARCH(1,1) process. The idea is to start with the GARCH(1,1) model equations

\epsilon_t = \sigma_t z_t,

\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2 = \alpha_0 + \alpha_1 \sigma_{t-1}^2 z_{t-1}^2 + \beta_1 \sigma_{t-1}^2,

and then to replace the strong white noise process z_t by the infinitesimal increments dL_t of a Lévy process (L_t)_{t \ge 0}, and the squared noise process z_t^2 by the increments d[L, L]_t^d, where

[L, L]_t^d = \sum_{s \in [0, t]} (\Delta L_s)^2, \quad t \ge 0,

is the purely discontinuous part of the quadratic variation process of L. The result is the following system of stochastic differential equations:

dG_t = \sigma_{t-} \, dL_t,

d\sigma_t^2 = (\beta - \eta \sigma_t^2) \, dt + \varphi \sigma_{t-}^2 \, d[L, L]_t^d,

where the positive parameters \beta, \eta and \varphi are determined by \alpha_0, \alpha_1 and \beta_1. Given an initial condition (G_0, \sigma_0^2), the system above has a unique solution (G_t, \sigma_t^2)_{t \ge 0}, which is then called the continuous-time GARCH (COGARCH) model.
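When the driving Lévy process is compound Poisson, the system can be simulated exactly: between jumps \sigma^2 mean-reverts deterministically along d\sigma^2 = (\beta - \eta \sigma^2) dt, and at each jump both G and \sigma^2 are updated using the left limit \sigma_{t-}. The sketch below assumes standard normal jump sizes and illustrative parameters:

```python
import numpy as np

def simulate_cogarch(beta, eta, phi, rate, T, seed=0):
    """COGARCH(1,1) driven by compound Poisson jumps with N(0,1) sizes.
    Between jumps sigma^2 solves d sigma^2 = (beta - eta * sigma^2) dt exactly;
    at a jump J: dG = sigma_- * J and d sigma^2 = phi * sigma_-^2 * J^2."""
    rng = np.random.default_rng(seed)
    t, G, sigma2 = 0.0, 0.0, beta / eta   # start sigma^2 at its mean-reversion level
    path = [(t, G, sigma2)]
    while True:
        dt = rng.exponential(1.0 / rate)  # exponential waiting time to the next jump
        if t + dt > T:
            break
        t += dt
        sigma2 = beta / eta + (sigma2 - beta / eta) * np.exp(-eta * dt)
        J = rng.standard_normal()
        G += np.sqrt(sigma2) * J          # uses the pre-jump (left-limit) volatility
        sigma2 += phi * sigma2 * J * J    # squared jump feeds the variance
        path.append((t, G, sigma2))
    return path

path = simulate_cogarch(beta=0.04, eta=0.5, phi=0.1, rate=5.0, T=100.0)
```

Since the jump contribution to \sigma^2 is non-negative and the drift reverts toward \beta/\eta > 0, the simulated variance stays positive throughout.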
Unlike the GARCH model, the Zero-Drift GARCH (ZD-GARCH) model by Li, Zhang, Zhu and Ling (2018)[12] lets the drift term \omega = 0, so the model is \epsilon_t = \sigma_t z_t, where z_t is i.i.d., and

\sigma_t^2 = \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2.

The ZD-GARCH model does not require \alpha_1 + \beta_1 = 1, and hence it nests the exponentially weighted moving average (EWMA) model. Since the drift term \omega = 0, the ZD-GARCH model is always non-stationary, and its statistical inference methods are quite different from those for the classical GARCH model. Based on the historical data, the parameters \alpha_1 and \beta_1 can be estimated.
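The zero-drift recursion is a special case of the GARCH(1,1) filter; setting \alpha_1 = 1 - \lambda and \beta_1 = \lambda recovers the EWMA recursion exactly. A minimal sketch (function name and values are illustrative; with \omega = 0 there is no unconditional variance, so a start value must be supplied):

```python
import numpy as np

def zd_garch_filter(eps, alpha1, beta1, sigma2_0):
    """sigma_t^2 = alpha1 * eps_{t-1}^2 + beta1 * sigma_{t-1}^2 (no drift term)."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = sigma2_0  # arbitrary start value; the model fixes no stationary level
    for t in range(1, len(eps)):
        sigma2[t] = alpha1 * eps[t - 1] ** 2 + beta1 * sigma2[t - 1]
    return sigma2

s2 = zd_garch_filter(np.array([1.0, 2.0, 0.5]), alpha1=0.2, beta1=0.7, sigma2_0=1.0)
```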
Spatial GARCH processes by Otto, Schmid and Garthoff (2018) [13] are considered as the spatial equivalent to the temporal generalized autoregressive conditional heteroscedasticity (GARCH) models. In contrast to the temporal ARCH model, in which the distribution is known given the full information set for the prior periods, the distribution is not straightforward in the spatial and spatiotemporal setting due to the interdependence between neighboring spatial locations. The spatial model is given by
\epsilon(s_i) = \sigma(s_i) z(s_i),

\sigma(s_i)^2 = \alpha_i + \sum_{v=1}^{n} \rho w_{iv} \epsilon(s_v)^2,

where s_i denotes the i-th spatial location, w_{iv} refers to the (i, v)-th entry of a spatial weight matrix, and w_{ii} = 0 for i = 1, \ldots, n.
In a different vein, the machine learning community has proposed the use of Gaussian process regression models to obtain a GARCH scheme.[14] This results in a nonparametric modelling scheme, which allows for: (i) advanced robustness to overfitting, since the model marginalises over its parameters to perform inference, under a Bayesian inference rationale; and (ii) capturing highly nonlinear dependencies without increasing model complexity.