In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For example, given some time series
X = \{X_1, X_2, \dots\}

then

L X_t = X_{t-1} for all t > 1,

or similarly in terms of the backshift operator B:

B X_t = X_{t-1} for all t > 1.

Equivalently, this definition can be written as

X_t = L X_{t+1} for all t \geq 1.
The lag operator (as well as backshift operator) can be raised to arbitrary integer powers so that
L^{-1} X_t = X_{t+1}

and

L^{k} X_t = X_{t-k}.
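As an illustrative sketch (the function name `lag` is my own, not a standard API), the lag operator and its integer powers can be mimicked on a finite series in Python, with `None` marking positions where the lagged index falls outside the observed series:

```python
def lag(x, k=1):
    """Apply the lag operator L^k to a finite series: (L^k x)[t] = x[t-k].

    Positions whose lagged index falls outside the series become None.
    A negative k gives the inverse lag (lead) operator L^{-|k|}.
    """
    n = len(x)
    return [x[t - k] if 0 <= t - k < n else None for t in range(n)]

X = [10, 20, 30, 40]
print(lag(X, 1))    # [None, 10, 20, 30]   -- L X_t = X_{t-1}
print(lag(X, -1))   # [20, 30, 40, None]   -- L^{-1} X_t = X_{t+1}
print(lag(X, 2))    # [None, None, 10, 20] -- L^2 X_t = X_{t-2}
```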
Polynomials of the lag operator can be used, and this is a common notation for ARMA (autoregressive moving average) models. For example,
\varepsilon_t = X_t - \sum_{i=1}^{p} \varphi_i X_{t-i} = \left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) X_t
specifies an AR(p) model.
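As a minimal sketch of this identity (the helper `ar_residuals` is hypothetical, not a library function), the residuals \varepsilon_t of an AR(p) model can be computed directly from the definition:

```python
def ar_residuals(x, phi):
    """Compute ε_t = X_t - Σ_{i=1}^p φ_i X_{t-i} for t = p, ..., len(x)-1,
    where phi = [φ_1, ..., φ_p]."""
    p = len(phi)
    return [x[t] - sum(phi[i] * x[t - 1 - i] for i in range(p))
            for t in range(p, len(x))]

# A series that exactly satisfies X_t = 0.5 X_{t-1} has zero residuals:
print(ar_residuals([2, 1, 0.5], [0.5]))  # [0.0, 0.0]
```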
A polynomial of lag operators is called a lag polynomial so that, for example, the ARMA model can be concisely specified as
\varphi(L) X_t = \theta(L) \varepsilon_t

where \varphi(L) and \theta(L) are the lag polynomials

\varphi(L) = 1 - \sum_{i=1}^{p} \varphi_i L^i

and

\theta(L) = 1 + \sum_{i=1}^{q} \theta_i L^i.
Polynomials of lag operators follow similar rules of multiplication and division as do numbers and polynomials of variables. For example,
X_t = \frac{\theta(L)}{\varphi(L)} \varepsilon_t

means the same thing as

\varphi(L) X_t = \theta(L) \varepsilon_t.
As with polynomials of variables, a polynomial in the lag operator can be divided by another one using polynomial long division. In general dividing one such polynomial by another, when each has a finite order (highest exponent), results in an infinite-order polynomial.
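A sketch of this long division, assuming lag polynomials are represented as coefficient lists indexed by the power of L (the name `poly_divide` is illustrative): dividing \theta(L) = 1 by \varphi(L) = 1 - 0.5L yields the infinite-order geometric series 1 + 0.5L + 0.25L^2 + \dots, truncated here to finitely many terms.

```python
def poly_divide(theta, phi, n_terms=6):
    """Expand ψ(L) = θ(L)/φ(L) as a power series in L.

    theta and phi are coefficient lists [c_0, c_1, ...] for c_0 + c_1 L + ...;
    returns the first n_terms coefficients ψ_0, ψ_1, ... obtained from the
    recurrence φ_0 ψ_j = θ_j - Σ_{i>=1} φ_i ψ_{j-i} (matching powers of L
    in ψ(L) φ(L) = θ(L))."""
    psi = []
    for j in range(n_terms):
        t_j = theta[j] if j < len(theta) else 0.0
        acc = t_j - sum(phi[i] * psi[j - i]
                        for i in range(1, min(j, len(phi) - 1) + 1))
        psi.append(acc / phi[0])
    return psi

# 1 / (1 - 0.5 L) expands to the geometric series Σ 0.5^i L^i:
print(poly_divide([1.0], [1.0, -0.5], 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```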
An annihilator operator, denoted [\ ]_+, removes the entries of a polynomial with negative powers of L, i.e. the future values.

Note that \varphi(1) denotes the sum of the coefficients:

\varphi(1) = 1 - \sum_{i=1}^{p} \varphi_i
See main article: Finite difference. In time series analysis, the first difference operator \Delta is a special case of a lag polynomial:

\begin{align} \Delta X_t &= X_t - X_{t-1} \\ \Delta X_t &= (1-L) X_t~. \end{align}

Similarly, the second difference operator works as follows:

\begin{align} \Delta(\Delta X_t) &= \Delta X_t - \Delta X_{t-1} \\ \Delta^2 X_t &= (1-L)\Delta X_t \\ \Delta^2 X_t &= (1-L)(1-L) X_t \\ \Delta^2 X_t &= (1-L)^2 X_t~. \end{align}
The above approach generalises to the i-th difference operator
\Delta^i X_t = (1-L)^i X_t~.
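As a sketch of this equivalence (both helper names are my own), repeated first differencing and the binomial expansion of (1-L)^i give the same result:

```python
from math import comb

def diff(x, i=1):
    """Apply Δ^i by differencing i times: ΔX_t = X_t - X_{t-1}."""
    for _ in range(i):
        x = [x[t] - x[t - 1] for t in range(1, len(x))]
    return x

def binom_diff(x, i=1):
    """Apply Δ^i via the binomial expansion of (1-L)^i:
    Δ^i X_t = Σ_{k=0}^{i} (-1)^k C(i,k) X_{t-k}."""
    return [sum((-1) ** k * comb(i, k) * x[t - k] for k in range(i + 1))
            for t in range(i, len(x))]

X = [1, 4, 9, 16, 25]      # X_t = (t+1)^2, so Δ^2 X_t is constant
print(diff(X, 2))          # [2, 2, 2]
print(binom_diff(X, 2))    # [2, 2, 2]
```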
It is common in stochastic processes to care about the expected value of a variable given a previous information set. Let \Omega_t be all information that is common knowledge at time t (often written as a subscript on the expectation operator); then the expected value of the realisation of X, j time-steps in the future, can be written equivalently as:

E[X_{t+j} \mid \Omega_t] = E_t[X_{t+j}].
With these time-dependent conditional expectations, there is a need to distinguish between the backshift operator (B), which only adjusts the date of the forecasted variable, and the lag operator (L), which adjusts equally the date of the forecasted variable and of the information set:

L^n E_t[X_{t+j}] = E_{t-n}[X_{t+j-n}],

B^n E_t[X_{t+j}] = E_t[X_{t+j-n}].
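For instance, substituting n = 1 and j = 2 into these two rules shows the difference concretely:

\begin{align} L\, E_t[X_{t+2}] &= E_{t-1}[X_{t+1}], \\ B\, E_t[X_{t+2}] &= E_t[X_{t+1}], \end{align}

so L shifts both the forecast target and the information set back one period, while B shifts only the forecast target.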