Correlogram Explained

A correlogram should not be confused with a scatter plot.

In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a plot of the sample autocorrelations r_h versus h (the time lags) is an autocorrelogram. If cross-correlation is plotted, the result is called a cross-correlogram.

The correlogram is a commonly used tool for checking randomness in a data set. If random, autocorrelations should be near zero for any and all time-lag separations. If non-random, then one or more of the autocorrelations will be significantly non-zero.

In addition, correlograms are used in the model identification stage for Box–Jenkins autoregressive moving average time series models. Autocorrelations should be near-zero for randomness; if the analyst does not check for randomness, then the validity of many of the statistical conclusions becomes suspect. The correlogram is an excellent way of checking for such randomness.

In multivariate analysis, correlation matrices shown as color-mapped images may also be called "correlograms" or "corrgrams".[1] [2] [3]

Applications

The correlogram can help provide answers to questions such as the following:[4]

1. Is the model Y = \mathrm{constant} + \mathrm{error} valid and sufficient?

2. Is the formula s_{\bar{Y}} = s/\sqrt{N} valid?

Importance

Randomness (along with fixed model, fixed variation, and fixed distribution) is one of the four assumptions that typically underlie all measurement processes. The randomness assumption is critically important for the following three reasons:

1. Most standard statistical tests depend on randomness. The validity of the test conclusions is directly linked to the validity of the randomness assumption.

2. Many commonly used statistical formulae depend on the randomness assumption, the most common being the formula for the standard deviation of the sample mean

s_{\bar{Y}} = \frac{s}{\sqrt{N}}

where s is the standard deviation of the data. Although heavily used, the results from using this formula are of no value unless the randomness assumption holds. (A sketch illustrating this point follows the list.)

3. For univariate data, the default model is

Y = \mathrm{constant} + \mathrm{error}

If the data are not random, this model is incorrect and invalid, and the estimates for the parameters (such as the constant) become nonsensical and invalid.
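The second reason can be illustrated numerically. The following is a minimal sketch (not from the source): it simulates positively autocorrelated AR(1) data and compares the usual s/\sqrt{N} value with the actual standard deviation of the sample mean estimated by Monte Carlo. The series length, number of repetitions and AR coefficient are arbitrary choices made for the illustration.

```python
# Sketch: for autocorrelated (non-random) data the formula s/sqrt(N) for the
# standard deviation of the sample mean can be badly misleading.
import numpy as np

rng = np.random.default_rng(0)
N, reps, phi = 200, 2000, 0.8   # sample size, Monte Carlo repetitions, AR(1) coefficient

def ar1_series(n, phi, rng):
    """Generate an AR(1) series y_t = phi * y_{t-1} + e_t (positively autocorrelated)."""
    e = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

means, claimed_se = [], []
for _ in range(reps):
    y = ar1_series(N, phi, rng)
    means.append(y.mean())
    claimed_se.append(y.std(ddof=1) / np.sqrt(N))   # s / sqrt(N)

print("true SD of the sample mean :", np.std(means))
print("average claimed s/sqrt(N)  :", np.mean(claimed_se))
# With phi = 0.8 the claimed value comes out several times too small.
```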

Estimation of autocorrelations

The autocorrelation coefficient at lag h is given by

r_h = \frac{c_h}{c_0}

where c_h is the autocovariance function

c_h = \frac{1}{N} \sum_{t=1}^{N-h} \left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)

and c_0 is the variance function

c_0 = \frac{1}{N} \sum_{t=1}^{N} \left(Y_t - \bar{Y}\right)^2

The resulting value of r_h will range between −1 and +1.
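A minimal sketch of the estimator above, assuming nothing beyond the formulas just given: it computes the 1/N autocovariances c_h and the ratios r_h = c_h / c_0 in Python. The function name and the white-noise test series are illustrative choices, not part of the source.

```python
# Sketch: sample autocorrelations r_h = c_h / c_0 with the 1/N autocovariance estimator.
import numpy as np

def autocorrelation(y, max_lag):
    """Return r_1, ..., r_max_lag using the 1/N autocovariance estimator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    dev = y - y.mean()
    c0 = np.sum(dev * dev) / n                                   # variance function c_0
    c = [np.sum(dev[: n - h] * dev[h:]) / n                      # autocovariance c_h
         for h in range(1, max_lag + 1)]
    return np.array(c) / c0                                      # r_h = c_h / c_0

rng = np.random.default_rng(1)
white_noise = rng.standard_normal(500)
print(autocorrelation(white_noise, 5))   # all values close to 0, and within [-1, 1]
```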

Alternate estimate

Some sources may use the following formula for the autocovariance function:

c_h = \frac{1}{N-h} \sum_{t=1}^{N-h} \left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)

Although this definition has less bias, the (1/N) formulation has some desirable statistical properties and is the form most commonly used in the statistics literature. See pages 20 and 49–50 in Chatfield for details.

In contrast to the definition above, this definition allows us to compute c_h in a slightly more intuitive way. Consider the sample Y_1, \ldots, Y_N, where Y_i \in \mathbb{R}^n for i = 1, \ldots, N. Then, let

X = \begin{bmatrix} Y_1 - \bar{Y} & \cdots & Y_N - \bar{Y} \end{bmatrix} \in \mathbb{R}^{n \times N}

We then compute the Gram matrix Q = X^\top X. Finally, c_h is computed as the sample mean of the h-th diagonal of Q. For example, the 0th diagonal (the main diagonal) of Q has N elements, and its sample mean corresponds to c_0. The 1st diagonal (to the right of the main diagonal) of Q has N − 1 elements, and its sample mean corresponds to c_1, and so on.
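As a sketch of this Gram-matrix view (for the scalar case n = 1), the following snippet builds Q = X^\top X and checks that the sample mean of its h-th diagonal matches the direct 1/(N−h) autocovariance estimate; the helper names and test data are made up for the illustration.

```python
# Sketch: c_h via the mean of the h-th diagonal of the Gram matrix Q = X^T X,
# compared with the direct 1/(N-h) autocovariance estimate.
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(100)
N = len(y)

X = (y - y.mean()).reshape(1, N)   # columns are Y_i - Ybar (here 1-dimensional)
Q = X.T @ X                        # N x N Gram matrix

def c_gram(h):
    """Sample mean of the h-th diagonal of Q (it has N - h elements)."""
    return np.diagonal(Q, offset=h).mean()

def c_direct(h):
    """Direct 1/(N-h) autocovariance estimate."""
    dev = y - y.mean()
    return np.sum(dev[: N - h] * dev[h:]) / (N - h)

for h in range(4):
    print(h, c_gram(h), c_direct(h))   # the two columns agree
```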

Statistical inference with correlograms

In the same graph one can draw upper and lower bounds for autocorrelation with significance level \alpha:

B = \pm z_{1-\alpha/2} \, \mathrm{SE}(r_h)

with r_h as the estimated autocorrelation at lag h.

If the autocorrelation is higher (lower) than this upper (lower) bound, the null hypothesis that there is no autocorrelation at and beyond a given lag is rejected at a significance level of \alpha. This test is an approximate one and assumes that the time series is Gaussian.

In the above, z_{1-\alpha/2} is the quantile of the standard normal distribution; SE is the standard error, which can be computed by Bartlett's formula for MA(ℓ) processes:

\mathrm{SE}(r_1) = \frac{1}{\sqrt{N}}

\mathrm{SE}(r_h) = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} r_i^2}{N}} \quad \text{for } h > 1.

In the example plotted, we can reject the null hypothesis that there is no autocorrelation between time-points which are separated by lags up to 4. For most longer periods one cannot reject the null hypothesis of no autocorrelation.

Note that there are two distinct formulas for generating the confidence bands:

1. If the correlogram is being used to test for randomness (i.e., there is no time dependence in the data), the following formula is recommended:

\pm \frac{z_{1-\alpha/2}}{\sqrt{N}}

where N is the sample size, z is the quantile function of the standard normal distribution and \alpha is the significance level. In this case, the confidence bands have fixed width that depends on the sample size.

2. Correlograms are also used in the model identification stage for fitting ARIMA models. In this case, a moving average model is assumed for the data and the following confidence bands should be generated:

\pm z_{1-\alpha/2} \sqrt{\frac{1}{N}\left(1 + 2\sum_{i=1}^{k} r_i^2\right)}

where k is the lag. In this case, the confidence bands increase as the lag increases.
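The two band formulas above can be sketched in a few lines of Python. The snippet below is an illustration, not part of the source: it evaluates the fixed-width band and the widening MA-model band for a set of hypothetical sample autocorrelations, using scipy for the normal quantile.

```python
# Sketch: the two confidence-band formulas for a correlogram.
import numpy as np
from scipy.stats import norm

def fixed_band(N, alpha=0.05):
    """+/- z_{1-alpha/2} / sqrt(N): the same half-width at every lag (randomness test)."""
    return norm.ppf(1 - alpha / 2) / np.sqrt(N)

def ma_band(r, N, k, alpha=0.05):
    """+/- z_{1-alpha/2} * sqrt((1 + 2 * sum_{i=1}^{k} r_i^2) / N): widens with the lag k."""
    z = norm.ppf(1 - alpha / 2)
    return z * np.sqrt((1 + 2 * np.sum(np.asarray(r[:k]) ** 2)) / N)

N = 200
r = np.array([0.55, 0.30, 0.12, 0.05, 0.01])   # hypothetical sample autocorrelations r_1..r_5
print("fixed-width band:", fixed_band(N))
for k in range(1, 6):
    print("lag", k, "MA-model band:", ma_band(r, N, k))   # half-widths grow with k
```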

Software

Correlograms are available in most general purpose statistical libraries.

Correlograms:

R: functions acf and pacf

Python (pandas): pandas.plotting.autocorrelation_plot[5]

Corrgrams:

R: package corrgram[2] [3]
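For example, in Python the following sketch draws a correlogram with pandas.plotting.autocorrelation_plot (an illustrative choice; the R functions named above produce the same kind of plot). The white-noise series is made up for the demonstration.

```python
# Sketch: draw a correlogram (autocorrelation plot) with pandas.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
series = pd.Series(rng.standard_normal(300))   # white noise: bars should stay near zero

pd.plotting.autocorrelation_plot(series)       # plots r_h against the lag h, with bands
plt.show()
```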

Notes and References

  1. Friendly, Michael (19 August 2002). "Corrgrams: Exploratory displays for correlation matrices". The American Statistician. 56 (4): 316–324. doi:10.1198/000313002533. Retrieved 19 January 2014.
  2. "CRAN – Package corrgram" (29 August 2013). cran.r-project.org. Retrieved 19 January 2014.
  3. "Quick-R: Correlograms". statmethods.net. Retrieved 19 January 2014.
  4. "1.3.3.1. Autocorrelation Plot". www.itl.nist.gov. Retrieved 2018-08-20.
  5. "Visualization § Autocorrelation plot".