Random effects model

In statistics, a random effects model, also called a variance components model, is a statistical model where the model parameters are random variables. It is a kind of hierarchical linear model, which assumes that the data being analysed are drawn from a hierarchy of different populations whose differences relate to that hierarchy. A random effects model is a special case of a mixed model.

Contrast this to the biostatistics definitions,[1] [2] [3] [4] [5] as biostatisticians use "fixed" and "random" effects to respectively refer to the population-average and subject-specific effects (and where the latter are generally assumed to be unknown, latent variables).

Qualitative description

Random effects models assist in controlling for unobserved heterogeneity when the heterogeneity is constant over time and not correlated with independent variables. This constant can be removed from longitudinal data through differencing, since taking a first difference will remove any time-invariant components of the model.[6]
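As a sketch of why differencing works (the simulated panel, coefficient value, and variable names below are illustrative assumptions, not from the source), first-differencing a simple linear panel model eliminates the time-invariant individual effect and leaves the slope estimable:

```python
# Minimal sketch: simulate a panel y_it = a_i + b*x_it + e_it and check that
# first-differencing removes the time-invariant effect a_i. All values assumed.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods, b = 200, 5, 1.5

a = rng.normal(size=(n_units, 1))           # unobserved, time-invariant heterogeneity
x = rng.normal(size=(n_units, n_periods))   # observed regressor
e = rng.normal(scale=0.5, size=(n_units, n_periods))
y = a + b * x + e                            # a_i enters every period identically

# First differences within each unit: the constant a_i cancels out.
dy = np.diff(y, axis=1).ravel()
dx = np.diff(x, axis=1).ravel()

# OLS on the differenced data recovers b without ever observing a_i.
b_hat = (dx @ dy) / (dx @ dx)
print(f"true b = {b}, estimate from differenced data = {b_hat:.3f}")
```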

Two common assumptions can be made about the individual specific effect: the random effects assumption and the fixed effects assumption. The random effects assumption is that the individual unobserved heterogeneity is uncorrelated with the independent variables. The fixed effect assumption is that the individual specific effect is correlated with the independent variables.

If the random effects assumption holds, the random effects estimator is more efficient than the fixed effects estimator.

Simple example

Suppose m large elementary schools are chosen randomly from among thousands in a large country. Suppose also that n pupils of the same age are chosen randomly at each selected school. Their scores on a standard aptitude test are ascertained. Let Y_{ij} be the score of the j-th pupil at the i-th school. A simple way to model this variable is

Y_{ij} = \mu + U_i + W_{ij},

where μ is the average test score for the entire population. In this model U_i is the school-specific random effect: it measures the difference between the average score at school i and the average score in the entire country. The term W_{ij} is the individual-specific random effect, i.e., it's the deviation of the j-th pupil's score from the average for the i-th school.
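A minimal simulation of this model (the values of μ, τ and σ below are hypothetical, chosen only for illustration) could look like this:

```python
# Sketch: simulate scores Y_ij = mu + U_i + W_ij for m schools and n pupils each.
# The parameter values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 30                        # schools, pupils sampled per school
mu, tau, sigma = 500.0, 20.0, 40.0   # overall mean, school sd, pupil sd (hypothetical)

U = rng.normal(0.0, tau, size=(m, 1))     # school-specific random effects U_i
W = rng.normal(0.0, sigma, size=(m, n))   # pupil-specific random effects W_ij
Y = mu + U + W                            # Y[i, j] = score of pupil j at school i

print("grand mean of simulated scores:", round(Y.mean(), 1))
print("spread of school means:", round(Y.mean(axis=1).std(ddof=1), 1))
```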

The model can be augmented by including additional explanatory variables, which would capture differences in scores among different groups. For example:

Y_{ij} = \mu + \beta_1 \mathrm{Sex}_{ij} + \beta_2 \mathrm{ParentsEduc}_{ij} + U_i + W_{ij},

where Sex_{ij} is a binary dummy variable and ParentsEduc_{ij} records, say, the average education level of a child's parents. This is a mixed model, not a purely random effects model, as it introduces fixed-effects terms for Sex and Parents' Education.
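One common way to fit such a model in practice is with the MixedLM class in the Python package statsmodels; the sketch below simulates hypothetical data (column names, coefficients, and variance values are all assumptions) and fits fixed effects for the two covariates with a random intercept for each school:

```python
# Sketch: random-intercept mixed model fit with statsmodels. Data are simulated
# and the column names (score, sex, parents_educ, school) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
m, n = 40, 25
school = np.repeat(np.arange(m), n)
sex = rng.integers(0, 2, size=m * n)                  # binary dummy variable
parents_educ = rng.normal(12.0, 2.0, size=m * n)      # years of parental education
u = rng.normal(0.0, 15.0, size=m)[school]             # school random intercepts
score = 450 + 10 * sex + 5 * parents_educ + u + rng.normal(0.0, 30.0, size=m * n)

df = pd.DataFrame({"score": score, "sex": sex,
                   "parents_educ": parents_educ, "school": school})

# Fixed effects for sex and parents' education, random intercept per school.
model = smf.mixedlm("score ~ sex + parents_educ", data=df, groups=df["school"])
result = model.fit()
print(result.summary())
```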

Variance components

The variance of Y_{ij} is the sum of the variances τ² and σ² of U_i and W_{ij} respectively.

Let

\overline{Y}_{i\bullet} = \frac{1}{n} \sum_{j=1}^{n} Y_{ij}

be the average, not of all scores at the i-th school, but of those at the i-th school that are included in the random sample. Let

\overline{Y}_{\bullet\bullet} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} Y_{ij}

be the grand average.

Let

SSW = \sum_{i=1}^{m} \sum_{j=1}^{n} (Y_{ij} - \overline{Y}_{i\bullet})^2

SSB = n \sum_{i=1}^{m} (\overline{Y}_{i\bullet} - \overline{Y}_{\bullet\bullet})^2

be respectively the sum of squares due to differences within groups and the sum of squares due to differences between groups. Then it can be shown that

\frac{1}{m(n-1)} \operatorname{E}(SSW) = \sigma^2

and

\frac{1}{(m-1)n} \operatorname{E}(SSB) = \frac{\sigma^2}{n} + \tau^2.

These "expected mean squares" can be used as the basis for estimation of the "variance components" σ2 and τ2.

The ratio τ²/(τ² + σ²) is also called the intraclass correlation coefficient: it is the proportion of the total variance that is attributable to differences between schools.
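A short method-of-moments ("ANOVA estimator") sketch of this, applied to simulated data with assumed true values of σ² and τ², is:

```python
# Sketch: estimate sigma^2 and tau^2 from SSW and SSB in a balanced design,
# then form the intraclass correlation tau^2 / (tau^2 + sigma^2). Values assumed.
import numpy as np

rng = np.random.default_rng(3)
m, n, mu, tau, sigma = 50, 30, 500.0, 20.0, 40.0
Y = mu + rng.normal(0.0, tau, (m, 1)) + rng.normal(0.0, sigma, (m, n))

school_means = Y.mean(axis=1, keepdims=True)
grand_mean = Y.mean()

SSW = ((Y - school_means) ** 2).sum()
SSB = n * ((school_means - grand_mean) ** 2).sum()

sigma2_hat = SSW / (m * (n - 1))                  # E[SSW] / (m(n-1)) = sigma^2
tau2_hat = SSB / ((m - 1) * n) - sigma2_hat / n   # E[SSB] / ((m-1)n) = sigma^2/n + tau^2
icc_hat = tau2_hat / (tau2_hat + sigma2_hat)      # intraclass correlation

print(f"sigma^2: true {sigma**2:.0f}, estimated {sigma2_hat:.1f}")
print(f"tau^2:   true {tau**2:.0f}, estimated {tau2_hat:.1f}")
print(f"intraclass correlation estimate: {icc_hat:.3f}")
```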

Marginal likelihood

For random effects models, estimation and inference are typically based on the marginal likelihood, obtained by integrating the random effects out of the joint likelihood of the data and the random effects.[7]
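For example, under the standard (here explicitly assumed) normality assumptions for U_i and W_{ij}, the marginal likelihood of the random-intercept model above is obtained by integrating the school effects out of the joint density:

L(\mu, \tau^2, \sigma^2) = \prod_{i=1}^{m} \int_{-\infty}^{\infty} \left[ \prod_{j=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(y_{ij} - \mu - u_i)^2}{2\sigma^2} \right) \right] \frac{1}{\sqrt{2\pi\tau^2}} \exp\left( -\frac{u_i^2}{2\tau^2} \right) du_i ,

and maximizing this function over (\mu, \tau^2, \sigma^2) yields the maximum likelihood estimates of the mean and the variance components.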

Applications

Random effects models used in practice include the Bühlmann model of insurance contracts and the Fay-Herriot model used for small area estimation.


Notes and References

  1. Diggle, Peter J.; Heagerty, Patrick; Liang, Kung-Yee; Zeger, Scott L. (2002). Analysis of Longitudinal Data (2nd ed.). Oxford University Press. pp. 169–171. ISBN 0-19-852484-6.
  2. Fitzmaurice, Garrett M.; Laird, Nan M.; Ware, James H. (2004). Applied Longitudinal Analysis. Hoboken: John Wiley & Sons. pp. 326–328. ISBN 0-471-21487-6.
  3. Laird, Nan M.; Ware, James H. (1982). "Random-Effects Models for Longitudinal Data". Biometrics. 38 (4): 963–974. doi:10.2307/2529876. JSTOR 2529876. PMID 7168798.
  4. Gardiner, Joseph C.; Luo, Zhehui; Roman, Lee Anne (2009). "Fixed effects, random effects and GEE: What are the differences?". Statistics in Medicine. 28 (2): 221–239. doi:10.1002/sim.3478. PMID 19012297.
  5. Gomes, Dylan G. E. (2022). "Should I use fixed effects or random effects when I have fewer than five levels of a grouping factor in a mixed-effects model?". PeerJ. 10: e12794. doi:10.7717/peerj.12794. PMID 35116198. PMC 8784019.
  6. Wooldridge, Jeffrey (2010). Econometric Analysis of Cross Section and Panel Data (2nd ed.). Cambridge, Mass.: MIT Press. p. 252. ISBN 9780262232586. OCLC 627701062.
  7. Hedeker, D.; Gibbons, R. D. (2006). Longitudinal Data Analysis. Wiley. p. 163. https://books.google.de/books?id=f9p9iIgzQSQC&pg=PA163