In statistics, a random effects model, also called a variance components model, is a statistical model where the model parameters are random variables. It is a kind of hierarchical linear model, which assumes that the data being analysed are drawn from a hierarchy of different populations whose differences relate to that hierarchy. A random effects model is a special case of a mixed model.
Contrast this with the biostatistics definitions,[1][2][3][4][5] as biostatisticians use "fixed" and "random" effects to refer respectively to the population-average and subject-specific effects (where the latter are generally assumed to be unknown, latent variables).
Random effects models assist in controlling for unobserved heterogeneity when that heterogeneity is constant over time and not correlated with the independent variables. Such time-invariant heterogeneity can be removed from longitudinal data through differencing, since taking a first difference removes any time-invariant components of the model.[6]
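As an illustration, the following minimal Python sketch (the panel dimensions, slope, and noise scale are arbitrary assumptions chosen for demonstration) shows that first-differencing a panel removes a time-invariant individual component:

```python
import numpy as np

rng = np.random.default_rng(1)
m, t = 4, 6                              # individuals, time periods (illustrative)
alpha = rng.normal(size=(m, 1))          # unobserved time-invariant heterogeneity
x = rng.normal(size=(m, t))              # observed regressor
y = 2.0 * x + alpha + rng.normal(scale=0.1, size=(m, t))

dy = np.diff(y, axis=1)                  # y_{it} - y_{i,t-1}: alpha cancels out
dx = np.diff(x, axis=1)
beta_hat = (dx * dy).sum() / (dx ** 2).sum()
print(beta_hat)                          # close to the true slope 2.0
```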
Two common assumptions can be made about the individual specific effect: the random effects assumption and the fixed effects assumption. The random effects assumption is that the individual unobserved heterogeneity is uncorrelated with the independent variables. The fixed effects assumption is that the individual specific effect is correlated with the independent variables.
If the random effects assumption holds, the random effects estimator is more efficient than the fixed effects estimator.
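A small Monte Carlo sketch can make this efficiency comparison concrete. The simulation below rests on several simplifying assumptions: a single regressor, a balanced panel, illustrative variance components, and, for brevity, GLS quasi-demeaning with the true variance ratio rather than a feasible estimate. Under the random effects assumption both estimators are centred on the true slope, but the random effects (GLS) estimator shows the smaller sampling variability:

```python
import numpy as np

rng = np.random.default_rng(0)
m, t = 100, 5            # individuals, time periods (illustrative)
beta = 2.0               # true slope
tau2, sigma2 = 1.0, 1.0  # variances of the individual effect and the noise

def draw_panel():
    u = rng.normal(0.0, np.sqrt(tau2), size=(m, 1))  # effect uncorrelated with x
    x = rng.normal(size=(m, t))
    eps = rng.normal(0.0, np.sqrt(sigma2), size=(m, t))
    return x, beta * x + u + eps

def fixed_effects(x, y):
    # Within estimator: demeaning by individual removes u entirely.
    xd = x - x.mean(axis=1, keepdims=True)
    yd = y - y.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd ** 2).sum()

def random_effects(x, y):
    # GLS via quasi-demeaning; uses the true theta for simplicity.
    theta = 1.0 - np.sqrt(sigma2 / (sigma2 + t * tau2))
    xq = x - theta * x.mean(axis=1, keepdims=True)
    yq = y - theta * y.mean(axis=1, keepdims=True)
    return (xq * yq).sum() / (xq ** 2).sum()

fe_draws, re_draws = [], []
for _ in range(2000):
    x, y = draw_panel()
    fe_draws.append(fixed_effects(x, y))
    re_draws.append(random_effects(x, y))

print("FE: mean %.3f, sd %.4f" % (np.mean(fe_draws), np.std(fe_draws)))
print("RE: mean %.3f, sd %.4f" % (np.mean(re_draws), np.std(re_draws)))
```

In practice the quasi-demeaning weight must itself be estimated from the data (feasible GLS), but the efficiency ranking under the random effects assumption is the same.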
Suppose m large elementary schools are chosen randomly from among thousands in a large country. Suppose also that n pupils of the same age are chosen randomly at each selected school. Their scores on a standard aptitude test are ascertained. Let Yij be the score of the jth pupil at the ith school. A simple way to model this variable is
Y_{ij} = \mu + U_i + W_{ij},

where \mu is the average test score for the entire population, U_i is the school-specific random effect, and W_{ij} is the individual-specific error.
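A minimal simulation of this model in Python (the values of \mu, \tau, \sigma, m, and n below are arbitrary assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 30, 25                            # schools, pupils per school
mu, tau, sigma = 500.0, 20.0, 50.0       # assumed population values

U = rng.normal(0.0, tau, size=m)         # school-specific random effects U_i
W = rng.normal(0.0, sigma, size=(m, n))  # pupil-specific errors W_ij
Y = mu + U[:, None] + W                  # Y[i, j]: score of pupil j at school i
print(Y.mean())                          # close to mu
```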
The model can be augmented by including additional explanatory variables, which would capture differences in scores among different groups. For example:
Y_{ij} = \mu + \beta_1 \mathrm{Sex}_{ij} + \beta_2 \mathrm{ParentsEduc}_{ij} + U_i + W_{ij}.
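As a sketch, one way to fit such an augmented model in Python is with statsmodels' MixedLM, which estimates the fixed coefficients together with a random intercept per group; the simulated data and column names below are illustrative assumptions, not part of the model itself:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
m, n = 30, 25
school = np.repeat(np.arange(m), n)
sex = rng.integers(0, 2, size=m * n)
parents_educ = rng.normal(12, 3, size=m * n)
u = rng.normal(0, 20, size=m)            # school random effects
score = (450 + 10 * sex + 4 * parents_educ
         + u[school] + rng.normal(0, 50, size=m * n))

data = pd.DataFrame({"score": score, "sex": sex,
                     "parents_educ": parents_educ, "school": school})
model = smf.mixedlm("score ~ sex + parents_educ", data, groups=data["school"])
result = model.fit()
print(result.summary())  # fixed effects beta_1, beta_2 and variance components
```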
The variance of Y_{ij} is the sum of the variances τ² and σ² of U_i and W_{ij} respectively.
Let

\overline{Y}_{i\bullet} = \frac{1}{n} \sum_{j=1}^{n} Y_{ij}

be the sample mean of the ith school, and

\overline{Y}_{\bullet\bullet} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} Y_{ij}

be the grand average.
Let

SSW = \sum_{i=1}^{m} \sum_{j=1}^{n} (Y_{ij} - \overline{Y}_{i\bullet})^2

SSB = n \sum_{i=1}^{m} (\overline{Y}_{i\bullet} - \overline{Y}_{\bullet\bullet})^2
be respectively the sum of squares due to differences within groups and the sum of squares due to differences between groups. Then it can be shown that
\frac{1}{m(n-1)} E(SSW) = \sigma^2

and

\frac{1}{(m-1)n} E(SSB) = \frac{\sigma^2}{n} + \tau^2.
These "expected mean squares" can be used as the basis for estimation of the "variance components" σ2 and τ2.
The ratio τ²/(τ² + σ²), the proportion of the total variance attributable to differences between schools, is also called the intraclass correlation coefficient.
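A short numpy sketch of these method-of-moments ("ANOVA") estimators on simulated data (the values of \mu, \tau, \sigma, m, and n are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 200, 10                      # schools, pupils per school (assumed)
mu, tau, sigma = 500.0, 20.0, 50.0  # assumed population values

Y = mu + rng.normal(0, tau, size=(m, 1)) + rng.normal(0, sigma, size=(m, n))

ybar_i = Y.mean(axis=1)             # school means \overline{Y}_{i.}
ybar = Y.mean()                     # grand mean   \overline{Y}_{..}

SSW = ((Y - ybar_i[:, None]) ** 2).sum()
SSB = n * ((ybar_i - ybar) ** 2).sum()

sigma2_hat = SSW / (m * (n - 1))                 # E(SSW)/(m(n-1)) = sigma^2
tau2_hat = SSB / ((m - 1) * n) - sigma2_hat / n  # E(SSB)/((m-1)n) = sigma^2/n + tau^2
icc_hat = tau2_hat / (tau2_hat + sigma2_hat)     # intraclass correlation

print(sigma2_hat, tau2_hat, icc_hat)  # roughly 2500, 400 and 0.14
```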
For random effects models, inference is often based on marginal likelihoods, obtained by integrating the random effects out of the joint likelihood.[7]
Random effects models used in practice include the Bühlmann model of insurance contracts and the Fay–Herriot model used for small area estimation.