In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects. BLUP was derived by Charles Roy Henderson in 1950, but the term "best linear unbiased predictor" (or "prediction") seems not to have been used until 1962.[1] "Best linear unbiased predictions" (BLUPs) of random effects are analogous to best linear unbiased estimates (BLUEs) (see Gauss–Markov theorem) of fixed effects. The distinction arises because it is conventional to talk about estimating fixed effects but about predicting random effects; the two terms are otherwise equivalent. (This is somewhat strange since the random effects have already been "realized"; they already exist. The term "prediction" may stem from the field of animal breeding in which Henderson worked, where the random effects were usually genetic merit, which could be used to predict the quality of offspring (Robinson[1] page 28).) However, the equations for the "fixed" effects and for the random effects are different.
In practice, it is often the case that the parameters associated with the random effect(s) term(s) are unknown; these parameters are the variances of the random effects and residuals. Typically the parameters are estimated and plugged into the predictor, leading to the empirical best linear unbiased predictor (EBLUP). Notice that by simply plugging the estimated parameters into the predictor, additional variability is left unaccounted for, leading to overly optimistic prediction variances for the EBLUP.
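As an illustrative sketch (not from the source), consider a balanced one-way random-effects model y_{ij} = \mu + \xi_i + \varepsilon_{ij}, with hypothetical values chosen here for the group count, replicates, and variance components. The variances can be estimated by ANOVA-type moment estimators and plugged into the shrinkage weight, giving the EBLUP of each group effect:

```python
import numpy as np

# Hypothetical simulation settings (assumptions for illustration):
# m groups, r replicates per group, true variance components below.
rng = np.random.default_rng(1)
m, r = 30, 5
mu, var_xi, var_eps = 10.0, 2.0, 1.0

xi = rng.normal(0.0, np.sqrt(var_xi), m)                  # random effects
y = mu + xi[:, None] + rng.normal(0.0, np.sqrt(var_eps), (m, r))

# ANOVA (method-of-moments) estimates of the variance components.
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (r - 1))
msb = r * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (m - 1)
var_eps_hat = msw                        # E[msw] = var_eps
var_xi_hat = max((msb - msw) / r, 0.0)   # E[msb] = var_eps + r * var_xi

# Plug the estimates into the BLUP shrinkage weight -> EBLUP:
# each group mean is shrunk toward the grand mean.
lam_hat = var_xi_hat / (var_xi_hat + var_eps_hat / r)
eblup = y.mean() + lam_hat * (y.mean(axis=1) - y.mean())
print(lam_hat)
```

The extra uncertainty from estimating the variance components disappears from view once they are plugged in, which is exactly why naive EBLUP prediction variances tend to be too optimistic.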
Best linear unbiased predictions are similar to empirical Bayes estimates of random effects in linear mixed models, except that in the latter case, where the weights depend on unknown variance components, those unknown variances are replaced by sample-based estimates.
Suppose that the model for observations is written as
Y_j = \mu + x_j^T \beta + \xi_j + \varepsilon_j,
where \mu is a parameter (the model intercept), Y_j is the jth observation, x_j is a vector of covariates for that observation, \beta is a vector of regression coefficients, and \xi_j and \varepsilon_j are respectively the random effect and observation error for observation j, assumed uncorrelated with known variances \sigma^2_\xi and \sigma^2_\varepsilon.
The BLUP problem of providing an estimate of the observation-error-free value for the kth observation,
\widetilde{Y}_k = \mu + x_k^T \beta + \xi_k,
can be formulated as requiring that the coefficients of a linear predictor, defined as
\widehat{Y}_k = \sum_{j=1}^{n} c_{j,k} Y_j,
should be chosen so as to minimise the variance of the prediction error,
V = \operatorname{Var}(\widetilde{Y}_k - \widehat{Y}_k),
subject to the condition that the predictor is unbiased,
\operatorname{E}(\widetilde{Y}_k - \widehat{Y}_k) = 0.
In contrast to the case of best linear unbiased estimation, the "quantity to be estimated", \widetilde{Y}_k, not only has a contribution from a random element, but one of the observed quantities (specifically Y_k, which contributes to \widehat{Y}_k) also has a contribution from that same random element, \xi_k.
In contrast to BLUE, BLUP takes into account known or estimated variances.[2]
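The constrained minimisation above can be illustrated with a small simulation. This is a sketch under assumed values (no covariates, i.e. x_j = 0, and known variances \sigma^2_\xi = 1, \sigma^2_\varepsilon = 0.5), in which the BLUP of \mu + \xi_k shrinks Y_k toward the mean by \lambda = \sigma^2_\xi / (\sigma^2_\xi + \sigma^2_\varepsilon):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, var_xi, var_eps = 2.0, 1.0, 0.5   # assumed known variances
n, reps = 50, 20_000

# Shrinkage factor: the fraction of the deviation of Y_k from the mean
# attributable to the random effect xi_k rather than observation error.
lam = var_xi / (var_xi + var_eps)

err_blup, err_naive = [], []
for _ in range(reps):
    xi = rng.normal(0.0, np.sqrt(var_xi), n)
    y = mu + xi + rng.normal(0.0, np.sqrt(var_eps), n)
    target = mu + xi[0]                 # error-free value for k = 0
    ybar = y.mean()                     # BLUE of mu (no covariates)
    pred = ybar + lam * (y[0] - ybar)   # linear predictor sum_j c_{j,k} Y_j
    err_blup.append(pred - target)
    err_naive.append(y[0] - target)

print(np.mean(err_blup))                    # near 0: unbiased
print(np.var(err_blup), np.var(err_naive))  # BLUP beats the raw observation
```

Here the implied coefficients are c_{k,k} = \lambda + (1-\lambda)/n and c_{j,k} = (1-\lambda)/n for j \neq k, so the predictor is indeed a linear combination of all the observations, as in the definition above.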
Henderson explored breeding from a statistical point of view. His work assisted the development of the selection index (SI) and estimated breeding value (EBV). These statistical methods influenced the artificial insemination stud rankings used in the United States. These early statistical methods are sometimes confused with the BLUP now common in livestock breeding.
The actual term BLUP originated out of work at the University of Guelph in Canada by Daniel Sorensen and Brian Kennedy, in which they extended Henderson's results to a model that includes several cycles of selection.[3] This model was popularized by the University of Guelph in the dairy industry under the name BLUP. Further work by the University showed BLUP's superiority over EBV and SI leading to it becoming the primary genetic predictor.
There is thus confusion between the BLUP model popularized above and the best linear unbiased prediction statistical method, which was considered too theoretical for general use. The model was supplied to farmers as software for use on computers.
In Canada, all dairies report nationally, and genetic information was shared, making it the largest genetic pool and thus the largest source of improvements. This, together with BLUP, drove a rapid increase in the quality of Holstein cattle.