Control functions (also known as two-stage residual inclusion) are statistical methods to correct for endogeneity problems by modelling the endogeneity in the error term. The approach thereby differs in important ways from other models that try to account for the same econometric problem. Instrumental variables, for example, model the endogenous variable X as an (often invertible) function of a relevant and exogenous instrument Z. Panel analysis uses special data properties to difference out unobserved heterogeneity that is assumed to be fixed over time.
Control functions were introduced by Heckman and Robb,[1] although the principle can be traced back to earlier papers.[2] A particular reason for their popularity is that they work for non-invertible models (such as discrete choice models) and allow for heterogeneous effects, where effects at the individual level can differ from effects at the aggregate level.[3] A well-known example of the control function approach is the Heckman correction.
Assume we start from a standard endogenous variable setup with additive errors, where X is an endogenous variable and Z is an exogenous variable that can serve as an instrument:

Y = g(X) + U

X = \pi(Z) + V

together with the conditional mean assumption \operatorname{E}[U \mid Z, V] = \operatorname{E}[U \mid V] = h(V).

A popular instrumental variable approach is to use a two-step procedure: estimate the first-stage equation for X first and then use the estimates of this first step to estimate the outcome equation for Y in a second step. The control function approach, however, uses the fact that this model implies

\operatorname{E}[Y \mid X, Z] = g(X) + \operatorname{E}[U \mid X, Z] = g(X) + \operatorname{E}[U \mid V] = g(X) + h(V)
The function h(V) is effectively the control function that models the endogeneity, and it is from this term that the econometric approach takes its name.[4]
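In this additive setting the implication above can be used directly: regress X on Z, and include the first-stage residuals (an estimate of V) as an extra regressor in the outcome equation, so that the residual term plays the role of h(V). A minimal sketch on simulated data, assuming linear g and \pi (all variable names and parameter values here are illustrative, not part of the original presentation):

```python
# Two-stage residual inclusion (control function) for the additive linear case,
# using simulated data; variable names and parameter values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

z = rng.normal(size=n)              # exogenous instrument Z
common = rng.normal(size=n)         # shared shock creating endogeneity
v = rng.normal(size=n) + common     # first-stage error V
u = rng.normal(size=n) + common     # outcome error U, correlated with V
x = 0.8 * z + v                     # endogenous regressor X = pi(Z) + V
y = 1.5 * x + u                     # outcome Y = g(X) + U with g linear

# First stage: regress X on Z and keep the residuals as an estimate of V.
v_hat = sm.OLS(x, sm.add_constant(z)).fit().resid

# Second stage: include the residuals as an extra regressor; their coefficient
# estimates the control function h(V) = rho * V under linearity.
X2 = sm.add_constant(np.column_stack([x, v_hat]))
print(sm.OLS(y, X2).fit().params)   # slope on x is now consistent for 1.5
```

In the linear additive case this estimator coincides with two-stage least squares; the advantage of the residual-inclusion form is that it carries over to nonlinear models such as the exponential regression example below.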
In a Rubin causal model potential outcomes framework, where Y1 is the outcome variable for those whose participation indicator D equals 1, the control function approach can be used to model the conditional mean of Y1 given X, Z and D, as long as the potential outcomes Y0 and Y1 are independent of D conditional on X and Z.[5]
Since the second-stage regression includes generated regressors (the first-stage residuals), its variance-covariance matrix needs to be adjusted.[6][7]
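One practical way to obtain valid standard errors is to bootstrap both estimation steps together, so that first-stage estimation error is reflected in the second-stage inference. A short sketch continuing the simulated example above (the resampling scheme is illustrative and not necessarily the adjustment proposed in the cited references):

```python
# Account for the generated regressor by bootstrapping both stages jointly,
# resampling observations and re-running the full two-step routine.
# Continues the simulated example above (y, x, z, n, rng already defined).
def two_step(y, x, z):
    v_hat = sm.OLS(x, sm.add_constant(z)).fit().resid
    X2 = sm.add_constant(np.column_stack([x, v_hat]))
    return sm.OLS(y, X2).fit().params

boot = np.empty((500, 3))
for b in range(500):
    idx = rng.integers(0, n, size=n)      # resample with replacement
    boot[b] = two_step(y[idx], x[idx], z[idx])

print(boot.std(axis=0, ddof=1))           # bootstrap standard errors
```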
Wooldridge and Terza provide a methodology to both deal with and test for endogeneity within the exponential regression framework, which the following discussion follows closely.[8] While the example focuses on a Poisson regression model, it is possible to generalize to other exponential regression models, although this may come at the cost of additional assumptions (e.g. for binary response or censored data models).
Assume the following exponential regression model, where a_i is an unobserved term. We allow for correlation between a_i and x_i (implying x_i is possibly endogenous), but no such correlation between a_i and z_i is allowed:

\operatorname{E}[y_i \mid x_i, z_i, a_i] = \exp(x_i b_0 + z_i c_0 + a_i)

The variables z_i serve as instruments for the possibly endogenous x_i, with x_i assumed to follow a linear reduced form in the instruments. The usual rank condition is needed to ensure identification. The endogeneity is then modeled in the following way, where \rho determines the severity of the endogeneity, v_i is the reduced-form error, and e_i is an error term independent of v_i:

a_i = v_i \rho + e_i

Imposing these assumptions, assuming the models are correctly specified, and normalizing \operatorname{E}[\exp(e_i)] = 1, the conditional mean can be rewritten as

\operatorname{E}[y_i \mid x_i, z_i, v_i] = \exp(x_i b_0 + z_i c_0 + v_i \rho)

If v_i is replaced by the residuals from a first-stage regression of the endogenous variable on the instruments, the remaining parameters can be estimated in a second step by Poisson quasi-maximum likelihood, and a standard test of whether \hat\rho equals zero then serves as a test for endogeneity.
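A minimal sketch of this two-step procedure on simulated data. For identification it assumes that z_i contains an exogenous covariate that enters the outcome and an excluded instrument; all variable names and parameter values are illustrative:

```python
# Two-step control function for the exponential (Poisson) model: z_i is split
# into an included exogenous covariate z1 and an excluded instrument z2 so the
# rank condition holds. Names and parameter values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000

z1 = rng.normal(size=n)                       # exogenous covariate (included)
z2 = rng.normal(size=n)                       # excluded instrument
v = rng.normal(size=n, scale=0.5)             # reduced-form error v_i
x = 0.5 * z1 + 0.8 * z2 + v                   # endogenous regressor x_i
rho = 0.7
a = rho * v + rng.normal(size=n, scale=0.3)   # a_i = v_i * rho + e_i
y = rng.poisson(np.exp(0.2 * x + 0.1 * z1 + a))

# Step 1: OLS of x on the instruments; keep the residuals v_hat.
Z = sm.add_constant(np.column_stack([z1, z2]))
v_hat = sm.OLS(x, Z).fit().resid

# Step 2: Poisson quasi-maximum likelihood including v_hat; the coefficient on
# v_hat estimates rho, and its t-statistic is the endogeneity test.
X2 = sm.add_constant(np.column_stack([x, z1, v_hat]))
pois = sm.GLM(y, X2, family=sm.families.Poisson()).fit(cov_type="HC0")
print(pois.params)       # const, x, z1, v_hat (approximately rho)
print(pois.tvalues[-1])  # large |t| indicates endogeneity of x
```

Under the null hypothesis \rho = 0, the usual t-statistic on the included residual is valid without any generated-regressor correction, which is what makes the second-stage t-test a convenient endogeneity test; when \rho \neq 0, the standard errors of the remaining coefficients do require the adjustment discussed above.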
The original Heckit procedure makes distributional assumptions about the error terms; more flexible estimation approaches with weaker distributional assumptions have since been established.[9] Furthermore, Blundell and Powell show how the control function approach can be particularly helpful in models with nonadditive errors, such as discrete choice models.[10] This latter approach, however, implicitly makes strong distributional and functional form assumptions.
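The Heckman correction mentioned above can itself be written in this two-step control function form: a first-stage probit for participation yields the inverse Mills ratio, which is then included as the control function in the outcome regression on the participating sample. A minimal sketch on simulated, jointly normal data (names and values are illustrative; the second-stage standard errors would again need a generated-regressor adjustment):

```python
# Classic Heckit two-step as a control function: a first-stage probit for
# participation supplies the inverse Mills ratio, which enters the outcome
# regression as the control function. Simulated data; names are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 10_000

x = rng.normal(size=n)                          # exogenous covariate
z = rng.normal(size=n)                          # variable excluded from the outcome
eps, u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
d = (0.5 * x + 1.0 * z + eps > 0).astype(int)   # participation indicator D
y = 1.0 + 2.0 * x + u                           # outcome, used only where D = 1

# Step 1: probit of D on (1, x, z) and the implied inverse Mills ratio.
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(d, W).fit(disp=0)
index = W @ probit.params
mills = norm.pdf(index) / norm.cdf(index)

# Step 2: OLS on the participating sample, with the Mills ratio as the control
# function absorbing E[U | D = 1, X, Z].
sel = d == 1
X2 = sm.add_constant(np.column_stack([x[sel], mills[sel]]))
print(sm.OLS(y[sel], X2).fit().params)          # roughly (1.0, 2.0, 0.6)
```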