Panel analysis explained

Panel (data) analysis is a statistical method, widely used in social science, epidemiology, and econometrics, to analyze two-dimensional (typically cross-sectional and longitudinal) panel data.[1] The data are usually collected over time and over the same individuals, and then a regression is run over these two dimensions. Multidimensional analysis is an econometric method in which data are collected over more than two dimensions (typically, time, individuals, and some third dimension).[2]

A common panel data regression model looks like

y_{it} = a + b x_{it} + \varepsilon_{it},

where y is the dependent variable, x is the independent variable, a and b are coefficients, and i and t are indices for individuals and time. The error \varepsilon_{it} is very important in this analysis. Assumptions about the error term determine whether we speak of fixed effects or random effects. In a fixed effects model, \varepsilon_{it} is assumed to vary non-stochastically over i or t, making the fixed effects model analogous to a dummy variable model in one dimension. In a random effects model, \varepsilon_{it} is assumed to vary stochastically over i or t, requiring special treatment of the error variance matrix.[3]
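To make the distinction concrete, the following Python sketch (not taken from any of the cited references) simulates a small balanced panel and compares pooled OLS with the fixed-effects "within" estimator; the column names, the data-generating process, and the parameter values are purely illustrative assumptions.

    # Minimal sketch: pooled OLS vs. fixed-effects (within) estimation on a
    # simulated balanced panel.  All names and values are illustrative.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n, T = 100, 8                                   # individuals, time periods
    ids = np.repeat(np.arange(n), T)
    alpha_i = rng.normal(0.0, 1.0, n)[ids]          # unit-specific effect
    x = 0.5 * alpha_i + rng.normal(0.0, 1.0, n * T) # regressor correlated with the effect
    y = 1.0 + 2.0 * x + alpha_i + rng.normal(0.0, 1.0, n * T)
    df = pd.DataFrame({"id": ids, "y": y, "x": x})

    # Pooled OLS: ignore the panel structure entirely.
    X = np.column_stack([np.ones(n * T), df["x"]])
    b_pooled = np.linalg.lstsq(X, df["y"], rcond=None)[0]

    # Fixed effects (within estimator): demean y and x within each individual,
    # which sweeps out any time-invariant unit effect.
    y_dm = df["y"] - df.groupby("id")["y"].transform("mean")
    x_dm = df["x"] - df.groupby("id")["x"].transform("mean")
    b_fe = float(x_dm @ y_dm) / float(x_dm @ x_dm)

    print("pooled OLS slope:", b_pooled[1])  # biased here: x correlates with alpha_i
    print("within (FE) slope:", b_fe)        # close to the true value 2.0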

Panel data analysis has three more-or-less independent approaches: independently pooled panels, fixed effects models, and random effects models.

The selection between these methods depends upon the objective of the analysis and the problems concerning the exogeneity of the explanatory variables.

Independently pooled panels

See also: Partial likelihood methods for panel data. Key assumption:
There are no unique attributes of individuals within the measurement set, and no universal effects across time.

Fixed effect models

Key assumption:
There are unique attributes of individuals that do not vary over time. That is, the unique attributes for a given individual i are time-invariant. These attributes may or may not be correlated with the individual dependent variables y_i. To test whether fixed effects, rather than random effects, is needed, the Durbin–Wu–Hausman test can be used.
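As a rough illustration of how the Durbin–Wu–Hausman comparison can be computed, the sketch below takes fixed-effects and random-effects coefficient vectors and their covariance matrices as assumed inputs (they could come from any panel estimator and are not computed here).

    # Sketch of the Durbin–Wu–Hausman statistic.  b_fe/b_re are coefficient
    # vectors and V_fe/V_re their covariance matrices, obtained elsewhere.
    import numpy as np
    from scipy import stats

    def hausman(b_fe, V_fe, b_re, V_re):
        # Difference of the two estimates; under the null hypothesis
        # (RE is consistent and efficient) its covariance is V_fe - V_re.
        d = np.asarray(b_fe) - np.asarray(b_re)
        V_d = np.asarray(V_fe) - np.asarray(V_re)
        stat = float(d @ np.linalg.solve(V_d, d))
        p_value = stats.chi2.sf(stat, d.size)
        return stat, p_value

    # A large statistic (small p-value) rejects the null, favouring fixed effects.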

Random effects models

See main article: Random effects model. Key assumption:
There are unique, time-constant attributes of individuals that are not correlated with the individual regressors. Pooled OLS can be used to derive unbiased and consistent estimates of parameters even when time-constant attributes are present, but random effects will be more efficient.

The random effects model is a feasible generalised least squares (FGLS) technique which is asymptotically more efficient than pooled OLS when time-constant attributes are present. Random effects adjusts for the serial correlation which is induced by unobserved time-constant attributes.
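The quasi-demeaning step behind this FGLS procedure can be sketched as follows; the variance components are assumed to have been estimated elsewhere, the panel is assumed balanced, and the column names are illustrative.

    # Sketch of the random-effects quasi-demeaning (FGLS) transformation.
    # sigma2_c (unit effect variance) and sigma2_u (idiosyncratic variance)
    # are assumed to have been estimated beforehand.
    import numpy as np
    import pandas as pd

    def random_effects_fit(df, sigma2_c, sigma2_u):
        # Balanced panel assumed: every unit observed for the same T periods.
        T = int(df.groupby("id").size().iloc[0])
        theta = 1.0 - np.sqrt(sigma2_u / (sigma2_u + T * sigma2_c))

        # Quasi-demeaning: subtract only a fraction theta of each unit's mean.
        # theta -> 1 recovers the within (FE) estimator, theta -> 0 pooled OLS.
        y_q = df["y"] - theta * df.groupby("id")["y"].transform("mean")
        x_q = df["x"] - theta * df.groupby("id")["x"].transform("mean")
        const = np.full(len(df), 1.0 - theta)

        X = np.column_stack([const, x_q])
        return np.linalg.lstsq(X, y_q, rcond=None)[0]   # [intercept, slope]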

Models with instrumental variables

In the standard random effects (RE) and fixed effects (FE) models, independent variables are assumed to be uncorrelated with error terms. Provided valid instruments are available, the RE and FE methods extend to the case where some of the explanatory variables are allowed to be endogenous. As in the exogenous setting, the RE model with instrumental variables (REIV) requires more stringent assumptions than the FE model with instrumental variables (FEIV), but it tends to be more efficient under appropriate conditions.[4]

To fix ideas, consider the following model:

y_{it} = x_{it}\beta + c_i + u_{it},

where c_i is the unobserved unit-specific time-invariant effect (call it the unobserved effect) and x_{it} can be correlated with u_{is} for s possibly different from t. Suppose there exists a set of valid instruments z_i = (z_{i1}, \ldots, z_{iT}).

In the REIV setting, key assumptions include that z_i is uncorrelated with c_i as well as with u_{it} for t = 1, \ldots, T. In fact, for the REIV estimator to be efficient, conditions stronger than uncorrelatedness between instruments and the unobserved effect are necessary.

On the other hand, the FEIV estimator only requires that instruments be exogenous with respect to the error terms after conditioning on the unobserved effect, i.e. E[u_{it} \mid z_i, c_i] = 0.[1] The FEIV condition allows for arbitrary correlation between instruments and the unobserved effect. However, this generality does not come for free: time-invariant explanatory and instrumental variables are not allowed. As in the usual FE method, the estimator uses time-demeaned variables to remove the unobserved effect. Therefore, the FEIV estimator would be of limited use if the variables of interest include time-invariant ones.

The above discussion has a parallel in the exogenous case of RE and FE models. In the exogenous case, RE assumes uncorrelatedness between explanatory variables and the unobserved effect, while FE allows for arbitrary correlation between the two. Similar to the standard case, REIV tends to be more efficient than FEIV provided that the appropriate assumptions hold.
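A minimal sketch of the FEIV idea, assuming a single endogenous regressor x, a single instrument z, and hypothetical column names, is the within transformation followed by a simple instrumental-variable step.

    # Sketch of a fixed-effects IV (FEIV) estimate with one endogenous
    # regressor and one instrument.  Column names are assumptions.
    import pandas as pd

    def feiv_slope(df):
        # Time-demean within each unit; any time-invariant effect (and any
        # time-invariant instrument) is wiped out by this transformation.
        demean = lambda s: s - s.groupby(df["id"]).transform("mean")
        y_dm = demean(df["y"])
        x_dm = demean(df["x"])
        z_dm = demean(df["z"])

        # With one endogenous regressor and one instrument, 2SLS on the
        # demeaned data reduces to a simple instrumental-variable ratio.
        return float(z_dm @ y_dm) / float(z_dm @ x_dm)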

Dynamic panel models

See also: Dynamic unobserved effects model. In contrast to the standard panel data model, a dynamic panel model also includes lagged values of the dependent variable as regressors. For example, including one lag of the dependent variable generates:

y_{it} = a + b x_{it} + \rho y_{i,t-1} + \varepsilon_{it}

The assumptions of the fixed effect and random effect models are violated in this setting, since the lagged dependent variable is necessarily correlated with the error components. Instead, practitioners use a technique like the Arellano–Bond estimator.
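A full Arellano–Bond GMM implementation is beyond a short example, but the Anderson–Hsiao instrumental-variable idea it builds on can be sketched: first-difference the equation to remove the unobserved effect, then instrument the differenced lag Δy_{i,t-1} with y_{i,t-2}. The sketch below restricts attention to the autoregressive part and assumes a balanced panel with hypothetical columns "id", "t", and "y".

    # Sketch of the Anderson–Hsiao IV estimate of rho in the autoregressive
    # part of the dynamic model.  Column names are illustrative assumptions.
    import pandas as pd

    def anderson_hsiao_rho(df):
        df = df.sort_values(["id", "t"])
        g = df.groupby("id")["y"]
        y_l1 = g.shift(1)                     # y_{i,t-1}
        y_l2 = g.shift(2)                     # y_{i,t-2}

        dy = df["y"] - y_l1                   # first difference removes c_i and a
        dy_l = y_l1 - y_l2                    # endogenous regressor in differences
        z = y_l2                              # instrument: predates the differenced error

        keep = dy.notna() & dy_l.notna() & z.notna()
        dy, dy_l, z = dy[keep], dy_l[keep], z[keep]

        # Simple IV ratio; Arellano–Bond generalises this by using additional
        # lags as instruments in each period, combined by GMM.
        return float(z @ dy) / float(z @ dy_l)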

Notes and References

  1. Maddala, G. S. (2001). Introduction to Econometrics (3rd ed.). New York: Wiley. ISBN 0-471-49728-2.
  2. Davies, A.; Lahiri, K. (1995). "A New Framework for Testing Rationality and Measuring Aggregate Shocks Using Panel Data". Journal of Econometrics 68 (1): 205–227. doi:10.1016/0304-4076(94)01649-K.
  3. Hsiao, C.; Lahiri, K.; Lee, L.; Pesaran, M. H. (eds.) (1999). Analysis of Panels and Limited Dependent Variable Models. Cambridge: Cambridge University Press. ISBN 0-521-63169-6.
  4. Wooldridge, J. M. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press.