In econometrics, the truncated normal hurdle model is a variant of the Tobit model and was first proposed by Cragg in 1971.[1]
In a standard Tobit model, represented as

y=(x\beta+u)1[x\beta+u>0], \qquad u\mid x\sim N(0,\sigma^2),

a single coefficient vector \beta drives both the probability of a positive outcome and its conditional magnitude. The partial effects are

\partial P[y>0\mid x]/\partial x_j=\varphi(x\beta/\sigma)\beta_j/\sigma

and

\partial\operatorname{E}[y\mid x,y>0]/\partial x_j=\beta_j\{1-\theta(x\beta/\sigma)\},

where \theta(z)=\lambda(z)[z+\lambda(z)]\;(=-\lambda'(z)) and \lambda(z)=\varphi(z)/\Phi(z) is the inverse Mills ratio. The model therefore imposes two implicit assumptions: (1) each regressor x_j affects P[y>0] and \operatorname{E}[y\mid x,y>0] in the same direction; and (2) the relative effects of any two regressors x_h and x_j on P[y>0] and on \operatorname{E}[y\mid x,y>0] are identical, i.e.

\frac{\partial P[y>0]/\partial x_h}{\partial P[y>0]/\partial x_j}
=\frac{\partial\operatorname{E}[y\mid x,y>0]/\partial x_h}{\partial\operatorname{E}[y\mid x,y>0]/\partial x_j}
=\frac{\beta_h}{\beta_j}.
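As a concrete illustration, the short numerical sketch below evaluates both partial effects with SciPy; the parameter values \beta=(0.5,-1.0), \sigma=2, and the regressor values are hypothetical, chosen only to exhibit restrictions (1) and (2).

```python
# Illustrative check of the two implicit Tobit restrictions (hypothetical values).
import numpy as np
from scipy.stats import norm

beta = np.array([0.5, -1.0])   # hypothetical coefficients (beta_h, beta_j)
sigma = 2.0                    # hypothetical error scale
x = np.array([1.0, 1.5])       # hypothetical regressor values

z = x @ beta / sigma
lam = norm.pdf(z) / norm.cdf(z)      # inverse Mills ratio lambda(z)
theta = lam * (z + lam)              # theta(z) = lambda(z)[z + lambda(z)]

dP = norm.pdf(z) * beta / sigma      # dP[y>0]/dx_j for each regressor
dE = beta * (1.0 - theta)            # dE[y | x, y>0]/dx_j for each regressor

print(np.sign(dP) == np.sign(dE))                        # restriction (1): [True, True]
print(dP[0] / dP[1], dE[0] / dE[1], beta[0] / beta[1])   # restriction (2): all equal -0.5
```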
However, these two implicit assumptions are too strong and inconsistent with many contexts in economics. For instance, when deciding whether to invest in and build a factory, the construction cost might be more influential than the product price; but once the factory has been built, the product price clearly matters more for revenue. Hence, implicit assumption (2) does not match this context.[4] The essence of this issue is that the standard Tobit model imposes a very strong link between the participation decision (y=0 versus y>0) and the amount decision (the magnitude of y when y>0). If a corner solution model is written in the general form

y=s\cdot w,

where s is the participation decision and w is the amount decision, then the standard Tobit model assumes

s=1[x\beta+u>0];
w=x\beta+u.
To make the model compatible with more contexts, a natural improvement is to assume:
s=1[x\gamma+u>0], \qquad u\sim N(0,1);
w=x\beta+e,

where e follows a truncated normal distribution with density \varphi(\cdot)/\Phi(x\beta/\sigma)/\sigma, which restricts w to be positive, and s and w are assumed to be independent conditional on x.
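To make the two-part structure concrete, the following simulation sketch draws s from the probit participation equation and w from the truncated normal amount equation, independently given x. All parameter values and variable names are hypothetical choices for illustration, not part of Cragg's specification.

```python
# Illustrative simulation of the truncated normal hurdle data-generating process.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n = 10_000
x = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept plus one regressor

gamma = np.array([0.2, 0.8])   # hypothetical participation parameters
beta = np.array([1.0, 0.5])    # hypothetical amount parameters
sigma = 1.5                    # hypothetical scale

# Participation decision: s = 1[x*gamma + u > 0], with u ~ N(0, 1) (a probit).
s = (x @ gamma + rng.normal(size=n) > 0).astype(int)

# Amount decision: w = x*beta + e, where e is truncated so that w > 0,
# drawn independently of u given x.
a = (0.0 - x @ beta) / sigma                             # lower truncation point (standardized)
w = truncnorm.rvs(a, np.inf, loc=x @ beta, scale=sigma, random_state=rng)

y = s * w                                                # observed outcome: zero or a positive amount
print("share of zeros:", (y == 0).mean())
print("mean of positive y:", y[y > 0].mean())
```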
This is called the Truncated Normal Hurdle Model, proposed in Cragg (1971). By adding one more parameter vector and detaching the amount decision from the participation decision, the model can fit more contexts. Under this setup, the density of y given x can be written as

f(y\mid x)=[1-\Phi(x\gamma)]^{1[y=0]}\left[\Phi(x\gamma)\,\frac{\varphi\!\left((y-x\beta)/\sigma\right)}{\sigma\,\Phi(x\beta/\sigma)}\right]^{1[y>0]}
From this density representation, it is clear that the model degenerates to the standard Tobit model when
\gamma=\beta/\sigma.
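Indeed, substituting \gamma=\beta/\sigma into the density above cancels the \Phi(x\beta/\sigma) factor:

f(y\mid x)=[1-\Phi(x\beta/\sigma)]^{1[y=0]}\left[\frac{\varphi\!\left((y-x\beta)/\sigma\right)}{\sigma}\right]^{1[y>0]},

which is the censored normal density of the standard Tobit model, so the hurdle model strictly generalizes it.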
The Truncated Normal Hurdle Model is usually estimated through MLE. The log-likelihood function can be written as:
\begin{align}
\ell(\beta,\gamma,\sigma)={}& \sum_{i=1}^{N} 1[y_i=0]\log[1-\Phi(x_i\gamma)]+1[y_i>0]\log[\Phi(x_i\gamma)]\\[5pt]
&{}+1[y_i>0]\left[-\log\left[\Phi\left(\frac{x_i\beta}{\sigma}\right)\right]+\log\left(\varphi\left(\frac{y_i-x_i\beta}{\sigma}\right)\right)-\log(\sigma)\right]
\end{align}
From the log-likelihood function, \gamma can be estimated by a probit model of 1[y_i>0] on x_i using all observations, and (\beta,\sigma) can be estimated by a truncated normal regression using only the observations with y_i>0.
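As a minimal sketch of this estimation, the code below writes the log-likelihood above directly and hands it to SciPy's optimizer; the data arrays x and y could come from the simulation sketch earlier, and all function and variable names are illustrative rather than a fixed API. Because the log-likelihood is additively separable in \gamma and (\beta,\sigma), the two pieces can also be maximized separately, which is exactly the probit-plus-truncated-regression strategy just described.

```python
# Illustrative maximum likelihood estimation of the truncated normal hurdle model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, x, y):
    k = x.shape[1]
    gamma, beta, log_sigma = params[:k], params[k:2 * k], params[-1]
    sigma = np.exp(log_sigma)                  # parameterize log(sigma) to keep sigma > 0
    xg, xb = x @ gamma, x @ beta
    pos = y > 0
    # Probit part in gamma: 1[y=0] log[1 - Phi(x*gamma)] + 1[y>0] log[Phi(x*gamma)]
    ll = np.where(pos, norm.logcdf(xg), norm.logcdf(-xg)).sum()
    # Truncated normal part in (beta, sigma), over the positive observations only
    ll += (norm.logpdf((y[pos] - xb[pos]) / sigma)
           - norm.logcdf(xb[pos] / sigma)
           - np.log(sigma)).sum()
    return -ll

# Example usage with data arrays x (n-by-k) and y (length n):
# k = x.shape[1]
# start = np.zeros(2 * k + 1)                 # zeros for gamma, beta, and log(sigma)
# fit = minimize(neg_loglik, start, args=(x, y), method="BFGS")
# gamma_hat, beta_hat = fit.x[:k], fit.x[k:2 * k]
# sigma_hat = np.exp(fit.x[-1])
```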