In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).
When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7.[1]
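The stated threshold follows from comparing the two penalty terms directly (the AIC penalty is 2k, the BIC penalty is k\ln(n)):

k\ln(n) > 2k \iff \ln(n) > 2 \iff n > e^2 \approx 7.39,

so the BIC penalizes additional parameters more heavily than the AIC for all sample sizes n \geq 8.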
The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[2] where he gave a Bayesian argument for adopting it.
The BIC is formally defined as[3]
BIC = k\ln(n) - 2\ln(\widehat{L}),

where \widehat{L} is the maximized value of the likelihood function of the model M, i.e. \widehat{L} = p(x\mid\widehat\theta, M), with \widehat\theta the parameter values that maximize the likelihood function; x is the observed data; n is the number of data points in x (the sample size); and k is the number of parameters estimated by the model. For example, in multiple linear regression the estimated parameters are the intercept, the q slope parameters, and the constant variance of the errors, so k = q + 2.
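As a minimal illustration of the definition (the log-likelihood values and parameter counts below are hypothetical, not from any particular dataset):

```python
import math

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian information criterion: BIC = k*ln(n) - 2*ln(L-hat).

    log_likelihood -- maximized log-likelihood ln(L-hat) of the fitted model
    k              -- number of parameters estimated by the model
    n              -- number of observations (sample size)
    """
    return k * math.log(n) - 2.0 * log_likelihood

# Hypothetical example: two models fitted to the same data (n = 100).
bic_a = bic(-210.4, k=3, n=100)
bic_b = bic(-208.9, k=5, n=100)
print(bic_a, bic_b)  # the model with the lower BIC is preferred
```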
The BIC can be derived by integrating out the parameters of the model using Laplace's method, starting with the following model evidence:[4][5]
p(x\mid M) = \int p(x\mid\theta, M)\,\pi(\theta\mid M)\,d\theta,

where \pi(\theta\mid M) is the prior for the parameters \theta under model M.
The log-likelihood, \ln(p(x\mid\theta, M)), is then expanded to a second-order Taylor series about the maximum likelihood estimate \widehat\theta, assuming it is twice differentiable, as follows:

\ln(p(x\mid\theta, M)) = \ln(\widehat{L}) - \frac{n}{2}(\theta - \widehat\theta)^{\mathsf{T}} \mathcal{I}(\theta)\,(\theta - \widehat\theta) + R(x, \theta),

where \mathcal{I}(\theta) is the average observed information per observation and R(x, \theta) denotes the residual term. To the extent that R(x, \theta) is negligible and \pi(\theta\mid M) is relatively linear near \widehat\theta, we can integrate out \theta to obtain the following:
p(x\mid M) \approx \widehat{L}\left(\frac{2\pi}{n}\right)^{k/2} |\mathcal{I}(\widehat\theta)|^{-1/2}\, \pi(\widehat\theta).
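This step is the standard multivariate Gaussian (Laplace) integral; with R(x,\theta) dropped and \pi(\theta\mid M) treated as approximately constant near \widehat\theta, the identity being used is

\int \exp\left(-\frac{n}{2}(\theta - \widehat\theta)^{\mathsf{T}} \mathcal{I}(\widehat\theta)\,(\theta - \widehat\theta)\right) d\theta = \left(\frac{2\pi}{n}\right)^{k/2} |\mathcal{I}(\widehat\theta)|^{-1/2},

where k is the dimension of \theta.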
As n \to \infty, the factors |\mathcal{I}(\widehat\theta)| and \pi(\widehat\theta) can be ignored, since they are O(1) with respect to n. Thus,
p(x\mid M) = \exp\left(\ln\widehat{L} - \frac{k}{2}\ln(n) + O(1)\right) = \exp\left(-\frac{BIC}{2} + O(1)\right),

where BIC is defined as above, and \widehat{L} either (a) is the Bayesian posterior mode or (b) uses the maximum likelihood estimate and the prior \pi(\theta\mid M) has nonzero slope at the MLE. Then the posterior probability of model M given the data is

p(M\mid x) \propto p(x\mid M)\,p(M) \approx \exp\left(-\frac{BIC}{2}\right)p(M).
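In practice this approximation is often used to turn BIC values into approximate posterior model probabilities. A minimal sketch, assuming equal prior probabilities p(M) for the candidate models and using hypothetical BIC values:

```python
import numpy as np

# Hypothetical BIC values for three candidate models fitted to the same data.
bic_values = np.array([1002.3, 998.7, 1005.1])

# exp(-BIC/2), shifted by the minimum BIC for numerical stability; with equal
# model priors p(M), normalizing gives approximate posterior model probabilities.
rel = np.exp(-0.5 * (bic_values - bic_values.min()))
posterior = rel / rel.sum()
print(posterior)  # the second model carries most of the approximate posterior mass
```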
When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance \sigma_e^2 and an increasing function of k: unexplained variation in the dependent variable and the number of explanatory variables both increase its value. Hence, lower BIC implies either fewer explanatory variables, better fit, or both.
It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.
The BIC suffers from two main limitations:[6] the approximation above is valid only for sample sizes n much larger than the number k of parameters in the model, and the BIC cannot handle complex collections of models, as in the variable-selection (feature-selection) problem in high dimensions.
Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution and the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (up to an additive constant, which depends only on n and not on the model):[7]
BIC = n\ln(\widehat{\sigma}_e^2) + k\ln(n),

where \widehat{\sigma}_e^2 is the error variance, defined in this case as

\widehat{\sigma}_e^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \widehat{x}_i)^2,
which is a biased estimator for the true variance.
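The connection to the general definition can be checked directly: under the stated normality assumption, the maximized log-likelihood satisfies

-2\ln(\widehat{L}) = n\ln(2\pi) + n\ln(\widehat{\sigma}_e^2) + \frac{1}{\widehat{\sigma}_e^2}\sum_{i=1}^{n}(x_i - \widehat{x}_i)^2 = n\ln(\widehat{\sigma}_e^2) + n\bigl(\ln(2\pi) + 1\bigr),

so k\ln(n) - 2\ln(\widehat{L}) differs from n\ln(\widehat{\sigma}_e^2) + k\ln(n) only by n(\ln(2\pi) + 1), which depends on n but not on the model.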
In terms of the residual sum of squares (RSS) the BIC is
BIC = n\ln(\mathrm{RSS}/n) + k\ln(n).
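A minimal numerical sketch of this Gaussian special case, using synthetic data and a simple linear model (the data, the ordinary least-squares fit, and k = q + 2 = 3 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0.0, 10.0, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)  # synthetic data from a linear model

# Ordinary least-squares fit of y = b0 + b1 * x.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(np.sum((y - X @ beta) ** 2))           # residual sum of squares

sigma2_hat = rss / n                               # biased ML estimate of the error variance
k = 3                                              # intercept, slope, error variance (k = q + 2)
bic = n * np.log(sigma2_hat) + k * np.log(n)       # equivalently n*log(RSS/n) + k*log(n)
print(bic)
```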
When testing multiple linear models against a saturated model, the BIC can be rewritten in terms of the deviance \chi^2 as

BIC = \chi^2 + k\ln(n),

where k is the number of model parameters in the test.