In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.[1] The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.[2]
The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin.[3] They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith.[4] Another was proposed by H.O. Hartley in 1958, and by Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5] Yet another was given by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977. Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[6][7][8] following his collaboration with Per Martin-Löf and Anders Martin-Löf.[9][10][11][12][13] The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing EM as an important tool of statistical analysis. See also Meng and van Dyk (1997).
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983.[14] Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin.
The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
Given the statistical model which generates a set $X$ of observed data, a set of unobserved latent data or missing values $Z$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; X, Z) = p(X, Z \mid \boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data

L(\boldsymbol\theta; X) = p(X \mid \boldsymbol\theta) = \int p(X, Z \mid \boldsymbol\theta) \, dZ = \int p(X \mid Z, \boldsymbol\theta) \, p(Z \mid \boldsymbol\theta) \, dZ.

However, this quantity is often intractable, since $Z$ is unobserved and the distribution of $Z$ is unknown before attaining $\boldsymbol\theta$.
The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps:
Expectation step (E step): Define $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ as the expected value of the log likelihood function of $\boldsymbol\theta$, with respect to the current conditional distribution of $Z$ given $X$ and the current estimate of the parameters $\boldsymbol\theta^{(t)}$:

Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \operatorname{E}_{Z \sim p(\cdot \mid X, \boldsymbol\theta^{(t)})} \left[ \log p(X, Z \mid \boldsymbol\theta) \right].

Maximization step (M step): Find the parameters that maximize this quantity:

\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \; Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}).

More succinctly, the two steps can be written as one equation:

\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \; \operatorname{E}_{Z \sim p(\cdot \mid X, \boldsymbol\theta^{(t)})} \left[ \log p(X, Z \mid \boldsymbol\theta) \right].
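To make the two steps concrete, here is a minimal Python sketch for a toy model not taken from the sources above: each observation is the number of heads in 20 tosses of one of two coins with unknown biases, the identity of the coin is the latent variable $Z$, and the two coins are assumed to be chosen with equal, known probability. All names and values are illustrative.

```python
import numpy as np
from scipy.stats import binom

# Toy data: 50 rows, each the number of heads in 20 tosses of a randomly chosen coin.
rng = np.random.default_rng(0)
true_p = np.array([0.8, 0.3])                  # the two unknown heads-probabilities
n_tosses = 20
coins = rng.integers(0, 2, size=50)            # latent coin choices (unobserved)
heads = rng.binomial(n_tosses, true_p[coins])  # observed data X

p = np.array([0.6, 0.5])                       # initial guess theta^(0)
for t in range(200):
    # E step: conditional distribution of Z given X and theta^(t)
    # (posterior probability that each row came from coin 0 or coin 1).
    lik = np.stack([binom.pmf(heads, n_tosses, p[0]),
                    binom.pmf(heads, n_tosses, p[1])])
    resp = lik / lik.sum(axis=0)

    # M step: maximize the expected complete-data log-likelihood Q(theta | theta^(t));
    # for this model it is a responsibility-weighted heads fraction per coin.
    p_new = (resp * heads).sum(axis=1) / (resp * n_tosses).sum(axis=1)

    if np.max(np.abs(p_new - p)) < 1e-8:       # stop when the update stalls
        break
    p = p_new

print(p)  # should land near the true biases, up to relabelling of the coins
```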
The typical models to which EM is applied use $Z$ as a latent variable indicating membership in one of a set of groups: the observed data points $X$ may be discrete or continuous, while the missing values (latent variables) $Z$ are discrete, drawn from a fixed number of values, with one latent variable per observed data point. However, it is possible to apply EM to other sorts of models.
The motivation is as follows. If the value of the parameters $\boldsymbol\theta$ is known, usually the value of the latent variables $Z$ can be found by maximizing the log-likelihood over all possible values of $Z$, either simply by iterating over $Z$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $Z$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm in the case where both $\boldsymbol\theta$ and $Z$ are unknown: first, initialize the parameters $\boldsymbol\theta$ to some random values; compute the probability of each possible value of $Z$, given $\boldsymbol\theta$; then use the just-computed values of $Z$ to compute a better estimate for the parameters $\boldsymbol\theta$; iterate the last two steps until convergence.
The algorithm as just described monotonically increases the observed-data likelihood; equivalently, it monotonically decreases the corresponding cost function (the negative log-likelihood) toward a local minimum.
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates $\boldsymbol\theta^{(t)}$), or applying simulated annealing methods.
EM is especially useful when the likelihood is an exponential family (see Sundberg (2019, Ch. 8) for a comprehensive treatment):[15] the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form update expressions for each step, using the Sundberg formula[16] (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf).[7][8][10][11][12][13]
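As a concrete sketch of this exponential-family setting (an illustration, not drawn from the cited references): suppose counts $y = (y_1, y_2, y_3, y_4)$ are multinomial with cell probabilities $(\tfrac12 + \tfrac{\theta}{4}, \tfrac{1-\theta}{4}, \tfrac{1-\theta}{4}, \tfrac{\theta}{4})$, and the first cell is regarded as the sum of two latent cells with probabilities $\tfrac12$ and $\tfrac{\theta}{4}$, in the spirit of the classical genetic-linkage illustration of EM. The E step replaces the unobserved split of $y_1$ by its expected value (an expected sufficient statistic), and the M step maximizes the complete-data likelihood in closed form. The counts and starting value below are illustrative.

```python
import numpy as np

y = np.array([125.0, 18.0, 20.0, 34.0])   # illustrative observed cell counts
theta = 0.5                                # initial guess

for _ in range(100):
    # E step: expected count falling in the latent theta/4 part of the first cell.
    x12 = y[0] * (theta / 4) / (0.5 + theta / 4)
    # M step: closed-form maximizer of the complete-data log-likelihood,
    # (x12 + y4) log(theta) + (y2 + y3) log(1 - theta) + const.
    theta_new = (x12 + y[3]) / (x12 + y[1] + y[2] + y[3])
    if abs(theta_new - theta) < 1e-10:
        break
    theta = theta_new

print(theta)   # approximately 0.6268 for these counts
```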
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.
Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
Expectation–Maximization works to improve $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ rather than directly improving $\log p(X \mid \boldsymbol\theta)$. Here it is shown that improvements to the former imply improvements to the latter.

For any $Z$ with non-zero probability $p(Z \mid X, \boldsymbol\theta)$, we can write

\log p(X \mid \boldsymbol\theta) = \log p(X, Z \mid \boldsymbol\theta) - \log p(Z \mid X, \boldsymbol\theta).

We take the expectation over possible values of the unknown data $Z$ under the current parameter estimate $\boldsymbol\theta^{(t)}$ by multiplying both sides by $p(Z \mid X, \boldsymbol\theta^{(t)})$ and summing (or integrating) over $Z$. The left-hand side is the expectation of a constant, so we get

\begin{align} \log p(X \mid \boldsymbol\theta) &= \sum_Z p(Z \mid X, \boldsymbol\theta^{(t)}) \log p(X, Z \mid \boldsymbol\theta) - \sum_Z p(Z \mid X, \boldsymbol\theta^{(t)}) \log p(Z \mid X, \boldsymbol\theta) \\ &= Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}), \end{align}

where $H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ is defined by the negated sum it is replacing. This last equation holds for every value of $\boldsymbol\theta$, including $\boldsymbol\theta = \boldsymbol\theta^{(t)}$:

\log p(X \mid \boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).

Subtracting this last equation from the previous one gives

\log p(X \mid \boldsymbol\theta) - \log p(X \mid \boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).

However, Gibbs' inequality tells us that $H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)})$, so we can conclude that

\log p(X \mid \boldsymbol\theta) - \log p(X \mid \boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).

In words, choosing $\boldsymbol\theta$ to improve $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ causes $\log p(X \mid \boldsymbol\theta)$ to improve at least as much.
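The inequality above can also be checked numerically. The sketch below does so for a toy mixture of two known Bernoulli densities whose only unknown parameter is the mixing weight; the model, data, and tolerance are illustrative and not taken from the sources.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.9, 0.2])                                 # known component densities
x = rng.binomial(1, p[rng.integers(0, 2, size=200)])     # observed binary data X
f = np.where(x == 1, p[:, None], 1 - p[:, None])         # f[j, i] = p(x_i | Z_i = j)

def log_marginal(th):
    # log p(X | theta) for mixing weight theta
    return np.log(th * f[0] + (1 - th) * f[1]).sum()

def Q(th, th_t):
    # expected complete-data log-likelihood Q(theta | theta^(t))
    w = np.array([th_t, 1 - th_t])[:, None] * f
    T = w / w.sum(axis=0)                                # posterior p(Z_i = j | x_i, theta^(t))
    return (T * (np.log(np.array([th, 1 - th]))[:, None] + np.log(f))).sum()

for _ in range(1000):
    th, th_t = rng.uniform(0.01, 0.99, size=2)
    lhs = log_marginal(th) - log_marginal(th_t)
    rhs = Q(th, th_t) - Q(th_t, th_t)
    assert lhs >= rhs - 1e-9                             # the bound derived above
print("bound verified on 1000 random parameter pairs")
```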
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent.[18][19] Consider the function

F(q, \theta) := \operatorname{E}_q [ \log L(\theta; x, Z) ] + H(q),

where $q$ is an arbitrary probability distribution over the unobserved data $Z$ and $H(q)$ is the entropy of the distribution $q$. This function can be written as

F(q, \theta) = -D_{KL}\big( q \parallel p_{Z \mid X}(\cdot \mid x; \theta) \big) + \log L(\theta; x),

where $p_{Z \mid X}(\cdot \mid x; \theta)$ is the conditional distribution of the unobserved data given the observed data $x$ and $D_{KL}$ is the Kullback–Leibler divergence.

Then the steps in the EM algorithm may be viewed as:

Expectation step: Choose $q$ to maximize $F$:
q^{(t)} = \underset{q}{\operatorname{arg\,max}} \; F(q, \theta^{(t)}).

Maximization step: Choose $\theta$ to maximize $F$:
\theta^{(t+1)} = \underset{\theta}{\operatorname{arg\,max}} \; F(q^{(t)}, \theta).
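Since the Kullback–Leibler divergence is non-negative and vanishes exactly when its two arguments coincide, these two steps reduce to the E and M steps of the previous sections (shown here for discrete $Z$):

```latex
% E step: only the KL term of F depends on q, so the maximizer is the exact posterior
q^{(t)} = \underset{q}{\operatorname{arg\,max}}\; F(q, \theta^{(t)})
        = p_{Z\mid X}\!\left(\cdot \mid x; \theta^{(t)}\right),
\qquad\text{since } D_{KL}(q \parallel p) \ge 0 \text{ with equality iff } q = p.

% M step: with q^{(t)} held fixed at this posterior,
F\!\left(q^{(t)}, \theta\right)
  = \operatorname{E}_{q^{(t)}}\!\left[\log L(\theta; x, Z)\right] + H\!\left(q^{(t)}\right)
  = Q\!\left(\theta \mid \theta^{(t)}\right) + \text{const},
% so maximizing F over theta is exactly the M step.
```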
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
Filtering and smoothing EM algorithms arise by repeating this two-step procedure: an E step, in which a Kalman filter or a minimum-variance smoother designed with the current parameter estimates is run to obtain updated state estimates, and an M step, in which the filtered or smoothed state estimates are used within maximum-likelihood calculations to obtain updated parameter estimates.
Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation

\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N (z_k - \widehat{x}_k)^2,

where $\widehat{x}_k$ are scalar output estimates calculated by a filter or a smoother from $N$ scalar measurements $z_k$. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by

\widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N (\widehat{x}_{k+1} - \widehat{F}\,\widehat{x}_k)^2,

where $\widehat{x}_k$ and $\widehat{x}_{k+1}$ are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via

\widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1} - \widehat{F}\,\widehat{x}_k)\,\widehat{x}_k}{\sum_{k=1}^N \widehat{x}_k^2}.
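For illustration, the two variance updates above are straightforward to compute once the state estimates are available. A minimal Python sketch follows; the array names and the assumption that the smoother supplies state estimates of length N+1 are illustrative.

```python
import numpy as np

def noise_variance_updates(z: np.ndarray, x_hat: np.ndarray, F_hat: float):
    """M-step noise statistics, given N measurements z and filtered/smoothed
    scalar state estimates x_hat (length N+1 so that x_hat[k+1] exists)."""
    N = len(z)
    # measurement-noise variance: mean squared difference between data and estimates
    sigma2_v = np.mean((z - x_hat[:N]) ** 2)
    # process-noise variance for the first-order autoregression x_{k+1} = F x_k + w_k
    sigma2_w = np.mean((x_hat[1:N + 1] - F_hat * x_hat[:N]) ** 2)
    return sigma2_v, sigma2_w
```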
The convergence of parameter estimates such as those above is well studied.[25][26][27][28]
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson).[29] Also, EM can be used with constrained estimation methods.
The parameter-expanded expectation maximization (PX-EM) algorithm often provides a speed-up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[30]
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed.[31] It can itself be extended into the expectation conditional maximization either (ECME) algorithm.[32]
This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function F is sought for both the E step and the M step, as described in the As a maximization–maximization procedure section.[18] GEM has also been developed for distributed environments, with promising results.[33]
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[34] and therefore use any machinery developed in the more general case.
The Q-function used in the EM algorithm is based on the log likelihood; the algorithm is therefore regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be expressed exactly as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step, and its maximization is a generalized M step. This pair is called the α-EM algorithm,[35] which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm, α-HMM.[36]
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
Let $x = (x_1, x_2, \ldots, x_n)$ be a sample of $n$ independent observations from a mixture of two multivariate normal distributions of dimension $d$, and let $z = (z_1, z_2, \ldots, z_n)$ be the latent variables that determine the component from which each observation originates:

X_i \mid (Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1, \Sigma_1) \quad \text{and} \quad X_i \mid (Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2, \Sigma_2),

where

\operatorname{P}(Z_i = 1) = \tau_1 \quad \text{and} \quad \operatorname{P}(Z_i = 2) = \tau_2 = 1 - \tau_1.

The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:

\theta = (\boldsymbol{\tau}, \boldsymbol{\mu}_1, \boldsymbol{\mu}_2, \Sigma_1, \Sigma_2),

where the incomplete-data likelihood function is

L(\theta; x) = \prod_{i=1}^n \sum_{j=1}^2 \tau_j \, f(x_i; \boldsymbol{\mu}_j, \Sigma_j),

and the complete-data likelihood function is

L(\theta; x, z) = p(x, z \mid \theta) = \prod_{i=1}^n \prod_{j=1}^2 \left[ f(x_i; \boldsymbol{\mu}_j, \Sigma_j) \, \tau_j \right]^{I(z_i = j)},

or

L(\theta; x, z) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 I(z_i = j) \left[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2} (x_i - \boldsymbol{\mu}_j)^\top \Sigma_j^{-1} (x_i - \boldsymbol{\mu}_j) - \tfrac{d}{2} \log(2\pi) \right] \right\},

where $I$ is an indicator function and $f$ is the probability density function of a multivariate normal.

In the last equality, for each $i$, one indicator $I(z_i = j)$ is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
Given our current estimate of the parameters $\theta^{(t)}$, the conditional distribution of the $Z_i$ is determined by Bayes' theorem to be the proportional height of the normal density weighted by τ:

T_{j,i}^{(t)} := \operatorname{P}(Z_i = j \mid X_i = x_i; \theta^{(t)}) = \frac{\tau_j^{(t)} \, f(x_i; \boldsymbol{\mu}_j^{(t)}, \Sigma_j^{(t)})}{\tau_1^{(t)} \, f(x_i; \boldsymbol{\mu}_1^{(t)}, \Sigma_1^{(t)}) + \tau_2^{(t)} \, f(x_i; \boldsymbol{\mu}_2^{(t)}, \Sigma_2^{(t)})}.

These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function defined below).
This E step corresponds with setting up this function for Q:
\begin{align}Q(\theta\mid\theta(t)) &=
\operatorname{E} | |
Z\midX=x;\theta(t) |
[logL(\theta;x,Z)]\\ &=
\operatorname{E} | |
Z\midX=x;\theta(t) |
[log
n | |
\prod | |
i=1 |
L(\theta;xi,Zi)]\\ &=
\operatorname{E} | |
Z\midX=x;\theta(t) |
n | |
[\sum | |
i=1 |
logL(\theta;xi,Zi)]\\ &=
n\operatorname{E} | |||||||||||||
\sum | |||||||||||||
|
[logL(\theta;xi,Zi)]\\ &=
n | |
\sum | |
i=1 |
2 | |
\sum | |
j=1 |
P(Zi=j\midXi=xi;\theta(t))logL(\thetaj;xi,j)\\ &=
n | |
\sum | |
i=1 |
2 | |
\sum | |
j=1 |
(t) | |
T | |
j,i |
[log\tauj-\tfrac{1}{2}log|\Sigmaj|-\tfrac{1}{2}(xi-\boldsymbol{\mu}
-1 | |
j |
(xi-\boldsymbol{\mu}j)-\tfrac{d}{2}log(2\pi)]. \end{align}
The expectation of $\log L(\theta; x_i, Z_i)$ inside the sum is taken with respect to the posterior density $P(Z_i \mid X_i = x_i; \theta^{(t)})$, which may be different for each $x_i$ of the training set. Everything in the E step is known before the step is taken except $T_{j,i}$, which is computed according to the equation at the beginning of the E step section.
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.
$Q(\theta \mid \theta^{(t)})$ being quadratic in form means that determining the maximizing values of $\theta$ is relatively straightforward. Also, $\tau$, $(\boldsymbol{\mu}_1, \Sigma_1)$ and $(\boldsymbol{\mu}_2, \Sigma_2)$ may all be maximized independently since they appear in separate linear terms.

To begin, consider $\tau$, which has the constraint $\tau_1 + \tau_2 = 1$:
\begin{align} \boldsymbol{\tau}^{(t+1)} &= \underset{\boldsymbol{\tau}}{\operatorname{arg\,max}} \; Q(\theta \mid \theta^{(t)}) \\ &= \underset{\boldsymbol{\tau}}{\operatorname{arg\,max}} \; \left\{ \left[ \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[ \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2 \right\}. \end{align}

This has the same form as the maximum likelihood estimate for the binomial distribution, so

\tau_j^{(t+1)} = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n \left( T_{1,i}^{(t)} + T_{2,i}^{(t)} \right)} = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.
For the next estimates of $(\boldsymbol{\mu}_1, \Sigma_1)$:

\begin{align} (\boldsymbol{\mu}_1^{(t+1)}, \Sigma_1^{(t+1)}) &= \underset{\boldsymbol{\mu}_1, \Sigma_1}{\operatorname{arg\,max}} \; Q(\theta \mid \theta^{(t)}) \\ &= \underset{\boldsymbol{\mu}_1, \Sigma_1}{\operatorname{arg\,max}} \; \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| - \tfrac{1}{2} (x_i - \boldsymbol{\mu}_1)^\top \Sigma_1^{-1} (x_i - \boldsymbol{\mu}_1) \right\}. \end{align}

This has the same form as a weighted maximum likelihood estimate for a normal distribution, so

\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} x_i}{\sum_{i=1}^n T_{1,i}^{(t)}} \quad \text{and} \quad \Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (x_i - \boldsymbol{\mu}_1^{(t+1)})(x_i - \boldsymbol{\mu}_1^{(t+1)})^\top}{\sum_{i=1}^n T_{1,i}^{(t)}},

and, by symmetry,

\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} x_i}{\sum_{i=1}^n T_{2,i}^{(t)}} \quad \text{and} \quad \Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (x_i - \boldsymbol{\mu}_2^{(t+1)})(x_i - \boldsymbol{\mu}_2^{(t+1)})^\top}{\sum_{i=1}^n T_{2,i}^{(t)}}.
Conclude the iterative process if

\operatorname{E}_{Z \mid \theta^{(t)}, x} [\log L(\theta^{(t)}; x, Z)] \leq \operatorname{E}_{Z \mid \theta^{(t-1)}, x} [\log L(\theta^{(t-1)}; x, Z)] + \varepsilon

for $\varepsilon$ below some preset threshold.
The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.
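The updates above translate directly into code. Below is a minimal NumPy/SciPy sketch for the two-component case; the synthetic data, initial values, and stopping tolerance are illustrative, and extending it to more components only requires enlarging the parameter arrays.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Synthetic data from a two-component mixture (illustrative).
n, d = 500, 2
z_true = rng.integers(0, 2, size=n)
means_true = np.array([[0.0, 0.0], [3.0, 3.0]])
x = means_true[z_true] + rng.normal(size=(n, d))

# Initial parameter guesses theta^(0).
tau = np.array([0.5, 0.5])
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])
sigma = np.array([np.eye(d), np.eye(d)])

prev_ll = -np.inf
for t in range(500):
    # E step: membership probabilities T[j, i] = P(Z_i = j | X_i = x_i; theta^(t)).
    dens = np.stack([
        tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
        for j in range(2)
    ])                                   # shape (2, n)
    T = dens / dens.sum(axis=0)

    # M step: weighted-MLE updates for the mixing weights, means, and covariances.
    Nj = T.sum(axis=1)                   # effective number of points per component
    tau = Nj / n
    mu = (T @ x) / Nj[:, None]
    for j in range(2):
        diff = x - mu[j]
        sigma[j] = (T[j, :, None] * diff).T @ diff / Nj[j]

    # Stop when the observed-data log-likelihood no longer improves appreciably.
    ll = np.log(dens.sum(axis=0)).sum()
    if ll - prev_ll < 1e-8:
        break
    prev_ll = ll

print(tau, mu, sep="\n")
```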
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[37] Special cases of this model include censored or truncated observations from one normal distribution.[37]
EM typically converges to a local optimum, not necessarily the global optimum, and with no bound on the convergence rate in general. It can perform arbitrarily poorly in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods with guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches[38] or the so-called spectral techniques.[39][40] Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs, etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.