Markov chain Monte Carlo explained

In statistics, Markov chain Monte Carlo (MCMC) is a class of algorithms used to draw samples from a probability distribution. Given a probability distribution, one can construct a Markov chain whose elements' distribution approximates it – that is, the Markov chain's equilibrium distribution matches the target distribution. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution.

Markov chain Monte Carlo methods are used to study probability distributions that are too complex or too high-dimensional to study with analytic techniques alone. Various algorithms exist for constructing such Markov chains, including the Metropolis–Hastings algorithm.
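As an illustration of the idea, here is a minimal sketch of the Metropolis–Hastings algorithm in Python. The standard-normal target, proposal scale, and function names are illustrative choices, not taken from the references:

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis-Hastings: draws correlated samples from a
    distribution known only up to a normalizing constant, via its log-density."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)  # symmetric random-walk proposal
        # Accept with probability min(1, p(proposal) / p(x))
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal; the normalizing constant is never needed
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
```

The longer the chain runs, the closer the empirical distribution of `samples` gets to the target, matching the convergence property described above.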

Applications

MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics,[1] computational biology[2] and computational linguistics.[3] [4]

In Bayesian statistics, Markov chain Monte Carlo methods are typically used to calculate moments and credible intervals of posterior probability distributions. The use of MCMC methods makes it possible to compute large hierarchical models that require integrations over hundreds to thousands of unknown parameters.[5]
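For instance, once posterior draws are available, moments and credible intervals reduce to simple summaries of the sample. In the sketch below, synthetic normal draws merely stand in for real MCMC output:

```python
import random

rng = random.Random(1)
# Stand-in for MCMC output: 20,000 draws from a posterior (normal, for illustration)
draws = sorted(rng.gauss(2.0, 0.5) for _ in range(20000))

# Posterior mean: the average of the draws
posterior_mean = sum(draws) / len(draws)

# 95% credible interval: the 2.5th and 97.5th percentiles of the draws
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]
```

The same recipe applies coordinate-wise in hierarchical models with many parameters, which is what makes MCMC attractive there.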

In rare event sampling, they are also used for generating samples that gradually populate the rare failure region.

General explanation

Markov chain Monte Carlo methods create samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance.

In practice, an ensemble of chains is generally developed, starting from a set of arbitrarily chosen points that are sufficiently distant from each other. These chains are stochastic processes of "walkers" that move around randomly according to an algorithm that looks for places with a reasonably high contribution to the integral to move into next, assigning those moves higher probabilities.

Random walk Monte Carlo methods are a kind of random simulation or Monte Carlo method. However, whereas the random samples of the integrand used in a conventional Monte Carlo integration are statistically independent, those used in MCMC are autocorrelated. The correlation between samples introduces the need to use the Markov chain central limit theorem when estimating the error of mean values.
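The practical consequence of that autocorrelation can be sketched as follows. An AR(1) process stands in for autocorrelated MCMC output, and the integrated autocorrelation time converts the chain length into an effective number of independent samples (the truncation at lag 80 is an illustrative choice):

```python
import random

rng = random.Random(0)
# An AR(1) process x_t = phi * x_{t-1} + noise stands in for autocorrelated
# MCMC output; its lag-k autocorrelation is phi^k
phi, x = 0.9, 0.0
chain = []
for _ in range(50000):
    x = phi * x + rng.gauss(0.0, 1.0)
    chain.append(x)

n = len(chain)
mean = sum(chain) / n
var = sum((v - mean) ** 2 for v in chain) / n

def autocorr(lag):
    """Sample autocorrelation of the chain at the given lag."""
    cov = sum((chain[i] - mean) * (chain[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

# Integrated autocorrelation time tau: var(sample mean) ~ (var / n) * tau,
# so the chain behaves like n / tau independent draws
tau = 1.0 + 2.0 * sum(autocorr(k) for k in range(1, 80))
ess = n / tau  # effective sample size
```

Here `ess` is far smaller than the chain length, which is why the error of the sample mean is governed by the Markov chain central limit theorem rather than the ordinary one.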

These algorithms create Markov chains such that they have an equilibrium distribution which is proportional to the function given.

Reducing correlation

While MCMC methods were created to address multi-dimensional problems better than generic Monte Carlo algorithms, when the number of dimensions rises they too tend to suffer the curse of dimensionality: regions of higher probability tend to stretch and get lost in an increasing volume of space that contributes little to the integral. One way to address this problem could be shortening the steps of the walker, so that it does not continuously try to exit the highest probability region, though this way the process would be highly autocorrelated and expensive (i.e. many steps would be required for an accurate result). More sophisticated methods such as Hamiltonian Monte Carlo and the Wang and Landau algorithm use various ways of reducing this autocorrelation, while managing to keep the process in the regions that give a higher contribution to the integral. These algorithms usually rely on a more complicated theory and are harder to implement, but they usually converge faster.
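A minimal sketch of the Hamiltonian Monte Carlo idea for a one-dimensional standard-normal target follows; the step size and trajectory length are illustrative choices, and real implementations handle general gradients and tuning:

```python
import math
import random

def hmc_step(x, rng, eps=0.2, n_leapfrog=10):
    """One Hamiltonian Monte Carlo step for the target exp(-U(x)), U(x) = x^2/2."""
    grad_u = lambda q: q          # gradient of U for the standard-normal target
    p = rng.gauss(0.0, 1.0)       # resample an auxiliary momentum
    x_new, p_new = x, p
    # Leapfrog integration of Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_u(x_new)
    for _ in range(n_leapfrog):
        x_new += eps * p_new
        p_new -= eps * grad_u(x_new)
    p_new += 0.5 * eps * grad_u(x_new)   # make the final momentum update a half step
    # Accept based on the change in total energy (exact dynamics would conserve it)
    h_old = 0.5 * x * x + 0.5 * p * p
    h_new = 0.5 * x_new * x_new + 0.5 * p_new * p_new
    return x_new if math.log(rng.random()) < h_old - h_new else x

rng = random.Random(0)
x, samples = 0.0, []
for _ in range(20000):
    x = hmc_step(x, rng)
    samples.append(x)
```

Because each proposal follows a gradient-guided trajectory rather than a small random step, successive samples are far less correlated than in random walk Metropolis, illustrating the trade-off described above.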

Examples

Gibbs sampling

When the target distribution is multi-dimensional, the Gibbs sampling algorithm[6] updates each coordinate from its full conditional distribution given the other coordinates. Gibbs sampling can be viewed as a special case of the Metropolis–Hastings algorithm with an acceptance rate uniformly equal to 1. When drawing from the full conditional distributions is not straightforward, other samplers-within-Gibbs are used (e.g., see [7] [8]). Gibbs sampling is popular partly because it does not require any 'tuning'. The algorithmic structure of Gibbs sampling highly resembles that of coordinate ascent variational inference in that both algorithms use the full conditional distributions in their updating procedures.[9]
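For a bivariate normal with correlation rho, both full conditionals are available in closed form, giving the following sketch of a Gibbs sampler (the function name and parameter choices are illustrative):

```python
import random

def gibbs_bivariate_normal(rho, n_steps, seed=0):
    """Gibbs sampling for a bivariate standard normal with correlation rho:
    each coordinate is drawn from its full conditional given the other,
    x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x."""
    rng = random.Random(seed)
    x = y = 0.0
    sd = (1.0 - rho * rho) ** 0.5
    samples = []
    for _ in range(n_steps):
        x = rng.gauss(rho * y, sd)   # update x from p(x | y)
        y = rng.gauss(rho * x, sd)   # update y from p(y | x)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_steps=50000)
```

No step size needs choosing here, which is the sense in which Gibbs sampling requires no tuning; each update is an exact draw that is always accepted.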

Pseudo-marginal Metropolis–Hastings

This method replaces the evaluation of the density of the target distribution with an unbiased estimate. It is useful when the target density is not available analytically, e.g. in latent variable models.
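A sketch of the idea follows. The lognormal noise model is an illustrative way to build a positive unbiased estimate of an otherwise tractable normal density; in real use the density itself would be unavailable:

```python
import math
import random

def pseudo_marginal_mh(noisy_density, x0, n_steps, step_size=1.0, seed=0):
    """Pseudo-marginal Metropolis-Hastings: an unbiased, non-negative estimate
    stands in for the target density.  Reusing the stored estimate for the
    current state (rather than re-estimating it) keeps the target exact."""
    rng = random.Random(seed)
    x = x0
    p_hat = noisy_density(x0, rng)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step_size)
        p_hat_prop = noisy_density(prop, rng)  # fresh estimate at the proposal
        if rng.random() * p_hat < p_hat_prop:  # accept w.p. min(1, ratio of estimates)
            x, p_hat = prop, p_hat_prop
        samples.append(x)
    return samples

# Illustration: a standard-normal density corrupted by positive noise with mean 1,
# standing in for a density that could only be estimated, not evaluated
def noisy_density(x, rng, s=0.3):
    w = math.exp(rng.gauss(-0.5 * s * s, s))  # lognormal weight, E[w] = 1, w > 0
    return math.exp(-0.5 * x * x) * w

samples = pseudo_marginal_mh(noisy_density, 0.0, 50000)
```

Despite every density evaluation being noisy, the chain's stationary distribution is still exactly the target, because the noise enters the acceptance ratio in an unbiased way.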

Interacting particle methods

Interacting MCMC methodologies are a class of mean-field particle methods for obtaining random samples from a sequence of probability distributions with an increasing level of sampling complexity.[12] These probabilistic models include path space state models with increasing time horizon, posterior distributions with respect to a sequence of partial observations, increasing constraint level sets for conditional distributions, decreasing temperature schedules associated with some Boltzmann–Gibbs distributions, and many others. In principle, any Markov chain Monte Carlo sampler can be turned into an interacting Markov chain Monte Carlo sampler, which can be interpreted as a way to run a sequence of Markov chain Monte Carlo samplers in parallel. For instance, interacting simulated annealing algorithms are based on independent Metropolis–Hastings moves interacting sequentially with a selection-resampling mechanism. In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of samplers depends only on the number of interacting samplers. These advanced particle methodologies belong to the class of Feynman–Kac particle models,[13] [14] also called sequential Monte Carlo or particle filter methods in the Bayesian inference and signal processing communities.[15] Interacting Markov chain Monte Carlo methods can also be interpreted as a mutation-selection genetic particle algorithm with Markov chain Monte Carlo mutations.
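The selection-resampling interaction can be sketched with a small tempered sequential Monte Carlo sampler, in which reweighting and resampling steps alternate with independent Metropolis moves; the temperature schedule and the Gaussian targets below are illustrative choices, not from the cited works:

```python
import math
import random

rng = random.Random(0)

def mh_move(x, log_p, step=1.0):
    """One Metropolis-Hastings move targeting exp(log_p)."""
    prop = x + rng.gauss(0.0, step)
    if math.log(rng.random()) < log_p(prop) - log_p(x):
        return prop
    return x

# Tempered sequence of targets pi_t(x) proportional to exp(-beta_t * x^2 / 2),
# moving from a flat distribution (beta = 0.1) to the target (beta = 1.0)
betas = [0.1 + 0.9 * t / 20 for t in range(21)]
n_particles = 2000
particles = [rng.gauss(0.0, betas[0] ** -0.5) for _ in range(n_particles)]

for b_prev, b_next in zip(betas, betas[1:]):
    # Selection: reweight by the ratio of consecutive targets, then resample
    weights = [math.exp(-(b_next - b_prev) * x * x / 2) for x in particles]
    particles = rng.choices(particles, weights=weights, k=n_particles)
    # Mutation: one MCMC move per particle targeting the new distribution
    particles = [mh_move(x, lambda v: -b_next * v * v / 2) for x in particles]
```

The final particle cloud approximates the last distribution in the sequence; accuracy is controlled by the number of interacting particles rather than by the length of any single chain.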

Quasi-Monte Carlo

The quasi-Monte Carlo method is an analog of the ordinary Monte Carlo method that uses low-discrepancy sequences instead of random numbers.[16] [17] It yields an integration error that decays faster than that of true random sampling, as quantified by the Koksma–Hlawka inequality. Empirically it allows the reduction of both estimation error and convergence time by an order of magnitude. Markov chain quasi-Monte Carlo methods[18] [19] such as the Array–RQMC method combine randomized quasi-Monte Carlo and Markov chain simulation by simulating n chains simultaneously in a way that better approximates the true distribution of the chain than ordinary MCMC.[20] In empirical experiments, the variance of the average of a function of the state sometimes converges at rate O(n^(-2)) or even faster, instead of the O(n^(-1)) Monte Carlo rate.[21]
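The contrast with random sampling can be seen with the van der Corput sequence, the simplest low-discrepancy sequence; the integrand and sample size here are illustrative:

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    points = []
    for i in range(1, n + 1):
        q, denom, x = i, 1, 0.0
        while q:
            q, r = divmod(q, base)  # peel off base-b digits of i...
            denom *= base
            x += r / denom          # ...and mirror them about the radix point
        points.append(x)
    return points

def estimate(points, f):
    """Average of f over the sample: an estimate of the integral on [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x  # true integral over [0, 1] is 1/3
n = 4096
qmc_err = abs(estimate(van_der_corput(n), f) - 1.0 / 3.0)
rng = random.Random(0)
mc_err = abs(estimate([rng.random() for _ in range(n)], f) - 1.0 / 3.0)
```

The low-discrepancy points fill the interval far more evenly than random draws, which is why `qmc_err` typically decays much faster with n than `mc_err`.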

Convergence

Usually it is not hard to construct a Markov chain with the desired properties. The more difficult problem is to determine how many steps are needed to converge to the stationary distribution within an acceptable error.[22] A good chain will have rapid mixing: the stationary distribution is reached quickly starting from an arbitrary position. A standard empirical method to assess convergence is to run several independent simulated Markov chains and check that the ratio of inter-chain to intra-chain variances for all the parameters sampled is close to 1.[23]
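That inter-chain/intra-chain variance ratio is the Gelman–Rubin diagnostic. A minimal sketch for a single scalar parameter follows; the chain contents here are synthetic draws rather than real MCMC output:

```python
import random

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) from several equal-length chains:
    compares between-chain and within-chain variance; values near 1 are
    consistent with convergence to a common distribution."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)  # between-chain
    w = sum(sum((v - mu) ** 2 for v in c) / (n - 1)
            for c, mu in zip(chains, means)) / m              # within-chain
    var_hat = (n - 1) / n * w + b / n  # pooled variance estimate
    return (var_hat / w) ** 0.5

rng = random.Random(0)
# Four synthetic chains drawn from the same distribution
chains = [[rng.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(4)]
rhat = gelman_rubin(chains)
```

If one chain were stuck in a different region, its mean would inflate the between-chain variance and push R-hat well above 1, signaling non-convergence.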

Typically, Markov chain Monte Carlo sampling can only approximate the target distribution, as there is always some residual effect of the starting position. More sophisticated Markov chain Monte Carlo-based algorithms such as coupling from the past can produce exact samples, at the cost of additional computation and an unbounded (though finite in expectation) running time.

Many random walk Monte Carlo methods move around the equilibrium distribution in relatively small steps, with no tendency for the steps to proceed in the same direction. These methods are easy to implement and analyze, but unfortunately it can take a long time for the walker to explore all of the space. The walker will often double back and cover ground already covered.

Further consideration of convergence is given by the Markov chain central limit theorem. See [24] for a discussion of the theory related to convergence and stationarity of the Metropolis–Hastings algorithm.

Software

Several software programs provide MCMC sampling capabilities, for example the emcee package.[25]


Notes and References

  1. Kasim, M.F.; Bott, A.F.A.; Tzeferacos, P.; Lamb, D.Q.; Gregori, G.; Vinko, S.M. (September 2019). "Retrieving fields from proton radiography without source profiles". Physical Review E. 100 (3): 033208. doi:10.1103/PhysRevE.100.033208.
  2. Gupta, Ankur; Rawlings, James B. (April 2014). "Comparison of Parameter Estimation Methods in Stochastic Chemical Kinetic Models: Examples in Systems Biology". AIChE Journal. 60 (4): 1253–1268. doi:10.1002/aic.14409.
  3. See Gill 2008.
  4. See Robert & Casella 2004.
  5. Banerjee, Sudipto; Carlin, Bradley P.; Gelfand, Alan P. (2014). Hierarchical Modeling and Analysis for Spatial Data (2nd ed.). CRC Press. p. xix. ISBN 978-1-4398-1917-3.
  6. Geman, Stuart; Geman, Donald (November 1984). "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images". IEEE Transactions on Pattern Analysis and Machine Intelligence. PAMI-6 (6): 721–741. doi:10.1109/TPAMI.1984.4767596.
  7. Gilks, W. R.; Wild, P. (1992). "Adaptive Rejection Sampling for Gibbs Sampling". Journal of the Royal Statistical Society, Series C (Applied Statistics). 41 (2): 337–348. doi:10.2307/2347565.
  8. Gilks, W. R.; Best, N. G.; Tan, K. K. C. (1995). "Adaptive Rejection Metropolis Sampling within Gibbs Sampling". Journal of the Royal Statistical Society, Series C (Applied Statistics). 44 (4): 455–472. doi:10.2307/2986138.
  9. Lee, Se Yoon (2021). "Gibbs sampler and coordinate ascent variational inference: A set-theoretical review". Communications in Statistics – Theory and Methods. 51 (6): 1–21. doi:10.1080/03610926.2021.1921214.
  10. See Stramer 1999.
  11. See Green 1995.
  12. Del Moral, Pierre (2013). Mean Field Simulation for Monte Carlo Integration. Chapman & Hall/CRC Press. p. 626.
  13. Del Moral, Pierre (2004). Feynman–Kac Formulae: Genealogical and Interacting Particle Approximations. Springer. p. 575.
  14. Del Moral, Pierre; Miclo, Laurent (2000). "Branching and Interacting Particle Systems Approximations of Feynman–Kac Formulae with Applications to Non-Linear Filtering". In Séminaire de Probabilités XXXIV. Lecture Notes in Mathematics, vol. 1729, pp. 1–145. doi:10.1007/bfb0103798. ISBN 978-3-540-67314-9.
  15. Del Moral, Pierre (2006). "Sequential Monte Carlo samplers". Journal of the Royal Statistical Society, Series B (Statistical Methodology). 68 (3): 411–436. doi:10.1111/j.1467-9868.2006.00553.x.
  16. Papageorgiou, Anargyros; Traub, J. F. (1996). "Beating Monte Carlo". Risk. 9 (6): 63–65.
  17. Sobol, Ilya M. (1998). "On quasi-Monte Carlo integrations". Mathematics and Computers in Simulation. 47 (2): 103–112. doi:10.1016/s0378-4754(98)00096-2.
  18. Chen, S.; Dick, Josef; Owen, Art B. (2011). "Consistency of Markov chain quasi-Monte Carlo on continuous state spaces". Annals of Statistics. 39 (2): 673–701. doi:10.1214/10-AOS831.
  19. Tribble, Seth D. (2007). Markov chain Monte Carlo algorithms using completely uniformly distributed driving sequences (PhD thesis). Stanford University.
  20. L'Ecuyer, P.; Lécot, C.; Tuffin, B. (2008). "A Randomized Quasi-Monte Carlo Simulation Method for Markov Chains". Operations Research. 56 (4): 958–975. doi:10.1287/opre.1080.0556.
  21. L'Ecuyer, P.; Munger, D.; Lécot, C.; Tuffin, B. (2018). "Sorting Methods and Convergence Rates for Array-RQMC: Some Empirical Comparisons". Mathematics and Computers in Simulation. 143: 191–201. doi:10.1016/j.matcom.2016.07.010.
  22. Gelman, A.; Rubin, D.B. (1992). "Inference from iterative simulation using multiple sequences (with discussion)". Statistical Science. 7 (4): 457–511. doi:10.1214/ss/1177011136.
  23. Cowles, M.K.; Carlin, B.P. (1996). "Markov chain Monte Carlo convergence diagnostics: a comparative review". Journal of the American Statistical Association. 91 (434): 883–904. doi:10.1080/01621459.1996.10476956.
  24. Hill, S. D.; Spall, J. C. (2019). "Stationarity and Convergence of the Metropolis-Hastings Algorithm: Insights into Theoretical Aspects". IEEE Control Systems Magazine. 39 (1): 56–67. doi:10.1109/MCS.2018.2876959.
  25. Foreman-Mackey, Daniel; Hogg, David W.; Lang, Dustin; Goodman, Jonathan (2013). "emcee: The MCMC Hammer". Publications of the Astronomical Society of the Pacific. 125 (925): 306–312. doi:10.1086/670067.