Markov chain Monte Carlo


In statistics, Markov chain Monte Carlo is a class of algorithms used to draw samples from a probability distribution. Given a probability distribution, one can construct a Markov chain whose elements' distribution approximates it – that is, the Markov chain's equilibrium distribution matches the target distribution. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution.
Markov chain Monte Carlo methods are used to study probability distributions that are too complex or too high dimensional to study with analytic techniques alone. Various algorithms exist for constructing such Markov chains, including the Metropolis–Hastings algorithm.

General explanation

Markov chain Monte Carlo methods create samples from a continuous random variable whose probability density is proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance.
In practice, an ensemble of chains is generally developed, starting from a set of arbitrarily chosen points that are sufficiently distant from each other. These chains are stochastic processes of "walkers" that move around randomly according to an algorithm that looks for places with a reasonably high contribution to the integral to move into next, assigning those moves higher probabilities.
Random walk Monte Carlo methods are a kind of random simulation or Monte Carlo method. However, whereas the random samples of the integrand used in a conventional Monte Carlo integration are statistically independent, those used in MCMC are autocorrelated. Correlation between samples introduces the need to use the Markov chain central limit theorem when estimating the error of mean values.
These algorithms create Markov chains that have an equilibrium distribution proportional to the given function.
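To illustrate how such a chain can be constructed in practice, the following is a minimal sketch of the random-walk Metropolis algorithm (the symmetric-proposal special case of Metropolis–Hastings) in Python. The standard normal target, proposal scale, and step count are illustrative assumptions, not prescribed by the method.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of the target distribution; a standard
    # normal is assumed here purely for illustration.
    return -0.5 * x**2

def random_walk_metropolis(n_steps, x0=0.0, step=1.0):
    """Build a Markov chain whose equilibrium distribution is the target."""
    chain = np.empty(n_steps)
    x = x0
    for i in range(n_steps):
        proposal = x + step * rng.normal()       # symmetric random-walk proposal
        log_accept = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_accept:   # Metropolis acceptance rule
            x = proposal
        chain[i] = x                             # rejected moves repeat x
    return chain

samples = random_walk_metropolis(50_000)
print(samples.mean(), samples.var())  # approximately 0 and 1 for this target
```

Because the proposal is symmetric, the acceptance probability reduces to a ratio of target densities, so only a function proportional to the target is required, as described above.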

History

The development of MCMC methods is deeply rooted in the early exploration of Monte Carlo techniques in the mid-20th century, particularly in physics. These developments were marked by the Metropolis algorithm proposed by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller, and Edward Teller in 1953, which was designed to tackle high-dimensional integration problems using early computers. Then in 1970, W. K. Hastings generalized this algorithm and inadvertently introduced the component-wise updating idea, later known as Gibbs sampling. Simultaneously, the theoretical foundations for Gibbs sampling were being developed, such as the Hammersley–Clifford theorem, published via Julian Besag's 1974 paper.

Further seeds of MCMC were sown in the following decade, including the formal naming of Gibbs sampling in image processing by Stuart Geman and Donald Geman (1984) and the data augmentation method of Martin A. Tanner and Wing Hung Wong (1987). The method's "revolution" in mainstream statistics, however, largely followed demonstrations of the universality and ease of implementation of sampling methods for complex statistical problems, spurred by increasing computational power and software such as BUGS. This transformation was accompanied by significant theoretical advances, such as Luke Tierney's rigorous treatment of MCMC convergence and the analysis of the Gibbs sampler's structure by Jun S. Liu, Wong, and Augustine Kong.

Subsequent developments further expanded the MCMC toolkit, including particle filters for sequential problems, perfect sampling aiming for exact simulation, reversible-jump MCMC (RJMCMC) for handling variable-dimension models, and deeper investigations into convergence diagnostics and the central limit theorem. Overall, the evolution of MCMC represents a paradigm shift in statistical computation, enabling the analysis of numerous previously intractable complex models and continually expanding the scope and impact of statistics.

Mathematical setting

Suppose $(X_n)_{n \ge 0}$ is a Markov chain on a general state space $\mathcal{X}$ with specific properties. We are interested in the limiting behavior of the partial sums
$$S_n(h) = \frac{1}{n} \sum_{i=1}^{n} h(X_i)$$
as $n$ goes to infinity. In particular, we hope to establish the Law of Large Numbers and the Central Limit Theorem for MCMC. In the following, we state some definitions and theorems necessary for these important convergence results. In short, the existence of an invariant measure and Harris recurrence are needed to establish the Law of Large Numbers for MCMC, while aperiodicity, irreducibility, and extra conditions such as reversibility are needed to ensure that the Central Limit Theorem holds.

Irreducibility and aperiodicity

Recall that in the discrete setting, a Markov chain is said to be irreducible if it is possible to reach any state from any other state in a finite number of steps with positive probability. However, in the continuous setting, point-to-point transitions have zero probability. In this case, φ-irreducibility generalizes irreducibility by using a reference measure φ on the measurable space.
;Definition
Given a reference measure $\varphi$ defined on $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$, the Markov chain with transition kernel $K(x, \cdot)$ is $\varphi$-irreducible if, for every $A \in \mathcal{B}(\mathcal{X})$ with $\varphi(A) > 0$, there exists an $n$ such that $K^n(x, A) > 0$ for all $x \in \mathcal{X}$.
This is a more general definition of irreducibility for a Markov chain on a non-discrete state space. In the discrete case, an irreducible Markov chain is said to be aperiodic if it has period 1. Formally, the period of a state $x$ is defined as
$$d(x) = \gcd\{\, n \ge 1 : K^n(x, x) > 0 \,\}.$$
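For example (a standard illustration, not part of the original text), the deterministic two-state chain that alternates between its states,
$$K = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad d(x) = \gcd\{2, 4, 6, \dots\} = 2,$$
is irreducible but periodic, while adding any positive holding probability on the diagonal gives $K(x, x) > 0$ and hence $d(x) = 1$, making the chain aperiodic.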
For the general case, we define aperiodicity in terms of small sets:
;Definition
A $\varphi$-irreducible Markov chain has a cycle of length $d$ if there exists a small set $C$, an associated integer $M$, and a probability distribution $\nu_M$ such that $d$ is the greatest common divisor of
$$\{\, m \ge 1 : \exists\, \delta_m > 0 \text{ such that } C \text{ is small for } \nu_m \ge \delta_m \nu_M \,\}.$$
A set $C \in \mathcal{B}(\mathcal{X})$ is called small if there exists an $m \ge 1$ and a nonzero measure $\nu_m$ such that
$$K^m(x, A) \ge \nu_m(A) \qquad \text{for all } x \in C,\ A \in \mathcal{B}(\mathcal{X}).$$
The chain is called aperiodic if the largest such cycle length is $d = 1$.
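As an illustration (a standard example, not from the original text), if the transition kernel admits a density $k(x, y)$ with $k(x, y) \ge \varepsilon > 0$ for all $x \in C$ and all $y \in C$, then $C$ is small with $m = 1$ and $\nu_1(A) = \varepsilon\, \lambda(A \cap C)$, where $\lambda$ denotes Lebesgue measure.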

Harris recurrence

;Definition
A set $A \in \mathcal{B}(\mathcal{X})$ is Harris recurrent if $P_x(\eta_A = \infty) = 1$ for all $x \in A$, where $\eta_A = \sum_{n=1}^{\infty} \mathbb{1}_A(X_n)$ is the number of visits of the chain to the set $A$.
The chain is said to be Harris recurrent if there exists a measure $\psi$ such that the chain is $\psi$-irreducible and every measurable set $A$ with $\psi(A) > 0$ is Harris recurrent.
A useful criterion for verifying Harris recurrence is the following:
;Proposition
If for every $A \in \mathcal{B}(\mathcal{X})$ with $\psi(A) > 0$ we have $P_x(\tau_A < \infty) = 1$ for every $x \in \mathcal{X}$, where $\tau_A$ is the first hitting time of $A$, then $P_x(\eta_A = \infty) = 1$ for all $x \in \mathcal{X}$, and the chain is Harris recurrent.
This definition is only needed when the state space is uncountable. In the countable case, recurrence corresponds to $P_x(\tau_x < \infty) = 1$ for all $x$, which is equivalent to $E_x[\eta_x] = \infty$.
;Definition
A $\sigma$-finite measure $\pi$ is said to be invariant for the transition kernel $K(\cdot, \cdot)$ if
$$\pi(B) = \int_{\mathcal{X}} K(x, B)\, \pi(dx) \qquad \text{for all } B \in \mathcal{B}(\mathcal{X}).$$
When there exists an invariant probability measure for a ψ-irreducible chain, the chain is said to be positive recurrent.
Recurrent chains that do not allow for a finite invariant measure are called null recurrent.
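To make the invariance condition concrete, the following sketch computes an invariant probability measure for a small discrete transition matrix and verifies $\pi K = \pi$, the countable-state analogue of the integral condition above. The three-state kernel is an illustrative assumption, not an example from the text.

```python
import numpy as np

# Row-stochastic transition matrix of an illustrative three-state chain.
K = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

# An invariant distribution is a left eigenvector of K with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(K.T)
pi = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
pi = pi / pi.sum()  # normalize to a probability measure

print(pi)                       # [0.25 0.5 0.25] for this kernel
print(np.allclose(pi @ K, pi))  # True: pi(B) = sum_x K(x, B) pi(x)
```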
In applications of Markov chain Monte Carlo, a very useful criterion for Harris recurrence involves the use of bounded harmonic functions.
;Definition
A measurable function $h$ is said to be harmonic for the chain $(X_n)$ if
$$E[h(X_{n+1}) \mid X_n = x] = h(x) \qquad \text{for all } x \in \mathcal{X}.$$
These functions are invariant under the transition kernel in the functional sense, and they help characterize Harris recurrence.
;Proposition
''For a positive Markov chain, if the only bounded harmonic functions are the constant functions, then the chain is Harris recurrent.''

Law of Large Numbers for MCMC

;Theorem
If $(X_n)$ has a $\sigma$-finite invariant measure $\pi$, then the following two statements are equivalent:
  1. The Markov chain $(X_n)$ is Harris recurrent.
  2. If $f, g \in L^1(\pi)$ with $\int g(x)\, \pi(dx) \neq 0$, then
$$\lim_{n \to \infty} \frac{\sum_{i=1}^{n} f(X_i)}{\sum_{i=1}^{n} g(X_i)} = \frac{\int f(x)\, \pi(dx)}{\int g(x)\, \pi(dx)} \quad \text{almost surely.}$$
This theorem provides a fundamental justification for the use of Markov Chain Monte Carlo methods, and it serves as the counterpart of the Law of Large Numbers in classical Monte Carlo.
An important aspect of this result is that $\pi$ does not need to be a probability measure. Therefore, there can be some type of strong stability even if the chain is null recurrent. Moreover, the Markov chain can be started from an arbitrary state.
If $\pi$ is a probability measure, we can let $g \equiv 1$ and get
$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} f(X_i) = \int f(x)\, \pi(dx) \quad \text{almost surely},$$
which is the more familiar form of the ergodic theorem.
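The following sketch illustrates this ergodic-average statement numerically, reusing the illustrative three-state kernel from the earlier invariance example; the function $f$, starting state, and chain length are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative three-state chain with invariant distribution [0.25, 0.5, 0.25].
K = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
pi = np.array([0.25, 0.5, 0.25])
f = np.array([1.0, 4.0, 9.0])    # an arbitrary function f on the states

n = 200_000
x = 0                            # the chain may start from an arbitrary state
total = 0.0
for _ in range(n):
    x = rng.choice(3, p=K[x])    # one transition of the chain
    total += f[x]

print(total / n)  # ergodic average (1/n) sum f(X_i)
print(pi @ f)     # integral of f with respect to pi = 4.5
```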

Central Limit Theorem for MCMC

There are several conditions under which the Central Limit Theorem holds for Markov chain Monte Carlo methods. One of the most commonly used is the condition of reversibility.
;Definition
A stationary Markov chain $(X_n)$ is said to be reversible if the distribution of $X_{n+1}$ given $X_n = x$ is the same as the distribution of $X_{n-1}$ given $X_n = x$.
This is equivalent to the detailed balance condition, which is defined as follows:
;Definition
A Markov chain with transition kernel $K(x, y)$ satisfies the detailed balance condition if there exists a function $f$ such that
$$K(x, y)\, f(x) = K(y, x)\, f(y)$$
for every pair $(x, y)$ in the state space.
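As a numerical illustration (the four-state unnormalized target and uniform proposal are assumptions made for this example), the sketch below constructs a discrete Metropolis kernel and checks the detailed balance condition directly.

```python
import numpy as np

f = np.array([1.0, 3.0, 2.0, 4.0])   # unnormalized target f on four states
m = len(f)
Q = np.full((m, m), 1.0 / m)         # symmetric (uniform) proposal matrix

# Metropolis kernel: propose y from Q, accept with probability min(1, f(y)/f(x)).
K = np.zeros((m, m))
for x in range(m):
    for y in range(m):
        if x != y:
            K[x, y] = Q[x, y] * min(1.0, f[y] / f[x])
    K[x, x] = 1.0 - K[x].sum()       # rejected proposals keep the chain at x

lhs = f[:, None] * K                 # matrix of f(x) K(x, y)
print(np.allclose(lhs, lhs.T))       # True: f(x) K(x, y) = f(y) K(y, x)
```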
;Theorem
If $(X_n)$ is aperiodic, irreducible, and reversible with invariant distribution $\pi$, then
$$\sqrt{n}\,\bigl(S_n(h) - E_\pi[h]\bigr) \xrightarrow{d} N(0, \sigma_h^2),$$
where
$$0 < \sigma_h^2 = E_\pi\bigl[\bar{h}^2(X_0)\bigr] + 2 \sum_{k=1}^{\infty} E_\pi\bigl[\bar{h}(X_0)\, \bar{h}(X_k)\bigr] < \infty$$
and
$$\bar{h} = h - E_\pi[h].$$
Even though reversibility is a restrictive assumption in theory, it is often easily satisfied in practical MCMC algorithms by introducing auxiliary variables or using symmetric proposal mechanisms. Many other conditions can also be used to establish a CLT for MCMC, such as geometric ergodicity or a discrete state space.
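In practice this theorem is often used by estimating $\sigma_h^2$ from the chain output itself. The sketch below does so with the batch-means method on an AR(1) process standing in for MCMC output; the correlation parameter, chain length, and batch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# An AR(1) process with unit marginal variance as a stand-in for MCMC output.
rho, n = 0.9, 100_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

# Batch means: split the chain into b consecutive batches; the scaled
# variance of the batch averages estimates sigma_h^2 for h(x) = x.
b = 100
batch_means = x.reshape(b, n // b).mean(axis=1)
sigma2_hat = (n // b) * batch_means.var(ddof=1)

print(sigma2_hat)  # close to (1 + rho) / (1 - rho) = 19 for this process
```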

Autocorrelation

MCMC methods produce autocorrelated samples, in contrast to standard Monte Carlo techniques that draw independent samples. Autocorrelation means successive draws from the Markov chain are statistically dependent, so each new sample adds less fresh information than an independent draw would. As a result, one must account for this correlation when assessing the accuracy of estimates from the chain. In particular, positive autocorrelation in the chain increases the variance of estimators and slows the convergence of sample averages toward the true expectation.

Autocorrelation and efficiency

The effect of correlation on estimation can be quantified through the Markov chain central limit theorem. For a chain targeting a distribution with variance $\sigma^2$, the variance of the sample mean after $N$ steps is approximately $\sigma^2 / N_{\text{eff}}$, where $N_{\text{eff}}$ is an effective sample size smaller than $N$. Equivalently, one can express this as
$$\operatorname{Var}(\hat{\mu}) \approx \frac{\sigma^2}{N} \left(1 + 2 \sum_{k=1}^{\infty} \rho_k\right),$$
where $\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} X_i$ is the sample mean and $\rho_k$ is the autocorrelation of the chain at lag $k$, defined as $\rho_k = \operatorname{Corr}(X_0, X_k)$. The term in parentheses, $1 + 2 \sum_{k=1}^{\infty} \rho_k$, is often called the integrated autocorrelation. When the chain has no autocorrelation ($\rho_k = 0$ for all $k \ge 1$), this factor equals 1, and one recovers the usual $\sigma^2 / N$ variance for independent samples. If the chain's samples are highly correlated, the sum of autocorrelations is large, leading to a much bigger variance for $\hat{\mu}$ than in the independent case.
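A sketch of how the effective sample size can be estimated from chain output follows. The truncation rule used here (stop summing at the first negative autocorrelation estimate) is one common heuristic among several; the AR(1) test chain is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_sample_size(x, max_lag=1000):
    """Estimate N_eff = N / (1 + 2 * sum of lag-k autocorrelations)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = x @ x / n
    tau = 1.0                                 # integrated autocorrelation factor
    for k in range(1, min(max_lag, n)):
        rho_k = (x[:-k] @ x[k:]) / (n * var)  # lag-k autocorrelation estimate
        if rho_k < 0:                         # truncate at first negative estimate
            break
        tau += 2.0 * rho_k
    return n / tau

# AR(1) chain with lag-1 correlation rho: the integrated autocorrelation is
# (1 + rho) / (1 - rho), so N_eff should be roughly n * (1 - rho) / (1 + rho).
rho, n = 0.9, 50_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

print(effective_sample_size(x))  # roughly 2600 here, far below n = 50000
```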