Markov kernel explained
In probability theory, a Markov kernel (also known as a stochastic kernel or probability kernel) is a map that, in the general theory of Markov processes, plays the role that the transition matrix does in the theory of Markov processes with a finite state space.[1]
Formal definition
Let (X, \mathcal{A}) and (Y, \mathcal{B}) be measurable spaces. A Markov kernel with source (X, \mathcal{A}) and target (Y, \mathcal{B}), sometimes written as
\kappa : (X, \mathcal{A}) \to (Y, \mathcal{B}),
is a function \kappa : \mathcal{B} \times X \to [0,1] with the following properties:
- For every (fixed) B \in \mathcal{B}, the map x \mapsto \kappa(B, x) is \mathcal{A}-measurable.
- For every (fixed) x \in X, the map B \mapsto \kappa(B, x) is a probability measure on (Y, \mathcal{B}).
In other words it associates to each point x \in X a probability measure \kappa(dy|x) : B \mapsto \kappa(B, x) on (Y, \mathcal{B}) such that, for every measurable set B \in \mathcal{B}, the map x \mapsto \kappa(B|x) is measurable with respect to the \sigma-algebra \mathcal{A}.[2]
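The following minimal Python sketch (not from the source; the names Kernel, kappa_of_set and noisy_bit are illustrative only) shows the definition in the special case of finite spaces, where measurability is automatic and a kernel is simply a map sending each point of X to a probability measure on Y.

```python
# Minimal sketch (illustrative, not from the source): a Markov kernel between
# two finite measurable spaces, represented as a function that maps each point
# x in X to a probability measure on Y (here a dict of point masses).
from typing import Callable, Dict, Hashable, Set

Measure = Dict[Hashable, float]          # point masses on Y
Kernel = Callable[[Hashable], Measure]   # x in X  ->  probability measure on Y

def kappa_of_set(kappa: Kernel, B: Set[Hashable], x: Hashable) -> float:
    """kappa(B|x): mass that the measure attached to x assigns to the set B."""
    return sum(p for y, p in kappa(x).items() if y in B)

def is_markov_kernel(kappa: Kernel, X: Set[Hashable], tol: float = 1e-12) -> bool:
    """Check the defining property: for every x, kappa(.|x) is a probability measure.
    (On finite spaces every map is measurable, so only normalisation matters.)"""
    return all(
        abs(sum(kappa(x).values()) - 1.0) < tol and min(kappa(x).values()) >= 0.0
        for x in X
    )

# Example: a 'noisy bit' kernel from X = {0, 1} to Y = {0, 1}.
noisy_bit: Kernel = lambda x: {x: 0.9, 1 - x: 0.1}
assert is_markov_kernel(noisy_bit, {0, 1})
assert abs(kappa_of_set(noisy_bit, {1}, 0) - 0.1) < 1e-12
```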
Examples
Simple random walk on the integers
Take X = Y = \mathbb{Z} and \mathcal{A} = \mathcal{B} = \mathcal{P}(\mathbb{Z}) (the power set of \mathbb{Z}). Then a Markov kernel is fully determined by the probability it assigns to singletons \{m\} for each n \in \mathbb{Z}:
\kappa(B|n) = \sum_{m \in B} \kappa(\{m\}|n), \qquad \forall n \in \mathbb{Z}, \; \forall B \in \mathcal{B}.
Now the random walk that goes to the right with probability p and to the left with probability 1-p is defined by
\kappa(\{m\}|n) = p\,\delta_{m,n+1} + (1-p)\,\delta_{m,n-1}, \qquad \forall n, m \in \mathbb{Z},
where \delta is the Kronecker delta. The transition probabilities P(m|n) = \kappa(\{m\}|n) for the random walk are equivalent to the Markov kernel.
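As a sanity check on these formulas, here is a small illustrative Python sketch of the random-walk kernel (the function names are ours, not from the source).

```python
# Illustrative sketch: the simple random walk kernel on the integers,
# kappa({m}|n) = p * delta(m, n+1) + (1-p) * delta(m, n-1).
def kappa_singleton(m: int, n: int, p: float = 0.5) -> float:
    """Probability of jumping from n to m in one step."""
    return p * (m == n + 1) + (1 - p) * (m == n - 1)

def kappa(B: set, n: int, p: float = 0.5) -> float:
    """kappa(B|n): total mass assigned to the set B, obtained by summing over singletons."""
    return sum(kappa_singleton(m, n, p) for m in B)

# From state 0 the walk lands in {-1, +1} with probability 1 ...
assert kappa({-1, 0, 1}, 0) == 1.0
# ... and goes right with probability p.
assert kappa({1}, 0, p=0.7) == 0.7
```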
General Markov processes with countable state space
More generally, take X and Y both countable and \mathcal{A} = \mathcal{P}(X), \mathcal{B} = \mathcal{P}(Y). Again a Markov kernel is defined by the probability it assigns to singleton sets for each i \in X:
\kappa(B|i) = \sum_{j \in B} \kappa(\{j\}|i), \qquad \forall i \in X, \; \forall B \in \mathcal{B}.
We define a Markov process by defining a transition probability P(j|i) = K_{ji}, where the numbers K_{ji} define a (countable) stochastic matrix (K_{ji}), i.e.
\begin{align}
K_{ji} &\ge 0, & \forall (j,i) \in Y \times X, \\
\sum_{j \in Y} K_{ji} &= 1, & \forall i \in X.
\end{align}
We then define
\kappa(\{j\}|i) = K_{ji} = P(j|i), \qquad \forall i \in X, \; \forall j \in Y.
Again the transition probability, the stochastic matrix and the Markov kernel are equivalent reformulations.
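A short illustrative sketch of this equivalence, assuming a finite state space and a dictionary representation of the matrix (the names and example numbers are ours):

```python
# Illustrative sketch: on a finite state space a Markov kernel and a stochastic
# matrix carry the same information.  Here K[i][j] stores K_{ji} = kappa({j}|i),
# so each K[i] must be a probability distribution over the target states.
states = ["a", "b", "c"]
K = {
    "a": {"a": 0.1, "b": 0.6, "c": 0.3},   # kappa(.|a)
    "b": {"a": 0.5, "b": 0.0, "c": 0.5},   # kappa(.|b)
    "c": {"a": 0.0, "b": 0.2, "c": 0.8},   # kappa(.|c)
}
assert all(abs(sum(K[i].values()) - 1.0) < 1e-12 for i in states)

def kappa(B: set, i: str) -> float:
    """kappa(B|i) = sum over j in B of K_{ji}."""
    return sum(p for j, p in K[i].items() if j in B)

assert abs(kappa({"b", "c"}, "a") - 0.9) < 1e-12
```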
Markov kernel defined by a kernel function and a measure
Let \nu be a measure on (Y, \mathcal{B}), and k : Y \times X \to [0, \infty) a measurable function with respect to the product \sigma-algebra \mathcal{B} \otimes \mathcal{A} such that
\int_Y k(y, x)\,\nu(dy) = 1, \qquad \forall x \in X.
Then \kappa(dy|x) = k(y, x)\,\nu(dy), i.e. the mapping
\begin{cases} \kappa : \mathcal{B} \times X \to [0,1], \\ \kappa(B|x) = \int_B k(y, x)\,\nu(dy), \end{cases}
defines a Markov kernel.[3] This example generalises the countable Markov process example, where \nu was the counting measure. Moreover it encompasses other important examples such as the convolution kernels, in particular the Markov kernels defined by the heat equation. The latter example includes the Gaussian kernel on X = Y = \mathbb{R} with \nu(dy) the standard Lebesgue measure and
k(y, x) = \frac{1}{\sqrt{2\pi}}\, e^{-(y-x)^2/2}.
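The Gaussian case can be checked numerically; the following sketch (illustrative only, using the standard library) verifies the normalisation condition by a crude Riemann sum and evaluates \kappa(B|x) for an interval B via the normal CDF.

```python
# Illustrative sketch: the Gaussian kernel k(y, x) = exp(-(y-x)^2 / 2) / sqrt(2*pi)
# with Lebesgue measure.  For an interval B = [a, b], kappa(B|x) is a difference
# of standard normal CDF values, which math.erf gives in closed form.
import math

def k(y: float, x: float) -> float:
    """Gaussian kernel density: standard normal density centred at x."""
    return math.exp(-(y - x) ** 2 / 2) / math.sqrt(2 * math.pi)

def kappa_interval(a: float, b: float, x: float) -> float:
    """kappa([a, b] | x) = Phi(b - x) - Phi(a - x), where Phi is the standard normal CDF."""
    Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
    return Phi(b - x) - Phi(a - x)

# Normalisation: a Riemann sum of k(., x) over a wide interval is close to 1.
x = 1.3
total = sum(k(-10 + 0.001 * i, x) * 0.001 for i in range(20000))
assert abs(total - 1.0) < 1e-3
# By symmetry, kappa((-infinity, x] | x) = 1/2.
assert abs(kappa_interval(-100.0, x, x) - 0.5) < 1e-9
```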
Measurable functions
Take (X, \mathcal{A}) and (Y, \mathcal{B}) arbitrary measurable spaces, and let f : X \to Y be a measurable function. Now define \kappa(dy|x) = \delta_{f(x)}(dy), i.e.
\kappa(B|x) = \mathbf{1}_B(f(x)) = \mathbf{1}_{f^{-1}(B)}(x) = \begin{cases} 1 & \text{if } f(x) \in B, \\ 0 & \text{otherwise,} \end{cases}
for all B \in \mathcal{B} and all x \in X. Note that the indicator function \mathbf{1}_{f^{-1}(B)} is \mathcal{A}-measurable for all B \in \mathcal{B} if and only if f is measurable.
This example allows us to think of a Markov kernel as a generalised function with a (in general) random rather than certain value. That is, it is a multivalued function where the values are not equally weighted.
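For a concrete picture, a deterministic kernel of this kind can be written in a couple of lines (illustrative sketch; the names are ours):

```python
# Illustrative sketch: the deterministic kernel induced by a measurable
# function f, with kappa(B|x) = 1 if f(x) lies in B and 0 otherwise.
def delta_kernel(f):
    """Turn an ordinary function f : X -> Y into the Markov kernel delta_{f(x)}."""
    return lambda B, x: 1.0 if f(x) in B else 0.0

kappa = delta_kernel(lambda n: n * n)      # f(n) = n^2
assert kappa({0, 1, 4, 9}, 3) == 1.0       # f(3) = 9 lies in B
assert kappa({0, 1, 4, 9}, 4) == 0.0       # f(4) = 16 does not
```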
As a less obvious example, take X = \mathbb{N}, \mathcal{A} = \mathcal{P}(\mathbb{N}), and Y the real numbers \mathbb{R} with the standard \sigma-algebra of Borel sets. Then
\kappa(B|n) = \begin{cases} \mathbf{1}_B(0) & n = 0, \\ \Pr(\xi_1 + \dots + \xi_n \in B) & n \neq 0, \end{cases}
where the \xi_i are i.i.d. random variables (usually with mean 0) and \mathbf{1}_B is the indicator function. For the simple case of coin flips this models the different levels of a Galton board.
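The following Monte Carlo sketch (illustrative only) estimates \kappa(B|n) for fair ±1 coin flips.

```python
# Illustrative sketch: Monte Carlo estimate of kappa(B|n) = Pr(xi_1 + ... + xi_n in B)
# for i.i.d. fair +/-1 coin flips -- the distribution of a ball after n levels
# of a Galton board (up to the usual binomial relabelling).
import random

def kappa_estimate(B, n: int, trials: int = 100_000) -> float:
    if n == 0:
        return 1.0 if 0 in B else 0.0
    hits = 0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n))
        hits += s in B
    return hits / trials

# After 4 levels the sum lies in {-4, -2, 0, 2, 4}, and P(sum = 0) = 6/16.
print(kappa_estimate({0}, 4))   # approximately 0.375
```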
Composition of Markov Kernels
Given measurable spaces (X, \mathcal{A}) and (Y, \mathcal{B}), we consider a Markov kernel \kappa : \mathcal{B} \times X \to [0,1] as a morphism \kappa : X \to Y. Intuitively, rather than assigning to each x \in X a sharply defined point y \in Y, the kernel assigns a "fuzzy" point in Y which is only known with some level of uncertainty, much like actual physical measurements. If we have a third measurable space (Z, \mathcal{C}) and probability kernels \kappa : X \to Y and \lambda : Y \to Z, we can define a composition \lambda \circ \kappa : X \to Z by the Chapman–Kolmogorov equation
(\lambda \circ \kappa)(dz|x) = \int_Y \lambda(dz|y)\,\kappa(dy|x).
The composition is associative by the monotone convergence theorem, and the identity function considered as a Markov kernel (i.e. the delta measure \kappa_1(dx'|x) = \delta_x(dx')) is the unit for this composition.
This composition defines the structure of a category on the measurable spaces with Markov kernels as morphisms, first defined by Lawvere,[4] the category of Markov kernels.
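On finite state spaces the Chapman–Kolmogorov integral reduces to a finite sum, so composition is ordinary multiplication of stochastic matrices; the following sketch (illustrative, with dictionary-valued kernels and example numbers of our own) makes this explicit.

```python
# Illustrative sketch: composing finite kernels via the Chapman-Kolmogorov sum,
# (lambda o kappa)({z}|x) = sum over y of lambda({z}|y) * kappa({y}|x),
# i.e. multiplication of stochastic matrices.
def compose(lam, kap):
    """kap[x][y] = kappa({y}|x), lam[y][z] = lambda({z}|y); returns the composed kernel."""
    out = {}
    for x, row in kap.items():
        out[x] = {}
        for y, p_xy in row.items():
            for z, p_yz in lam[y].items():
                out[x][z] = out[x].get(z, 0.0) + p_yz * p_xy
    return out

kappa = {"x1": {"y1": 0.3, "y2": 0.7}}                    # kernel X -> Y
lam = {"y1": {"z1": 1.0}, "y2": {"z1": 0.5, "z2": 0.5}}   # kernel Y -> Z
print(compose(lam, kappa))   # approximately {'x1': {'z1': 0.65, 'z2': 0.35}}
```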
Probability Space defined by Probability Distribution and a Markov Kernel
A composition of a probability space (X, \mathcal{A}, P_X) and a probability kernel \kappa : (X, \mathcal{A}) \to (Y, \mathcal{B}) defines a probability space (Y, \mathcal{B}, P_Y = \kappa \circ P_X), where the probability measure is given by
P_Y(B) = \int_X \int_B \kappa(dy|x)\,P_X(dx) = \int_X \kappa(B|x)\,P_X(dx) = \mathbb{E}_{P_X}\,\kappa(B|\cdot).
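In the finite case this is just an average of the kernel over the input distribution, as in the following illustrative sketch (the example data are ours).

```python
# Illustrative sketch: the pushforward measure P_Y(B) = sum over x of kappa(B|x) * P_X({x})
# on finite spaces, i.e. averaging the kernel over the input distribution.
def push_forward(P_X, kappa):
    """P_X: dict x -> P_X({x}); kappa: dict x -> (dict y -> kappa({y}|x)). Returns P_Y."""
    P_Y = {}
    for x, px in P_X.items():
        for y, p in kappa[x].items():
            P_Y[y] = P_Y.get(y, 0.0) + px * p
    return P_Y

P_X = {"rain": 0.3, "sun": 0.7}
kappa = {"rain": {"umbrella": 0.9, "none": 0.1},
         "sun": {"umbrella": 0.2, "none": 0.8}}
print(push_forward(P_X, kappa))   # roughly {'umbrella': 0.41, 'none': 0.59}
```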
Properties
Semidirect product
Let (X, \mathcal{A}, P) be a probability space and \kappa a Markov kernel from (X, \mathcal{A}) to some (Y, \mathcal{B}). Then there exists a unique measure Q on (X \times Y, \mathcal{A} \otimes \mathcal{B}) such that:
Q(A \times B) = \int_A \kappa(B|x)\,P(dx), \qquad \forall A \in \mathcal{A}, \; \forall B \in \mathcal{B}.
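A finite-space illustration (the representation and names are ours): Q places mass P({x}) \kappa(\{y\}|x) on each pair (x, y).

```python
# Illustrative sketch: the measure Q on the product space, defined on rectangles
# by Q(A x B) = sum over x in A of kappa(B|x) * P({x})   (finite case).
def semidirect_product(P, kappa):
    """Joint measure Q({(x, y)}) = P({x}) * kappa({y}|x), returned as a dict on pairs."""
    return {(x, y): px * p
            for x, px in P.items()
            for y, p in kappa[x].items()}

P = {0: 0.5, 1: 0.5}
kappa = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
Q = semidirect_product(P, kappa)
assert abs(sum(Q.values()) - 1.0) < 1e-12       # Q is a probability measure
assert abs(Q[(0, 1)] - 0.1) < 1e-12             # Q({0} x {1}) = P({0}) * kappa({1}|0)
```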
Regular conditional distribution
Let (S, \mathcal{B}(S)) be a Borel space, X an (S, \mathcal{B}(S))-valued random variable on the measure space (\Omega, \mathcal{F}, P), and \mathcal{G} \subset \mathcal{F} a sub-\sigma-algebra. Then there exists a Markov kernel \kappa from (\Omega, \mathcal{G}) to (S, \mathcal{B}(S)) such that \kappa(\cdot, B) is a version of the conditional expectation \mathbb{E}[\mathbf{1}_{\{X \in B\}} \mid \mathcal{G}] for every B \in \mathcal{B}(S), i.e.
P(X \in B \mid \mathcal{G}) = \mathbb{E}\left[\mathbf{1}_{\{X \in B\}} \mid \mathcal{G}\right] = \kappa(\cdot, B) \qquad P\text{-almost surely, for all } B \in \mathcal{B}(S).
It is called the regular conditional distribution of X given \mathcal{G} and is not uniquely defined.
Generalizations
Transition kernels generalize Markov kernels in the sense that for all x \in X, the map B \mapsto \kappa(B|x) can be any type of (non-negative) measure, not necessarily a probability measure.
Notes and References
- Reiss, R. D. (1993). A Course on Point Processes. Springer Series in Statistics. Springer. doi:10.1007/978-1-4613-9308-5. ISBN 978-1-4613-9310-8.
- Klenke, Achim (2014). Probability Theory: A Comprehensive Course. Universitext (2nd ed.). Springer. p. 180. doi:10.1007/978-1-4471-5361-0. ISBN 978-1-4471-5360-3.
- Cinlar, Erhan (2011). Probability and Stochastics. New York: Springer. pp. 37–38. ISBN 978-0-387-87858-4.
- Lawvere, F. W. (1962). "The Category of Probabilistic Mappings".