In information theory, the typical set is a set of sequences whose probability is close to 2^{-nH(X)}, where n is the sequence length and H(X) is the entropy of the source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP), which is a kind of law of large numbers. The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself.
This has great use in compression theory as it provides a theoretical means for compressing data, allowing us to represent any sequence X^n using nH(X) bits on average, and hence justifying the use of entropy as a measure of information from a source.
The AEP can also be proven for a large class of stationary ergodic processes, allowing the typical set to be defined in more general cases.
If a sequence x1, ..., xn is drawn from an independent and identically distributed (i.i.d.) random variable X defined over a finite alphabet \mathcal{X}, then the typical set A_\varepsilon^{(n)} \subset \mathcal{X}^n is defined as the set of sequences (x_1, x_2, \ldots, x_n) \in \mathcal{X}^n which satisfy

2^{-n(H(X)+\varepsilon)} \leqslant p(x_1, x_2, \ldots, x_n) \leqslant 2^{-n(H(X)-\varepsilon)},

where

H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)

is the information entropy of X. The probability above need only be within a factor of 2^{n\varepsilon}. Taking the logarithm on all sides and dividing by -n, this definition can be equivalently stated as
H(X) - \varepsilon \leq -\frac{1}{n} \log_2 p(x_1, x_2, \ldots, x_n) \leq H(X) + \varepsilon.
For an i.i.d. sequence, since

p(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} p(x_i),
we further have
H(X) - \varepsilon \leq -\frac{1}{n} \sum_{i=1}^{n} \log_2 p(x_i) \leq H(X) + \varepsilon.
By the law of large numbers, for sufficiently large n
-\frac{1}{n} \sum_{i=1}^{n} \log_2 p(x_i) \to H(X).
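This convergence is easy to check numerically. The following Python sketch (an illustration with an arbitrarily chosen source distribution, not taken from the text above) draws i.i.d. samples and compares the empirical value of -(1/n) \sum_i \log_2 p(x_i) with H(X):

import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.25, 0.125, 0.125])      # assumed source distribution p(x)
H = -np.sum(probs * np.log2(probs))              # entropy H(X) = 1.75 bits

for n in (10, 100, 1000, 10000):
    x = rng.choice(len(probs), size=n, p=probs)  # i.i.d. sample x_1, ..., x_n
    avg = -np.mean(np.log2(probs[x]))            # -(1/n) * sum_i log2 p(x_i)
    print(n, round(avg, 3), "vs H(X) =", round(H, 3))

As n grows, the printed averages concentrate around H(X), which is exactly the statement of the AEP.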
An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X, the resulting sequence (x1, x2, ..., xn) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any
\varepsilon > 0 and n sufficiently large, the typical set satisfies

\Pr\left[ x^{(n)} \in A_\varepsilon^{(n)} \right] \geq 1 - \varepsilon

\left| A_\varepsilon^{(n)} \right| \leqslant 2^{n(H(X)+\varepsilon)}

\left| A_\varepsilon^{(n)} \right| \geqslant (1-\varepsilon)\, 2^{n(H(X)-\varepsilon)}
Most sequences are not typical. If the distribution over \mathcal{X} is not uniform, then the fraction of sequences that are typical is

\frac{\left| A_\varepsilon^{(n)} \right|}{\left| \mathcal{X}^{(n)} \right|} \equiv \frac{2^{nH(X)}}{2^{n\log_2|\mathcal{X}|}} = 2^{-n(\log_2|\mathcal{X}| - H(X))} \longrightarrow 0

as n becomes very large, since H(X) < \log_2|\mathcal{X}|, where |\mathcal{X}| denotes the cardinality of \mathcal{X}.
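These size and probability statements can be checked by direct computation for a small source. The sketch below (an illustrative check with an arbitrarily chosen Bernoulli parameter and ε, not part of the text above) groups binary sequences by their number of 1s, since all sequences with the same count share one probability, and compares the resulting typical-set size and probability with the bounds:

import math

p1 = 0.3                            # assumed P(X = 1); P(X = 0) = 1 - p1
H = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
eps = 0.05

for n in (20, 100, 500):
    size, prob = 0, 0.0
    for m in range(n + 1):          # every length-n sequence with m ones has the same probability
        per_symbol = -(m * math.log2(p1) + (n - m) * math.log2(1 - p1)) / n
        if H - eps <= per_symbol <= H + eps:
            size += math.comb(n, m)
            prob += math.comb(n, m) * p1**m * (1 - p1)**(n - m)
    in_bounds = (1 - eps) * 2**(n * (H - eps)) <= size <= 2**(n * (H + eps))
    print(n, round(prob, 3), in_bounds, size / 2**n)

For small n the probability of the typical set and the lower size bound may still fall short (both statements hold only for sufficiently large n), while the last column shows the typical fraction of all 2^n sequences shrinking toward zero.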
For a general stochastic process with AEP, the (weakly) typical set can be defined similarly, with p(x1, x2, ..., xn) replaced by p(x_0^\tau) (i.e. the probability of the sample restricted to the time interval [0, τ]), n being the number of degrees of freedom of the process in that time interval, and H(X) being the entropy rate. If the process is continuous-valued, differential entropy is used instead.
Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that X is an i.i.d. Bernoulli random variable with p(0) = 0.1 and p(1) = 0.9. In n independent trials, since p(1) > p(0), the most likely sequence of outcomes is the all-1s sequence, (1, 1, ..., 1). Here the entropy of X is H(X) = 0.469, while
-\frac{1}{n} \log_2 p\left( x^{(n)} = (1, 1, \ldots, 1) \right) = -\frac{1}{n} \log_2 (0.9^n) = 0.152.
So this sequence is not in the typical set because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable X no matter how large we take the value of n.
For Bernoulli random variables, the typical set consists of sequences in which the numbers of 0s and 1s are close to their average counts in n independent trials. This is easily demonstrated: if p(1) = p and p(0) = 1 - p, then for n trials with m 1s, we have
-\frac{1}{n} \log_2 p\left( x^{(n)} \right) = -\frac{1}{n} \log_2 \left( p^m (1-p)^{n-m} \right) = -\frac{m}{n} \log_2 p - \left( \frac{n-m}{n} \right) \log_2 (1-p).
The average number of 1's in a sequence of Bernoulli trials is m = np. Thus, we have
-\frac{1}{n} \log_2 p\left( x^{(n)} \right) = -p \log_2 p - (1-p) \log_2 (1-p) = H(X).
For this example, if n = 10, then the typical set (for small enough ε) consists of all sequences that have a single 0 in the entire sequence. In case p(0) = p(1) = 0.5, every possible binary sequence belongs to the typical set.
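The numbers in this example are easy to reproduce. The following sketch (an illustrative check, not part of the text above; the value of ε is an arbitrary choice) computes H(X) for p(1) = 0.9, the per-symbol log-probability of the all-1s sequence, and lists which counts of 0s give typical sequences at n = 10:

import math

p1 = 0.9                                   # p(1) = 0.9, p(0) = 0.1 as in the example
H = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
print(round(H, 3))                         # 0.469 bits

n, eps = 10, 0.05
print(round(-math.log2(p1 ** n) / n, 3))   # all-1s sequence: 0.152, far below H(X)

for zeros in range(n + 1):
    per_symbol = -((n - zeros) * math.log2(p1) + zeros * math.log2(1 - p1)) / n
    if H - eps <= per_symbol <= H + eps:
        print("typical with", zeros, "zeros")   # only zeros == 1 qualifies here

With exactly one 0 the per-symbol value is 0.9 · 0.152 + 0.1 · 3.322 ≈ 0.469, which equals H(X), so such sequences remain typical no matter how small ε is chosen.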
If a sequence x1, ..., xn is drawn from some specified joint distribution defined over a finite or an infinite alphabet \mathcal{X}, then the strongly typical set A_{\varepsilon,\text{strong}}^{(n)} \subset \mathcal{X}^n is defined as the set of sequences which satisfy, for every x_i \in \mathcal{X},

\left| \frac{N(x_i)}{n} - p(x_i) \right| < \frac{\varepsilon}{\|\mathcal{X}\|},

where N(x_i) is the number of occurrences of the symbol x_i in the sequence.
It can be shown that strongly typical sequences are also weakly typical (with a different constant ε), and hence the name. The two forms, however, are not equivalent. Strong typicality is often easier to work with in proving theorems for memoryless channels. However, as is apparent from the definition, this form of typicality is only defined for random variables having finite support.
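As a rough illustration of the two definitions side by side, the sketch below (an assumed three-symbol source, not taken from the text above) tests a sampled sequence for weak typicality, which constrains only the per-symbol log-probability, and for strong typicality, which constrains every empirical symbol frequency:

import numpy as np

rng = np.random.default_rng(1)
probs = np.array([0.6, 0.3, 0.1])        # assumed distribution over a 3-symbol alphabet
H = -np.sum(probs * np.log2(probs))
n, eps = 2000, 0.05

x = rng.choice(len(probs), size=n, p=probs)

# Weak typicality: -(1/n) sum_i log2 p(x_i) lies within eps of H(X).
weak = abs(-np.mean(np.log2(probs[x])) - H) <= eps

# Strong typicality: every empirical frequency N(a)/n lies within eps/|X| of p(a).
freqs = np.bincount(x, minlength=len(probs)) / n
strong = np.all(np.abs(freqs - probs) < eps / len(probs))

print(weak, strong)

For long sequences both tests usually succeed, but a sequence can pass the weak test while failing the strong one, which is why the two notions are not equivalent.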
Two sequences x^n and y^n are jointly ε-typical if the pair (x^n, y^n) is ε-typical with respect to the joint distribution

p(x^n, y^n) = \prod_{i=1}^{n} p(x_i, y_i)

and both x^n and y^n are ε-typical with respect to their marginal distributions p(x^n) and p(y^n). The set of all such pairs (x^n, y^n) is denoted by A_\varepsilon^{n}(X,Y).

Let \tilde{X}^n and \tilde{Y}^n be two independent sequences of random variables with the same marginal distributions p(x^n) and p(y^n). Then for any ε > 0 and sufficiently large n, jointly typical sequences satisfy the following properties:

P\left[ (X^n, Y^n) \in A_\varepsilon^{n}(X,Y) \right] \geqslant 1 - \varepsilon

\left| A_\varepsilon^{n}(X,Y) \right| \leqslant 2^{n(H(X,Y)+\varepsilon)}

\left| A_\varepsilon^{n}(X,Y) \right| \geqslant (1-\varepsilon)\, 2^{n(H(X,Y)-\varepsilon)}

P\left[ (\tilde{X}^n, \tilde{Y}^n) \in A_\varepsilon^{n}(X,Y) \right] \leqslant 2^{-n(I(X;Y)-3\varepsilon)}

P\left[ (\tilde{X}^n, \tilde{Y}^n) \in A_\varepsilon^{n}(X,Y) \right] \geqslant (1-\varepsilon)\, 2^{-n(I(X;Y)+3\varepsilon)}
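The last two bounds say that independently drawn sequences are jointly typical with probability roughly 2^{-nI(X;Y)}, which is the fact exploited by random-coding arguments. The Monte Carlo sketch below (an assumed doubly symmetric binary source with crossover probability q; all parameters are arbitrary choices, not from the text above) estimates this probability and compares it with the two bounds:

import numpy as np

rng = np.random.default_rng(2)
q, n, eps, trials = 0.1, 20, 0.1, 200_000   # assumed joint source and test parameters

# Joint source: X ~ Bernoulli(1/2) and Y equals X flipped with probability q.
Hxy = 1.0 - ((1 - q) * np.log2(1 - q) + q * np.log2(q))   # H(X,Y) = 1 + H_b(q)
Ixy = 2.0 - Hxy                                           # I(X;Y) = H(X) + H(Y) - H(X,Y)

# Draw X^n and Y^n independently from the marginals (both uniform), so the
# marginal typicality conditions hold automatically and only the joint
# log-probability condition needs to be checked.
x = rng.integers(0, 2, size=(trials, n))
y = rng.integers(0, 2, size=(trials, n))
logp = np.where(x == y, np.log2((1 - q) / 2), np.log2(q / 2))
per_symbol = -logp.mean(axis=1)                  # -(1/n) log2 p(x^n, y^n)
hit_rate = np.mean(np.abs(per_symbol - Hxy) <= eps)

print(hit_rate)                                  # observed frequency of joint typicality
print(2 ** (-n * (Ixy - 3 * eps)))               # upper bound 2^{-n(I(X;Y) - 3 eps)}
print((1 - eps) * 2 ** (-n * (Ixy + 3 * eps)))   # lower bound (1 - eps) 2^{-n(I(X;Y) + 3 eps)}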
See also: Shannon's source coding theorem. In information theory, typical set encoding encodes only the sequences in the typical set of a stochastic source with fixed-length block codes. Since the size of the typical set is about 2^{nH(X)}, only nH(X) bits are required for the coding, while at the same time ensuring that the probability of an encoding error is limited to ε. Asymptotically, such encoding is, by the AEP, lossless and achieves the minimum rate equal to the entropy rate of the source.
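A toy version of this scheme follows directly from the definition. The sketch below (an illustrative encoder for a small Bernoulli source; the parameters are arbitrary assumptions, not from the text above) enumerates the typical set, assigns each member a fixed-length index, and maps every atypical sequence to a single reserved codeword, which is where the ε probability of error comes from:

import itertools
import math

p1, n, eps = 0.3, 12, 0.1               # assumed Bernoulli source and block length
H = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))

def neg_log_prob(seq):
    m = sum(seq)
    return -(m * math.log2(p1) + (len(seq) - m) * math.log2(1 - p1))

# Enumerate the typical set and give each member a fixed-length index.
typical = [s for s in itertools.product((0, 1), repeat=n)
           if H - eps <= neg_log_prob(s) / n <= H + eps]
index = {s: i for i, s in enumerate(typical)}
bits = math.ceil(math.log2(len(typical) + 1))   # +1 reserves codeword 0 for atypical input

def encode(seq):
    i = index.get(tuple(seq))
    return format(0 if i is None else i + 1, f"0{bits}b")

def decode(code):
    i = int(code, 2)
    return None if i == 0 else typical[i - 1]   # None signals an encoding error

print(bits, "bits per block instead of", n, "; n*H(X) =", round(n * H, 2))
print(decode(encode((0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0))))

Note that at such a short block length a large fraction of source sequences is still atypical; the ε error guarantee is an asymptotic statement, so in practice n must be much larger.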
In information theory, typical set decoding is used in conjunction with random coding to estimate the transmitted message as the one with a codeword that is jointly ε-typical with the observation, i.e.
\hat{w} = w \iff (\exists w)\left( (x_1^n(w), y_1^n) \in A_\varepsilon^{n}(X,Y) \right) \wedge (\nexists w' \ne w)\left( (x_1^n(w'), y_1^n) \in A_\varepsilon^{n}(X,Y) \right),

where x_1^n(w) and y_1^n are the message codeword of w and the channel observation respectively, and A_\varepsilon^{n}(X,Y) is defined with respect to the joint distribution p(x_1^n)\, p(y_1^n \mid x_1^n), where p(y_1^n \mid x_1^n) is the transition probability that characterizes the channel statistics.
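As a rough end-to-end illustration of random coding with joint-typicality decoding, the sketch below (an assumed binary symmetric channel with a uniform random codebook; every parameter here is an arbitrary choice, not from the text above) transmits random messages at a rate below capacity and decodes by looking for the unique jointly ε-typical codeword:

import numpy as np

rng = np.random.default_rng(3)
q, n, eps = 0.1, 200, 0.15               # BSC crossover, block length, typicality slack
k = 15                                   # 2^k codewords, i.e. rate k/n = 0.075 < capacity
Hxy = 1.0 - ((1 - q) * np.log2(1 - q) + q * np.log2(q))   # H(X,Y) for a uniform input

codebook = rng.integers(0, 2, size=(2 ** k, n), dtype=np.uint8)   # random codebook

def jointly_typical(x, y):
    # With a uniform input, p(x_i, y_i) = (1-q)/2 when x_i == y_i and q/2 otherwise;
    # the marginal conditions hold automatically, so only the joint one is checked.
    d = np.mean(x != y, axis=-1)                          # fraction of disagreements
    per_symbol = (1 - d) * np.log2(2 / (1 - q)) + d * np.log2(2 / q)
    return np.abs(per_symbol - Hxy) <= eps

errors, trials = 0, 100
for _ in range(trials):
    w = rng.integers(0, 2 ** k)                           # transmitted message index
    noise = (rng.random(n) < q).astype(np.uint8)          # channel flips
    y = codebook[w] ^ noise                               # received word
    hits = np.flatnonzero(jointly_typical(codebook, y))
    if len(hits) != 1 or hits[0] != w:                    # require a unique typical codeword
        errors += 1
print(errors / trials)                                    # empirical block error rate

With the rate below the channel capacity 1 - H_b(q) ≈ 0.53, the observed error rate is small and, by the channel coding theorem, can be driven toward zero by increasing n.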
See also: algorithmic complexity theory.