Topological entropy

In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy.

Definition

A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f : X → X. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent.

Definition of Adler, Konheim, and McAndrew

Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X.[1] For two covers C and D, let

C \vee D

be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers.

For any continuous map f : X → X, the following limit exists (the underlying sequence is subadditive, so Fekete's subadditive lemma guarantees convergence):

H(f, C) = \lim_{n \to \infty} \frac{1}{n} H(C \vee f^{-1}C \vee \cdots \vee f^{-n+1}C).

Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f,C) over all possible finite covers C of X.

Interpretation

The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ C_i are assigned the symbol C_i. Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement.

H(C \vee f^{-1}C \vee \cdots \vee f^{-n+1}C)

then represents the logarithm of the minimal number of "words" of length n needed to encode the points of X according to the behavior of their first n - 1 iterates under f, or, put differently, the total number of "scenarios" of the behavior of these iterates, as "seen" by the partition C. Thus the topological entropy is the average (per iteration) amount of information needed to describe long iterations of the map f.
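This counting can be made concrete. The sketch below is an illustration, not part of the definition: it uses the doubling map x ↦ 2x mod 1 on [0, 1) with the two-part partition C = {[0, 1/2), [1/2, 1)} (a partition rather than an open cover, for simplicity), enumerates the distinct length-n "words" seen through C, and checks that their number grows like 2^n, giving entropy log 2.

```python
import math

def itinerary(x, n):
    """Length-n word of x under the doubling map, read off the
    partition C = {[0, 1/2), [1/2, 1)} (symbols 0 and 1)."""
    word = []
    for _ in range(n):
        word.append(0 if x < 0.5 else 1)
        x = (2.0 * x) % 1.0  # one application of the doubling map
    return tuple(word)

n = 12
sample = [i / 100000 for i in range(100000)]  # fine grid on [0, 1)
words = {itinerary(x, n) for x in sample}     # distinct length-n "words"
rate = math.log(len(words)) / n               # (1/n) log(#words)
print(len(words), rate)  # 4096 words = 2**12, so rate = log 2 ≈ 0.6931
```

Because the grid is much finer than the dyadic intervals of length 2^{-12}, every binary word of length 12 occurs, so the count is exactly 2^12 and the per-iteration information rate is log 2, the known entropy of the doubling map.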

Definition of Bowen and Dinaburg

This definition [2] [3] [4] uses a metric on X (actually, a uniform structure would suffice). It is a narrower definition than that of Adler, Konheim, and McAndrew,[5] as it requires an additional metric structure on the topological space (but is independent of the choice of metric generating the given topology). In practice, however, the Bowen–Dinaburg topological entropy is usually much easier to calculate.

Let (X, d) be a compact metric space and f: X → X be a continuous map. For each natural number n, a new metric dn is defined on X by the formula

d_n(x, y) = \max\{ d(f^{i}(x), f^{i}(y)) : 0 \le i < n \}.

Given any ε > 0 and n ≥ 1, two points of X are ε-close with respect to this metric if their first n iterates are ε-close. This metric allows one to distinguish in a neighborhood of an orbit the points that move away from each other during the iteration from the points that travel together. A subset E of X is said to be (n, ε)-separated if each pair of distinct points of E is at least ε apart in the metric dn. Denote by N(n, ε) the maximum cardinality of an (n, ε)-separated set. The topological entropy of the map f is defined by

h(f) = \lim_{\epsilon \to 0} \left( \limsup_{n \to \infty} \frac{1}{n} \log N(n, \epsilon) \right).

Interpretation

Since X is compact, N(n, ε) is finite and represents the number of distinguishable orbit segments of length n, assuming that we cannot distinguish points within ε of one another. A straightforward argument shows that the limit defining h(f) always exists in the extended real line (but could be infinite). This limit may be interpreted as the measure of the average exponential growth of the number of distinguishable orbit segments. In this sense, it measures complexity of the topological dynamical system (X, f). Rufus Bowen extended this definition of topological entropy in a way which permits X to be non-compact under the assumption that the map f is uniformly continuous.
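For a concrete map, N(n, ε) can be estimated numerically. The sketch below is illustrative: the doubling map x ↦ 2x mod 1 on the circle, the grid size, and ε are all assumed choices. It builds (n, ε)-separated sets greedily from a finite grid and estimates the exponential growth rate as log N(n+1, ε) − log N(n, ε), which should come out near log 2 for the doubling map.

```python
import math

def circle_dist(x, y):
    """Metric on the circle R/Z."""
    d = abs(x - y)
    return min(d, 1.0 - d)

def d_n(f, x, y, n):
    """Bowen metric d_n(x, y) = max over 0 <= i < n of d(f^i(x), f^i(y))."""
    best = 0.0
    for _ in range(n):
        best = max(best, circle_dist(x, y))
        x, y = f(x), f(y)
    return best

def separated_count(f, points, n, eps):
    """Size of a greedily built (n, eps)-separated set.
    (Maximal, not necessarily maximum -- enough for a rough estimate.)"""
    kept = []
    for p in points:
        if all(d_n(f, p, q, n) >= eps for q in reversed(kept)):
            kept.append(p)
    return len(kept)

doubling = lambda x: (2.0 * x) % 1.0
grid = [i / 5000 for i in range(5000)]  # finite sample of the circle
eps = 0.2
growth = math.log(separated_count(doubling, grid, 7, eps)) \
       - math.log(separated_count(doubling, grid, 6, eps))
print(growth)  # ≈ log 2 ≈ 0.693 for the doubling map
```

Taking the difference of consecutive log-counts cancels the ε-dependent constant in N(n, ε) ≈ C(ε)·2^n, so the estimate converges much faster than (1/n) log N(n, ε) itself.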

Properties

Let f be an expansive homeomorphism of a compact metric space X and let C be a topological generator. Then the topological entropy of f relative to C is equal to the topological entropy of f, i.e.

h(f) = H(f, C).

Let f : X → X be a continuous transformation of a compact metric space X, let h_\mu(f) be the measure-theoretic entropy of f with respect to \mu, and let M(X, f) be the set of all f-invariant Borel probability measures on X. Then the variational principle for entropy[6] states that

h(f) = \sup_{\mu \in M(X,f)} h_\mu(f).

In general the supremum of the entropies h_\mu(f) over the set M(X, f) is not attained, but if additionally the entropy map

\mu \mapsto h_\mu(f) : M(X, f) \to \mathbb{R}

is upper semicontinuous, then a measure of maximal entropy (meaning a measure \mu in M(X, f) with h_\mu(f) = h(f)) exists.

If f has a unique measure of maximal entropy \mu, then f is ergodic with respect to \mu.

Examples

Let \sigma : \Sigma_k \to \Sigma_k, given by x_n \mapsto x_{n-1}, denote the full two-sided k-shift on the symbols \{1, ..., k\}. Let C = \{[1], ..., [k]\} denote the partition of \Sigma_k into cylinders of length 1. Then

\bigvee_{j=0}^{n-1} \sigma^{-j}(C)

is a partition of \Sigma_k for every n \in \mathbb{N}, and it has k^n elements. The partitions are open covers and C is a topological generator. Hence

h(\sigma) = H(\sigma, C) = \lim_{n \to \infty} \frac{1}{n} \log k^n = \log k.

The measure-theoretic entropy of the Bernoulli (1/k, ..., 1/k)-measure is also \log k. Hence it is a measure of maximal entropy. It can further be shown that no other measures of maximal entropy exist.

Let A be an irreducible k × k matrix with entries in \{0, 1\} and let \sigma : \Sigma_A \to \Sigma_A be the corresponding subshift of finite type. Then

h(\sigma) = \log \lambda,

where \lambda is the largest positive eigenvalue of A.
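The last formula is easy to evaluate numerically. As a sketch (the golden-mean shift below is an assumed example, not taken from the text), the largest positive eigenvalue of a 0–1 transition matrix can be found by power iteration, which Perron–Frobenius theory justifies for irreducible nonnegative matrices:

```python
import math

def largest_eigenvalue(A, iters=200):
    """Perron eigenvalue of a nonnegative irreducible matrix via power
    iteration; iterates are normalized so that max(v) == 1."""
    k = len(A)
    v = [1.0] * k
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(k)) for i in range(k)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Golden-mean shift: symbol 2 may not follow symbol 2 (word "22" forbidden).
A = [[1, 1],
     [1, 0]]
h = math.log(largest_eigenvalue(A))
print(h)  # log((1 + sqrt(5))/2) ≈ 0.4812
```

Here the Perron eigenvalue is the golden ratio, so the entropy of the golden-mean shift is log((1 + √5)/2).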

Notes

  1. Since X is compact, H(C) is always finite, even for an infinite cover C. The use of arbitrary covers yields the same value of entropy.
  2. Bowen, Rufus (1971). "Entropy for Group Endomorphisms and Homogeneous Spaces". Transactions of the American Mathematical Society 153: 401–414. doi:10.1090/S0002-9947-1971-0274707-X.
  3. Bowen, Rufus (1971). "Periodic Points and Measures for Axiom A Diffeomorphisms". Transactions of the American Mathematical Society 154: 377–397. doi:10.2307/1995452.
  4. Dinaburg, Efim (1970). "Relationship Between Topological Entropy and Metric Entropy". Doklady Akademii Nauk SSSR 170: 19.
  5. Adler, R. L.; Konheim, A. G.; McAndrew, M. H. (1965). "Topological Entropy". Transactions of the American Mathematical Society 114 (2): 309. doi:10.1090/S0002-9947-1965-0175106-9.
  6. Goodman, T. N. T. (1971). "Relating Topological Entropy and Measure Entropy". Bulletin of the London Mathematical Society 3 (2): 176–180. doi:10.1112/blms/3.2.176.
