In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated.
Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics.
The formal definition of an iterated function on a set X follows.
Let X be a set and f : X → X be a function.
Define f^n as the n-th iterate of f, where n is a non-negative integer, by f^0 = id_X and f^{n+1} = f ∘ f^n,
where id_X is the identity function on X and f ∘ g denotes function composition. This notation has been traced to Hans Heinrich Bürmann and John Frederick William Herschel in 1813. Herschel credited Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered.[1]
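As an informal illustration of the definition (not part of the cited sources), the n-th iterate can be computed by simply applying f repeatedly; in the following minimal Python sketch the helper name iterate is purely illustrative:

```python
def iterate(f, n):
    """Return the n-th iterate of f, i.e. f composed with itself n times.
    iterate(f, 0) is the identity function."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

# Example: iterating the doubling map
double = lambda x: 2 * x
print(iterate(double, 5)(3))   # 3 * 2**5 = 96
print(iterate(double, 0)(3))   # identity: 3
```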
Because the notation f^n may refer to both iteration (composition) of the function f and exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing f^{∘n}(x) for the n-th iterate of the function f(x), as in, for example, f^{∘3}(x) meaning f(f(f(x))). For the same purpose, f^{[n]}(x) was used by Benjamin Peirce[2] whereas Alfred Pringsheim and Jules Molk suggested {}^n f(x) instead.
In general, the following identity holds for all non-negative integers m and n,

f^m \circ f^n = f^n \circ f^m = f^{m+n}~.
This is structurally identical to the property of exponentiation that a^m a^n = a^{m+n}.
In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, T_m(T_n(x)) = T_{mn}(x), since T_n(x) = cos(n arccos x).
The relation (f^m)^n(x) = (f^n)^m(x) = f^{mn}(x) also holds, analogous to the property of exponentiation that (a^m)^n = (a^n)^m = a^{mn}.
The sequence of functions is called a Picard sequence,[3] [4] named after Charles Émile Picard.
For a given x in X, the sequence of values f^n(x) is called the orbit of x.
If f^{n+m}(x) = f^n(x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit.
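For a function on a finite set, the period and the index of the first periodic point can be found without storing the whole orbit. A minimal sketch using Floyd's tortoise-and-hare cycle detection (a standard algorithm, chosen here for illustration; the example map is arbitrary):

```python
def find_cycle(f, x0):
    """Floyd's cycle detection: return (mu, lam), where mu is the index of the
    first periodic point in the orbit of x0 and lam is the period."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                 # advance at speeds 1 and 2 until they meet
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0                    # locate the first periodic point
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)              # measure the period
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# Example: the orbit of 3 under x -> x^2 + 1 (mod 255)
print(find_cycle(lambda x: (x * x + 1) % 255, 3))
```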
If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem.
There are several techniques for convergence acceleration of the sequences produced by fixed point iteration.[5] For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence.
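A minimal sketch of this idea (Aitken's Δ² extrapolation applied to a fixed-point iteration, i.e. Steffensen's method; the tolerance and the example map cos are illustrative choices, not from the cited sources):

```python
import math

def steffensen(f, x, steps=10, tol=1e-12):
    """Accelerate the fixed-point iteration x -> f(x) with Aitken's delta-squared."""
    for _ in range(steps):
        x1 = f(x)
        x2 = f(x1)
        denom = x2 - 2 * x1 + x
        if abs(denom) < tol:                 # already (numerically) at the fixed point
            return x2
        x = x - (x1 - x) ** 2 / denom        # Aitken extrapolation step
    return x

# Fixed point of cos: plain iteration converges linearly, Steffensen quadratically.
print(steffensen(math.cos, 1.0))             # ~0.7390851332151607
```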
Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point.[6]
When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set.
The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions.
Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started.
If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states.
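As an informal numerical illustration (the choice of map is ours, not from the sources), the invariant density of the chaotic logistic map f(x) = 4x(1 − x) can be approximated by histogramming a long orbit; it approaches the known density 1/(π√(x(1 − x))):

```python
import math, random

f = lambda x: 4 * x * (1 - x)            # chaotic logistic map on [0, 1]
x = random.random()
bins, N = [0] * 20, 200_000
for _ in range(N):
    x = f(x)
    bins[min(int(x * 20), 19)] += 1

# Compare the empirical density with the invariant density 1/(pi*sqrt(x(1-x))).
for i in range(0, 20, 5):
    mid = (i + 0.5) / 20
    exact = 1 / (math.pi * math.sqrt(mid * (1 - mid)))
    print(f"x={mid:.3f}  empirical {bins[i] * 20 / N:.2f}  exact {exact:.2f}")
```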
In general, because repeated iteration corresponds to a shift, the transfer operator and its adjoint, the Koopman operator, can both be interpreted as shift operators acting on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos.
The notion f^{1/n} must be used with care when the equation g^n(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f^{1/2}(x) does not denote a unique function, just as numbers have multiple algebraic roots. A trivial root of f can always be obtained if the domain of f can be extended sufficiently. The roots chosen are normally the ones belonging to the orbit under study.
Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x).[7] This function g(x) can be written using the index notation as f^{1/2}(x). Similarly, f^{1/3}(x) is the function defined such that f^{1/3}(f^{1/3}(f^{1/3}(x))) = f(x), while f^{2/3}(x) may be defined as equal to f^{1/3}(f^{1/3}(x)), and so forth, all based on the principle, mentioned earlier, that f^m ∘ f^n = f^{m+n}. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit.[8] [9]
In such cases, one refers to the system as a flow (cf. section on conjugacy below.)
If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f^{−1}(x) is the normal inverse of f, while f^{−2}(x) is the inverse composed with itself, i.e. f^{−2}(x) = f^{−1}(f^{−1}(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f^{−1/2}(x) is defined such that f^{−1/2}(f^{−1/2}(x)) = f^{−1}(x), or, equivalently, such that f^{−1/2}(f^{1/2}(x)) = f^0(x) = x.
One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows.[10]
First determine a fixed point a of the function, such that f(a) = a, and expand f^n(x) as a Taylor series around this fixed point,

f^n(x) = f^n(a) + (x-a)\left.\frac{d}{dx}f^n(x)\right|_{x=a} + \frac{(x-a)^2}{2!}\left.\frac{d^2}{dx^2}f^n(x)\right|_{x=a} +\cdots

Expanding out the derivatives by the chain rule,

f^n(x) = f^n(a) + (x-a)\, f'(a)f'(f(a))f'(f^2(a))\cdots f'(f^{n-1}(a)) + \cdots

and substituting f^k(a) = a for every k,

f^n(x) = a + (x-a)\, f'(a)^n + \frac{(x-a)^2}{2!}\left(f''(a)\,f'(a)^{n-1}\right)\left(1+f'(a)+\cdots+f'(a)^{n-1} \right)+\cdots

The geometric progression simplifies the terms,

f^n(x) = a + (x-a)\, f'(a)^n + \frac{(x-a)^2}{2!}\left(f''(a)\,f'(a)^{n-1}\right)\frac{f'(a)^n-1}{f'(a)-1}+\cdots

There is a special case when f'(a) = 1,

f^n(x) = x + \frac{(x-a)^2}{2!}\, n f''(a) + \frac{(x-a)^3}{3!}\left(\frac{3}{2}\, n(n-1)\, f''(a)^2 + n f'''(a)\right)+\cdots

This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy.
For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just

f^n(x) = \frac{D}{1-C} + \left(x - \frac{D}{1-C}\right) C^n = C^n x + \frac{1-C^n}{1-C}\, D ~,

which is trivial to check.
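Since this closed form is valid for non-integer n as well, a brief sketch (with arbitrarily chosen values of C and D) can check that the resulting half iterate composes back to the original map:

```python
C, D = 0.5, 3.0
f = lambda x: C * x + D

def f_iter(x, n):
    """n-th (possibly fractional) iterate of f(x) = C*x + D about its fixed point."""
    a = D / (1 - C)                      # fixed point a = D/(1 - C)
    return a + (x - a) * C ** n

x = 2.0
half = lambda t: f_iter(t, 0.5)
print(half(half(x)), f(x))               # half iterate applied twice reproduces f
print(f_iter(x, 3), f(f(f(x))))          # integer iterates agree with direct composition
```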
For example, find the value of \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}} where the exponentiation is carried out n times (and, possibly, the interpolated values when n is not an integer). Here the function is f(x) = \sqrt{2}^x = 2^{x/2}, with fixed point a = f(2) = 2.

So set x = 1; f^n(1) expanded around the fixed point value of 2 is then an infinite series,

\sqrt{2}^{\sqrt{2}^{\cdots}} = f^n(1) = 2 - (\ln 2)^n + \frac{(\ln 2)^{n+1}\left((\ln 2)^n - 1\right)}{4\,(\ln 2 - 1)} + \cdots

which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f^n(1) = {}^{n}\sqrt{2}. Using the other fixed point a = f(4) = 4 causes the series to diverge.
For n = −1, the series computes the inverse function, 2 ln x / ln 2.
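A short numerical check (illustrative only) compares direct iteration of f(x) = 2^{x/2} starting from 1 with the three-term truncation of the series above, for positive integer n:

```python
import math

f = lambda x: 2 ** (x / 2)               # f(x) = sqrt(2)^x, fixed point a = 2
L = math.log(2)

def series(n):
    """First three terms of the expansion of f^n(1) about the fixed point 2."""
    return 2 - L ** n + L ** (n + 1) * (L ** n - 1) / (4 * (L - 1))

x = 1.0
for n in range(1, 6):
    x = f(x)                              # direct n-fold iteration starting from 1
    print(n, round(x, 4), round(series(n), 4))
```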
With the function f(x) = x^b, expand around the fixed point 1 to get the series

f^n(x) = 1 + b^n (x-1) + \frac{1}{2}\, b^n (b^n - 1)(x-1)^2 + \cdots ~,

which is simply the Taylor series of x^{(b^n)} expanded around 1.
If f and g are two iterated functions, and there exists a homeomorphism h such that g = h^{−1} ∘ f ∘ h, then f and g are said to be topologically conjugate.
Clearly, topological conjugacy is preserved under iteration, as g^n = h^{−1} ∘ f^n ∘ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h^{−1}(h(x) + 1) as

g^n(x) = h^{−1}(h(x) + n) ~, for any function h.

Making the substitution x = h^{−1}(y) = φ(y) yields

g(φ(y)) = φ(y + 1) ~, a form known as the Abel equation.
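A small numerical check of the tent–logistic conjugacy (the standard tent map T and logistic map F on [0, 1], with the usual conjugating homeomorphism h(x) = sin²(πx/2); a sketch, not taken from the cited sources):

```python
import math

tent     = lambda x: 2 * x if x < 0.5 else 2 * (1 - x)
logistic = lambda x: 4 * x * (1 - x)
h        = lambda x: math.sin(math.pi * x / 2) ** 2    # conjugating homeomorphism

# Conjugacy F∘h = h∘T implies F^n∘h = h∘T^n: the two orbits track each other exactly.
x = 0.3
u, v = x, h(x)
for n in range(5):
    u, v = tent(u), logistic(v)
    print(n + 1, round(h(u), 6), round(v, 6))          # the two columns agree
```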
Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve[11] Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f'(0) x, that is

f(x) = Ψ^{−1}(f'(0)\, Ψ(x)) ~.
Thus, its iteration orbit, or flow, under suitable provisions (e.g., f'(0) ≠ 1), amounts to the conjugate of the orbit of the monomial,

Ψ^{−1}(f'(0)^n\, Ψ(x)) ~,

where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer needs to be integer or positive, and is a continuous "time" of evolution for the full orbit:[12] the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group.[13]
This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic.
If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain.
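A minimal sketch (a made-up two-state row-stochastic matrix, for illustration only): iterating the linear map corresponds to taking matrix powers, and the distribution converges to the stationary one.

```python
# Row-stochastic transition matrix of a 2-state Markov chain (illustrative values).
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(dist, P):
    """One iteration: multiply the row vector dist by the transition matrix P."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P[0]))]

dist = [1.0, 0.0]
for _ in range(50):
    dist = step(dist, P)
print(dist)   # approaches the stationary distribution [0.8, 0.2]
```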
There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems.
Ernst Schröder,[14] in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = arcsin²(√x), hence f^n(x) = sin²(2^n arcsin(√x)).
A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −½ ln(1 − 2x), and hence f^n(x) = −½[(1 − 2x)^{2^n} − 1].
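A quick numerical check of these two closed forms against direct composition (a sketch; the starting value and iteration count are arbitrary):

```python
import math

def direct(f, x, n):
    """Compute f^n(x) by direct repeated application."""
    for _ in range(n):
        x = f(x)
    return x

x, n = 0.3, 5
# Chaotic case f(x) = 4x(1-x):  f^n(x) = sin^2(2^n arcsin sqrt(x))
print(direct(lambda t: 4 * t * (1 - t), x, n),
      math.sin(2 ** n * math.asin(math.sqrt(x))) ** 2)
# Nonchaotic case f(x) = 2x(1-x):  f^n(x) = (1 - (1 - 2x)^(2^n)) / 2
print(direct(lambda t: 2 * t * (1 - t), x, n),
      (1 - (1 - 2 * x) ** (2 ** n)) / 2)
```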
If f is the action of a group element on a set, then the iterated function corresponds to a free group.
Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some[14] that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.
f(x) | f^n(x)
---|---
x + b | x + nb
ax + b (a \ne 1) | a^n x + \frac{a^n - 1}{a - 1}\, b
a x^b (b \ne 1) | a^{\frac{b^n - 1}{b - 1}}\, x^{b^n}
ax^2 + bx + \frac{b^2 - 2b}{4a} | \frac{\alpha^{2^n} - \frac{b}{2}}{a}, where \alpha = ax + \frac{b}{2}
ax^2 + bx + \frac{b^2 - 2b - 8}{4a} | \frac{\alpha^{2^n} + \beta^{2^n} - \frac{b}{2}}{a}, where \alpha + \beta = ax + \frac{b}{2} and \alpha\beta = 1
g^{-1}\left(h(g(x))\right) | g^{-1}\left(h^n(g(x))\right)
g^{-1}\left(g(x) + b\right) | g^{-1}\left(g(x) + nb\right)
\sqrt{x^2 + b} | \sqrt{x^2 + nb}
g^{-1}\left(a\, g(x) + b\right) (a \ne 1 \vee b = 0) | g^{-1}\left(a^n g(x) + \frac{a^n - 1}{a - 1}\, b\right)
\sqrt{a x^2 + b} | \sqrt{a^n x^2 + \frac{a^n - 1}{a - 1}\, b}
T_m(x) = \cos(m \arccos x) | T_{m^n}(x) = \cos(m^n \arccos x)
Some of these examples are related among themselves by simple conjugacies.
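The closed forms in the table can be spot-checked against direct composition; a brief sketch for two of the rows (parameter values are arbitrary):

```python
def direct(f, x, n):
    """Compute f^n(x) by direct repeated application."""
    for _ in range(n):
        x = f(x)
    return x

a, b, x, n = 1.7, 0.4, 2.0, 6

# Row ax + b (a != 1):  f^n(x) = a^n x + (a^n - 1)/(a - 1) * b
print(direct(lambda t: a * t + b, x, n),
      a ** n * x + (a ** n - 1) / (a - 1) * b)

# Row sqrt(x^2 + b):  f^n(x) = sqrt(x^2 + n b)
print(direct(lambda t: (t * t + b) ** 0.5, x, n),
      (x * x + n * b) ** 0.5)
```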
Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators.
In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs.
Two important functionals can be defined in terms of iterated functions. These are summation:
\left\{b+1,\ \sum_{i=a}^{b} g(i)\right\} \equiv \left(\{i, x\} \rightarrow \{i+1,\ x + g(i)\}\right)^{b-a+1} \{a, 0\}
and the equivalent product:
\left\{b+1,\ \prod_{i=a}^{b} g(i)\right\} \equiv \left(\{i, x\} \rightarrow \{i+1,\ x\, g(i)\}\right)^{b-a+1} \{a, 1\}
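These definitions translate directly into code: the pair {i, x} is the state, and the state-update map is applied b − a + 1 times. A Python sketch (function names are illustrative):

```python
def iterate(step, n, state):
    """Apply the state-update map 'step' n times to 'state'."""
    for _ in range(n):
        state = step(state)
    return state

def summation(g, a, b):
    # state (i, x) -> (i+1, x + g(i)), started at (a, 0), applied b-a+1 times
    _, total = iterate(lambda s: (s[0] + 1, s[1] + g(s[0])), b - a + 1, (a, 0))
    return total

def product(g, a, b):
    # state (i, x) -> (i+1, x * g(i)), started at (a, 1), applied b-a+1 times
    _, total = iterate(lambda s: (s[0] + 1, s[1] * g(s[0])), b - a + 1, (a, 1))
    return total

print(summation(lambda i: i, 1, 100))   # 5050
print(product(lambda i: i, 1, 5))       # 120
```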
The functional derivative of an iterated function is given by the recursive formula:
\frac{\delta f^N(x)}{\delta f(y)} = f'\left(f^{N-1}(x)\right)\, \frac{\delta f^{N-1}(x)}{\delta f(y)} + \delta\left(f^{N-1}(x) - y\right)
Iterated functions crop up in the series expansion of combined functions, such as g(f(x)).
Given the iteration velocity, or beta function (physics),
v(x) = \left. \frac{\partial f^n(x)}{\partial n} \right|_{n=0}

for the n-th iterate of the function f, then

g(f(x)) = \exp\left[ v(x)\, \frac{\partial}{\partial x} \right] g(x) ~.
Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above,
f(x) = h^{-1}(h(x) + 1) ~,

where

h(x) = \int \frac{1}{v(x)} \, dx ~.

In general,

f^n(x) = h^{-1}(h(x) + n) ~.
For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group,

e^{t\, v(x)\, \frac{\partial}{\partial x}}\, g(x) = g\left(h^{-1}(h(x) + t)\right) = g(f_t(x)) ~.

This exponential realization automatically provides the general solution to the translation functional equation,

f_t(f_\tau(x)) = f_{t+\tau}(x) ~.
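As a concrete illustration (the choice v(x) = x is ours, not from the sources): with v(x) = x one gets h(x) = ln x, so f(x) = e·x and f_t(x) = e^t x. The sketch below checks the Abel-equation construction and the translation property numerically:

```python
import math

# Choose an iteration velocity v(x) = x  =>  h(x) = ∫ dx / v(x) = ln x  (x > 0).
h     = math.log
h_inv = math.exp

def flow(x, t):
    """f_t(x) = h^{-1}(h(x) + t); for this choice of v, simply x * e^t."""
    return h_inv(h(x) + t)

f = lambda x: flow(x, 1)                       # f(x) = e * x
x = 2.0
print(flow(x, 3), f(f(f(x))))                  # continuous flow at t = 3 matches 3 compositions
print(flow(flow(x, 0.5), 1.7), flow(x, 2.2))   # f_t(f_tau(x)) = f_{t+tau}(x)
```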