In mathematics, a singular perturbation problem is a problem containing a small parameter that cannot be approximated by setting the parameter value to zero. More precisely, the solution cannot be uniformly approximated by an asymptotic expansion
$$\varphi(x) \approx \sum_{n=0}^{N} \delta_n(\varepsilon)\,\psi_n(x)$$

as $\varepsilon \to 0$. Here $\varepsilon$ is the small parameter of the problem and $\delta_n(\varepsilon)$ is a sequence of functions of $\varepsilon$ of increasing order, such as $\delta_n(\varepsilon) = \varepsilon^n$.
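As an illustration of non-uniformity, consider the boundary-layer function $e^{-x/\varepsilon}$ on $[0, 1]$ (a standard example, chosen here for concreteness): every term of its naive expansion in powers of $\varepsilon$ vanishes for fixed $x > 0$, yet the sup-norm error of that expansion never decreases. A minimal sketch in Python:

```python
import math

def sup_error(eps, n_grid=1000):
    """Sup-norm distance on [0, 1] between exp(-x/eps) and its
    pointwise limit 0 (every term of the naive expansion in powers
    of eps vanishes for fixed x > 0)."""
    return max(math.exp(-i / n_grid / eps) for i in range(n_grid + 1))

# Pointwise, the boundary-layer function dies away as eps -> 0 ...
pointwise = [math.exp(-0.5 / eps) for eps in (0.1, 0.01, 0.001)]
# ... but the sup-norm error never improves: the layer near x = 0
# always reaches height 1, so no expansion in powers of eps alone
# can be uniformly valid on the whole interval.
uniform = [sup_error(eps) for eps in (0.1, 0.01, 0.001)]
print(pointwise)  # values shrinking toward 0
print(uniform)    # -> [1.0, 1.0, 1.0]
```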
The term "singular perturbation" was coined in the 1940s by Kurt Otto Friedrichs and Wolfgang R. Wasow.
A perturbed problem whose solution can be approximated on the whole problem domain, whether space or time, by a single asymptotic expansion has a regular perturbation. Most often in applications, an acceptable approximation to a regularly perturbed problem is found by simply replacing the small parameter $\varepsilon$ with zero everywhere in the problem statement; this corresponds to taking only the first term of the expansion and yields an approximation that converges to the true solution as $\varepsilon$ decreases.
Singular perturbation theory is a rich and ongoing area of exploration for mathematicians, physicists, and other researchers. The methods used to tackle problems in this field are many. The more basic of these include the method of matched asymptotic expansions and the WKB approximation for spatial problems, and in time, the Poincaré–Lindstedt method, the method of multiple scales, and periodic averaging. Numerical methods for solving singular perturbation problems are also popular.[1]
For books on singular perturbation in ODEs and PDEs, see for example Holmes, Introduction to Perturbation Methods,[2] Hinch, Perturbation Methods,[3] or Bender and Orszag, Advanced Mathematical Methods for Scientists and Engineers.[4]
Each of the examples described below shows how a naive perturbation analysis, which assumes that the problem is regular instead of singular, will fail. Some show how the problem may be solved by more sophisticated singular methods.
Differential equations that contain a small parameter that premultiplies the highest order term typically exhibit boundary layers, so that the solution evolves in two different scales. For example, consider the boundary value problem
$$\varepsilon u''(x) + u'(x) = -e^{-x}, \qquad 0 < x < 1,$$
$$u(0) = 0, \qquad u(1) = 1.$$
Its solution when $\varepsilon = 0.1$ exhibits a boundary layer: it changes rapidly near $x = 0$ and slowly elsewhere. If we naively set $\varepsilon = 0$, the equation drops to first order, and its solution cannot satisfy both boundary conditions; the resulting "outer" approximation fails near the origin.
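For this particular boundary value problem the exact solution can be written in closed form, which makes it easy to check the leading-order composite expansion produced by matched asymptotics (outer solution $e^{-x} + 1 - e^{-1}$, boundary-layer correction $-(2 - e^{-1})\,e^{-x/\varepsilon}$). The following sketch, assuming those standard leading-order formulas, compares the two:

```python
import math

EPS = 0.1

def u_exact(x, eps=EPS):
    """Exact solution of eps*u'' + u' = -exp(-x), u(0)=0, u(1)=1.
    General solution: C1 + C2*exp(-x/eps) + exp(-x)/(1-eps);
    C1, C2 are fixed by the boundary conditions."""
    a = 1.0 / (1.0 - eps)          # coefficient of the particular solution
    r = math.exp(-1.0 / eps)
    # Solve C1 + C2 = -a  and  C1 + C2*r = 1 - a*exp(-1):
    c2 = (-a - (1.0 - a * math.exp(-1.0))) / (1.0 - r)
    c1 = -a - c2
    return c1 + c2 * math.exp(-x / eps) + a * math.exp(-x)

def u_composite(x, eps=EPS):
    """Leading-order composite matched-asymptotic approximation:
    outer solution exp(-x) + 1 - exp(-1), plus the boundary-layer
    correction -(2 - exp(-1))*exp(-x/eps)."""
    return (math.exp(-x) + 1.0 - math.exp(-1.0)
            - (2.0 - math.exp(-1.0)) * math.exp(-x / eps))

err = max(abs(u_exact(i / 200) - u_composite(i / 200)) for i in range(201))
print(err)  # O(eps) discrepancy; about 0.04 for eps = 0.1
```

The maximum discrepancy is $O(\varepsilon)$ over the whole interval, whereas the naive outer solution alone misses the boundary condition at $x = 0$ by an $O(1)$ amount.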
An electrically driven robot manipulator can have slower mechanical dynamics and faster electrical dynamics, thus exhibiting two time scales. In such cases, we can divide the system into two subsystems, one corresponding to the faster dynamics and the other to the slower dynamics, and then design controllers for each of them separately. Through a singular perturbation technique, we can make these two subsystems independent of each other, thereby simplifying the control problem.
Consider a class of systems described by the following set of equations:
$$\dot{x}_1 = f_1(x_1, x_2) + \varepsilon g_1(x_1, x_2, \varepsilon),$$
$$\varepsilon \dot{x}_2 = f_2(x_1, x_2) + \varepsilon g_2(x_1, x_2, \varepsilon),$$
$$x_1(0) = a_1, \qquad x_2(0) = a_2,$$
with $0 < \varepsilon \ll 1$. The second equation indicates that the dynamics of $x_2$ are much faster than those of $x_1$. A theorem due to Tikhonov states that, under suitable conditions on the system, it will initially and very quickly approximate the solution to the equations

$$\dot{x}_1 = f_1(x_1, x_2),$$
$$f_2(x_1, x_2) = 0,$$
$$x_1(0) = a_1,$$

on some interval of time and that, as $\varepsilon$ decreases toward zero, the system will approach the solution more closely in that same interval.
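As a concrete instance (an illustrative choice, not from the source), take $f_1 = x_2$, $f_2 = -x_1 - x_2$, and $g_1 = g_2 = 0$. The degenerate system then gives $x_2 = -x_1$ and hence $\dot{x}_1 = -x_1$. A sketch comparing a forward-Euler simulation of the full system with the reduced model:

```python
import math

EPS = 0.01          # time-scale separation (illustrative value)
A1, A2 = 1.0, 0.0   # initial conditions x1(0), x2(0)

# Toy two-time-scale system (chosen for illustration):
#   x1'       = x2         (slow:  f1 = x2,        g1 = 0)
#   eps * x2' = -x1 - x2   (fast:  f2 = -x1 - x2,  g2 = 0)
def simulate_full(t_end, dt=EPS / 20.0):
    """Forward-Euler integration of the full system; returns x1(t_end).
    The step size must resolve the fast O(1/eps) dynamics."""
    x1, x2 = A1, A2
    for _ in range(int(round(t_end / dt))):
        dx1 = x2
        dx2 = (-x1 - x2) / EPS   # fast dynamics
        x1 += dt * dx1
        x2 += dt * dx2
    return x1

# Degenerate (reduced) system: f2 = 0 gives x2 = -x1, hence x1' = -x1,
# whose solution is x1(t) = a1 * exp(-t).
def reduced(t_end):
    return A1 * math.exp(-t_end)

gap = abs(simulate_full(2.0) - reduced(2.0))
print(gap)  # small: the reduced model tracks the slow state to O(eps)
```

After a short $O(\varepsilon)$ transient in which $x_2$ collapses onto the manifold $f_2 = 0$, the slow state of the full system stays within $O(\varepsilon)$ of the reduced solution.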
In fluid mechanics, the properties of a slightly viscous fluid are dramatically different outside and inside a narrow boundary layer. Thus the fluid exhibits multiple spatial scales.
Reaction–diffusion systems in which one reagent diffuses much more slowly than another can form spatial patterns marked by areas where a reagent exists, and areas where it does not, with sharp transitions between them. In ecology, predator-prey models such as
$$u_t = \varepsilon u_{xx} + u f(u) - v g(u),$$
$$v_t = v_{xx} + v h(u),$$

where $u$ is the prey density and $v$ is the predator density, have been shown to form such patterns.
Consider the problem of finding all roots of the polynomial
$$p(x) = \varepsilon x^3 - x^2 + 1.$$

In the limit $\varepsilon \to 0$, this cubic degenerates into the quadratic $1 - x^2$, with roots at $x = \pm 1$. Substituting a regular perturbation series

$$x(\varepsilon) = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots$$

in the equation and equating equal powers of $\varepsilon$ only yields corrections to these two roots:
$$x(\varepsilon) = \pm 1 + \frac{1}{2}\varepsilon \pm \frac{5}{8}\varepsilon^2 + \cdots.$$
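The two regular roots can be checked numerically: for small $\varepsilon$ the truncated series should differ from the true roots by $O(\varepsilon^3)$. A quick sketch (Newton's method is used here only as a convenient root refiner):

```python
def p(x, eps):
    """The cubic p(x) = eps*x**3 - x**2 + 1 from the example."""
    return eps * x**3 - x**2 + 1.0

def newton_root(x0, eps, iters=50):
    """Refine a root of p with Newton's method, starting from x0."""
    x = x0
    for _ in range(iters):
        x -= p(x, eps) / (3.0 * eps * x**2 - 2.0 * x)
    return x

EPS = 0.01
# Regular perturbation series for the two O(1) roots, truncated at eps**2:
series_roots = [s + 0.5 * EPS + s * 0.625 * EPS**2 for s in (1.0, -1.0)]
true_roots = [newton_root(s, EPS) for s in (1.0, -1.0)]
errors = [abs(a - b) for a, b in zip(series_roots, true_roots)]
print(errors)  # both O(eps**3), i.e. about 1e-6 here
```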
To find the other root, singular perturbation analysis must be used. We must then deal with the fact that the equation degenerates into a quadratic when we let $\varepsilon$ tend to zero; the third root escapes to infinity in this limit. To keep this root in view, we rescale $x$ by setting $y = x \varepsilon^{\nu}$, where the exponent $\nu$ is chosen so that $y$ remains of order one as $\varepsilon \to 0$. In terms of $y$, the equation reads (after multiplying through by $\varepsilon^{3\nu - 1}$)

$$y^3 - \varepsilon^{\nu - 1} y^2 + \varepsilon^{3\nu - 1} = 0.$$
We can see that for $\nu < 1$ the $y^3$ term is dominated by the $y^2$ term, whose coefficient $\varepsilon^{\nu - 1}$ grows without bound as $\varepsilon \to 0$, while at $\nu = 1$ the $y^3$ and $y^2$ terms become comparable and the constant term is smaller than both. This dominant balance picks out the scaling $\nu = 1$, and the equation becomes

$$y^3 - y^2 + \varepsilon^2 = 0.$$
Substituting the perturbation series

$$y(\varepsilon) = y_0 + \varepsilon^2 y_1 + \varepsilon^4 y_2 + \cdots$$

yields, at leading order,

$$y_0^3 - y_0^2 = 0.$$

We are then interested in the root at $y_0 = 1$; the double root at $y_0 = 0$ corresponds to the two roots found above, which shrink to zero under the rescaling. Computing the next terms of the series ($y_1 = -1$, $y_2 = -2$) then gives the singular root
$$x(\varepsilon) = \frac{y(\varepsilon)}{\varepsilon} = \frac{1}{\varepsilon} - \varepsilon - 2\varepsilon^3 + \cdots.$$
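This singular root can also be verified numerically: starting Newton's method at the leading-order guess $1/\varepsilon$ converges to the large root, and the truncated expansion matches it to $O(\varepsilon^5)$. A sketch:

```python
def p(x, eps):
    """The cubic p(x) = eps*x**3 - x**2 + 1."""
    return eps * x**3 - x**2 + 1.0

def newton_root(x0, eps, iters=60):
    """Refine a root of p with Newton's method, starting from x0."""
    x = x0
    for _ in range(iters):
        x -= p(x, eps) / (3.0 * eps * x**2 - 2.0 * x)
    return x

EPS = 0.01
# Singular-root expansion x = 1/eps - eps - 2*eps**3 + ...
series = 1.0 / EPS - EPS - 2.0 * EPS**3
# The rescaled root y0 = 1 corresponds to x ~ 1/eps, so that is the
# natural starting guess for the large root.
exact = newton_root(1.0 / EPS, EPS)
error = abs(series - exact)
print(exact, error)  # root near 1/eps; truncation error is O(eps**5)
```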