In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution.
When integrating a differential equation numerically, one would expect the requisite step size to be relatively small in a region where the solution curve displays much variation and to be relatively large where the solution curve straightens out to approach a line with slope nearly zero. For some problems this is not the case: for the numerical method to give a reliable solution to the differential system, the step size is sometimes forced to an unacceptably small level in a region where the solution curve is very smooth. This phenomenon is known as stiffness. In some cases there may be two different problems with the same solution, yet one is stiff and the other is not. The phenomenon cannot therefore be a property of the exact solution, since this is the same for both problems; it must be a property of the differential system itself. Such systems are thus known as stiff systems.
Consider the initial value problem
$$ y'(t) = -15\,y(t), \qquad t \ge 0, \qquad y(0) = 1. $$
The exact solution (shown in cyan in the figure) is
$$ y(t) = e^{-15t}, \qquad \text{with } y(t) \to 0 \text{ as } t \to \infty. $$
We seek a numerical solution that exhibits the same behavior.
The figure (right) illustrates the numerical issues for various numerical integrators applied to the equation.
One of the most prominent examples of stiff ordinary differential equations (ODEs) is the system that describes the chemical reaction of Robertson:[1]
$$ \begin{aligned} \dot{x} &= -0.04\,x + 10^{4}\,y\,z \\ \dot{y} &= 0.04\,x - 10^{4}\,y\,z - 3\cdot 10^{7}\,y^{2} \\ \dot{z} &= 3\cdot 10^{7}\,y^{2} \end{aligned} $$
If one treats this system on a short interval, for example $t \in [0, 40]$, there is no problem in numerical integration. However, if the interval is very large ($10^{11}$, say), then many standard codes fail to integrate it correctly.
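As an illustration, here is a minimal sketch (assuming SciPy is available; the interval, tolerances, and variable names are illustrative choices, not part of the original reference) that integrates the Robertson system with a stiff solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, u):
    """Right-hand side of the Robertson chemical kinetics system."""
    x, y, z = u
    return [-0.04 * x + 1.0e4 * y * z,
            0.04 * x - 1.0e4 * y * z - 3.0e7 * y ** 2,
            3.0e7 * y ** 2]

# Over a short interval ([0, 40]) almost any method works; over a long
# interval a stiff (implicit) method such as BDF is needed, because an
# explicit method would be forced into prohibitively small steps.
sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-6, atol=1e-10)
print(sol.t[-1], sol.y[:, -1])   # final time and concentrations
```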
Consider the linear constant coefficient inhomogeneous system
$$ y' = A y + f(x), $$
where $y, f \in \mathbb{R}^{n}$ and $A$ is a constant, diagonalizable $n \times n$ matrix with eigenvalues $\lambda_t \in \mathbb{C}$, $t = 1, 2, \ldots, n$ (assumed distinct) and corresponding eigenvectors $c_t \in \mathbb{C}^{n}$, $t = 1, 2, \ldots, n$. The general solution takes the form
$$ y(x) = \sum_{t=1}^{n} \kappa_t e^{\lambda_t x} c_t + g(x), $$
where the $\kappa_t$ are arbitrary constants and $g(x)$ is a particular integral. Now let us suppose that
$$ \operatorname{Re}(\lambda_t) < 0, \qquad t = 1, 2, \ldots, n, $$
which implies that each of the terms $e^{\lambda_t x} c_t \to 0$ as $x \to \infty$, so that the solution $y(x)$ approaches $g(x)$ asymptotically as $x \to \infty$; the term $e^{\lambda_t x} c_t$ decays monotonically if $\lambda_t$ is real and sinusoidally if $\lambda_t$ is complex.

Interpreting $x$ to be time (as it often is in physical problems), $g(x)$ is called the steady-state solution and the sum $\sum_{t=1}^{n} \kappa_t e^{\lambda_t x} c_t$ the transient solution. If $\left|\operatorname{Re}(\lambda_t)\right|$ is large, then the corresponding term $\kappa_t e^{\lambda_t x} c_t$ decays quickly as $x$ increases and is called a fast transient; if $\left|\operatorname{Re}(\lambda_t)\right|$ is small, the corresponding term $\kappa_t e^{\lambda_t x} c_t$ decays slowly and is called a slow transient. Let $\overline{\lambda}, \underline{\lambda} \in \{\lambda_t,\ t = 1, 2, \ldots, n\}$ be defined by
$$ \left|\operatorname{Re}(\overline{\lambda})\right| \ge \left|\operatorname{Re}(\lambda_t)\right| \ge \left|\operatorname{Re}(\underline{\lambda})\right|, \qquad t = 1, 2, \ldots, n, $$
so that $\kappa_t e^{\overline{\lambda} x} c_t$ is the fastest transient and $\kappa_t e^{\underline{\lambda} x} c_t$ the slowest. The ratio $\left|\operatorname{Re}(\overline{\lambda})\right| / \left|\operatorname{Re}(\underline{\lambda})\right|$ is called the stiffness ratio.
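For a concrete linear system the stiffness ratio can be computed directly from the eigenvalues of $A$. The following sketch (assuming NumPy; the function name `stiffness_ratio` and the example matrix are ours, chosen for illustration) does exactly that:

```python
import numpy as np

def stiffness_ratio(A):
    """Return max|Re(lambda)| / min|Re(lambda)| over the eigenvalues of A.

    Assumes all eigenvalues have nonzero (typically negative) real part,
    as in the linear constant coefficient system discussed above.
    """
    real_parts = np.abs(np.linalg.eigvals(A).real)
    return real_parts.max() / real_parts.min()

# Hypothetical example: eigenvalues -1000 and -1, so the ratio is 1000.
A = np.array([[-1000.0, 1.0],
              [0.0, -1.0]])
print(stiffness_ratio(A))   # -> 1000.0
```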
In this section we consider various aspects of the phenomenon of stiffness. "Phenomenon" is probably a more appropriate word than "property", since the latter rather implies that stiffness can be defined in precise mathematical terms; it turns out not to be possible to do this in a satisfactory manner, even for the restricted class of linear constant coefficient systems. We shall also see several qualitative statements that can be (and mostly have been) made in an attempt to encapsulate the notion of stiffness, and state what is probably the most satisfactory of these as a "definition" of stiffness.
J. D. Lambert defines stiffness as follows:
If a numerical method with a finite region of absolute stability, applied to a system with any initial conditions, is forced to use in a certain interval of integration a step length which is excessively small in relation to the smoothness of the exact solution in that interval, then the system is said to be stiff in that interval.
There are other characteristics which are exhibited by many examples of stiff problems, but for each there are counterexamples, so these characteristics do not make good definitions of stiffness. Nonetheless, definitions based upon these characteristics are in common use by some authors and are good clues as to the presence of stiffness. Lambert refers to these as "statements" rather than definitions, for the aforementioned reasons. A few of these are:
1. A linear constant coefficient system is stiff if all of its eigenvalues have negative real part and the stiffness ratio is large.
2. Stiffness occurs when stability requirements, rather than those of accuracy, constrain the step length.
3. Stiffness occurs when some components of the solution decay much more rapidly than others.
The origin of the term "stiffness" has not been clearly established. According to Joseph Oakland Hirschfelder, the term "stiff" is used because such systems correspond to tight coupling between the driver and driven in servomechanisms. According to Richard L. Burden and J. Douglas Faires,
Significant difficulties can occur when standard numerical techniques are applied to approximate the solution of a differential equation when the exact solution contains terms of the form $e^{\lambda t}$, where $\lambda$ is a complex number with negative real part. ... Problems involving rapidly decaying transient solutions occur naturally in a wide variety of applications, including the study of spring and damping systems, the analysis of control systems, and problems in chemical kinetics. These are all examples of a class of problems called stiff (mathematical stiffness) systems of differential equations, due to their application in analyzing the motion of spring and mass systems having large spring constants (physical stiffness).

For example, the initial value problem
$$ m\ddot{x} + c\dot{x} + kx = 0, \qquad x(0) = 0, \quad \dot{x}(0) = 1, $$
with $m = 1$, $c = 1001$, $k = 1000$, can be written as a first-order system of the form above with $n = 2$,
$$ A = \begin{pmatrix} 0 & 1 \\ -1000 & -1001 \end{pmatrix}, $$
and eigenvalues $\overline{\lambda} = -1000$ and $\underline{\lambda} = -1$. Both eigenvalues have negative real part and the stiffness ratio is $1000$, which is fairly large. Here the spring constant $k$ is large and the damping constant $c$ is even larger. The exact solution is
$$ x(t) = \frac{1}{999}\left(e^{-t} - e^{-1000t}\right), $$
which very quickly behaves like a simple exponential $x_0 e^{-t}$ (with $x_0 = 1/999$); nevertheless, the presence of the fast transient $e^{-1000t}$, even with a negligible coefficient, forces a method with a finite region of absolute stability to take an excessively small step size throughout the integration, in keeping with Lambert's definition above.
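The step-size restriction can be seen numerically. The following sketch (assuming NumPy; the step sizes, final time, and helper name `forward_euler` are our illustrative choices) integrates the equivalent first-order system with the explicit Euler method: with $h = 0.0025$ the iteration blows up, because $|1 + h\overline{\lambda}| = 1.5 > 1$, while $h = 0.001$ stays stable even though the exact solution is already essentially the smooth curve $e^{-t}/999$.

```python
import numpy as np

# First-order form of  x'' + 1001 x' + 1000 x = 0,  x(0) = 0, x'(0) = 1.
A = np.array([[0.0, 1.0],
              [-1000.0, -1001.0]])
u0 = np.array([0.0, 1.0])

def forward_euler(A, u0, h, t_end):
    """Fixed-step explicit Euler for u' = A u."""
    u = u0.copy()
    for _ in range(int(round(t_end / h))):
        u = u + h * (A @ u)
    return u

for h in (0.0025, 0.001):
    x_T = forward_euler(A, u0, h, t_end=1.0)[0]
    print(f"h = {h}: x(1) ~= {x_T:.3e}  (exact ~= {np.exp(-1.0) / 999:.3e})")
```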
The behaviour of numerical methods on stiff problems can be analyzed by applying these methods to the test equation $y' = ky$ subject to the initial condition $y(0) = 1$ with $k \in \mathbb{C}$. The solution of this equation is $y(t) = e^{kt}$, which approaches zero as $t \to \infty$ when $\operatorname{Re}(k) < 0$.
Runge–Kutta methods applied to the test equation $y' = ky$ take the form $y_{n+1} = \phi(hk)\, y_n$, and, by induction, $y_n = \left(\phi(hk)\right)^{n} y_0$. The function $\phi$ is called the stability function. Thus, the condition that $y_n \to 0$ as $n \to \infty$ is equivalent to $|\phi(hk)| < 1$. This motivates the definition of the region of absolute stability (sometimes referred to simply as stability region), which is the set $\{ z \in \mathbb{C} : |\phi(z)| < 1 \}$. The method is A-stable if the region of absolute stability contains the set $\{ z \in \mathbb{C} : \operatorname{Re}(z) < 0 \}$, that is, the left half-plane.
Consider the Euler methods above. The explicit Euler method applied to the test equation $y' = ky$ gives
$$ y_{n+1} = y_n + h\,f(t_n, y_n) = y_n + h(k y_n) = (1 + hk)\,y_n, $$
hence $y_n = (1 + hk)^{n} y_0$ with stability function $\phi(z) = 1 + z$. The region of absolute stability is therefore
$$ \{ z \in \mathbb{C} : |1 + z| < 1 \}, $$
the disk of radius 1 centred at $-1$, so the method is not A-stable. The motivating example had $k = -15$. If one takes $h = \tfrac14$, then $z = -15 \times \tfrac14 = -3.75$, which lies outside the stability region, and indeed the numerical results do not converge to zero. With step size $h = \tfrac18$, however, $z = -1.875$, which is just inside the stability region, and the numerical results converge to zero, albeit rather slowly.
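A minimal sketch (assuming NumPy; the final time of 1 and the helper name `explicit_euler` are our choices) reproducing this behaviour for the motivating example $y' = -15y$, $y(0) = 1$:

```python
import numpy as np

def explicit_euler(k, y0, h, t_end):
    """Apply y_{n+1} = (1 + h*k) * y_n up to time t_end."""
    y = y0
    for _ in range(int(round(t_end / h))):
        y = (1.0 + h * k) * y
    return y

k, y0 = -15.0, 1.0
for h in (0.25, 0.125):
    z = h * k
    print(f"h = {h}: z = {z}, |1+z| = {abs(1 + z):.3f}, "
          f"y(1) ~= {explicit_euler(k, y0, h, 1.0):.4f} "
          f"(exact {np.exp(k * 1.0):.2e})")
```

With $h = \tfrac14$ the amplification factor has modulus $2.75$ and the iterates grow; with $h = \tfrac18$ the modulus is $0.875$ and the iterates decay, though far more slowly than the exact solution.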
Consider the trapezoidal method
$$ y_{n+1} = y_n + \tfrac{1}{2} h \left( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right), $$
applied to the test equation $y' = ky$:
$$ y_{n+1} = y_n + \tfrac{1}{2} h \left( k y_n + k y_{n+1} \right). $$
Solving for $y_{n+1}$ yields
$$ y_{n+1} = \frac{1 + \tfrac12 hk}{1 - \tfrac12 hk} \; y_n. $$
The stability function is thus
$$ \phi(z) = \frac{1 + \tfrac12 z}{1 - \tfrac12 z}, $$
and the region of absolute stability is
$$ \left\{ z \in \mathbb{C} \;\middle|\; \left| \frac{1 + \tfrac12 z}{1 - \tfrac12 z} \right| < 1 \right\}. $$
This region contains the left half-plane, so the trapezoidal method is A-stable; in fact, the stability region is identical to the left half-plane, and thus the numerical solution of $y' = ky$ converges to zero if and only if the exact solution does. Nevertheless, the trapezoidal method does not behave perfectly: although it damps all decaying components, rapidly decaying components are damped only very mildly, because $\phi(z) \to 1$ as $z \to -\infty$. This led to the concept of L-stability: a method is L-stable if it is A-stable and $|\phi(z)| \to 0$ as $z \to \infty$. The trapezoidal method is A-stable but not L-stable; the implicit Euler method is an example of an L-stable method.
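The contrast with the explicit Euler sketch above can be seen in a few lines (assuming NumPy; the helper name `phi_trap` and the sample points are ours): at $h = \tfrac14$ the trapezoidal iteration for $y' = -15y$ decays as it should, while evaluating $\phi$ at a very negative argument shows the mild damping that motivates L-stability.

```python
import numpy as np

def phi_trap(z):
    """Stability function of the trapezoidal method."""
    return (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

k, h, y = -15.0, 0.25, 1.0
for _ in range(4):                 # integrate up to t = 1
    y = phi_trap(h * k) * y
print(f"trapezoidal, h = {h}: y(1) ~= {y:.2e} (exact {np.exp(-15.0):.2e})")

# phi(z) -> 1 as z -> -infinity: fast transients are barely damped per step.
print(phi_trap(-1000.0))           # ~ -0.996, close to 1 in magnitude
```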
The stability function of a Runge–Kutta method with coefficients $A$ and $b$ is given by
$$ \phi(z) = \frac{\det\left(I - zA + z e b^{T}\right)}{\det(I - zA)}, $$
where $e$ denotes the vector of ones. This is a rational function (one polynomial divided by another). Explicit Runge–Kutta methods have a strictly lower triangular coefficient matrix $A$, so their stability function is a polynomial; it follows that explicit Runge–Kutta methods cannot be A-stable.
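As a quick numerical check of this formula (a sketch assuming NumPy; the classical fourth-order Runge–Kutta tableau is used as the example), evaluating the determinant expression for the classical RK4 method reproduces its well-known polynomial stability function $1 + z + z^{2}/2 + z^{3}/6 + z^{4}/24$:

```python
import numpy as np

# Butcher tableau of the classical RK4 method.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
e = np.ones(4)
I = np.eye(4)

def phi(z):
    """Stability function via det(I - zA + z e b^T) / det(I - zA)."""
    return np.linalg.det(I - z * A + z * np.outer(e, b)) / np.linalg.det(I - z * A)

for z in (-0.5, -2.0, -3.0):
    poly = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24
    print(f"z = {z}: phi(z) = {phi(z):.6f}, polynomial = {poly:.6f}")
```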
The stability function of implicit Runge–Kutta methods is often analyzed using order stars. The order star for a method with stability function $\phi$ is defined to be the set $\{ z \in \mathbb{C} : |\phi(z)| > |e^{z}| \}$. A method is A-stable if and only if its stability function has no poles in the left half-plane and its order star contains no purely imaginary numbers.
Linear multistep methods have the form
$$ y_{n+1} = \sum_{i=0}^{s} a_i y_{n-i} + h \sum_{j=-1}^{s} b_j f\left(t_{n-j}, y_{n-j}\right). $$
Applied to the test equation, they become
$$ y_{n+1} = \sum_{i=0}^{s} a_i y_{n-i} + hk \sum_{j=-1}^{s} b_j y_{n-j}, $$
which can be simplified to
$$ \left(1 - b_{-1} z\right) y_{n+1} - \sum_{j=0}^{s} \left(a_j + b_j z\right) y_{n-j} = 0, $$
where $z = hk$. This is a linear recurrence relation. The method is A-stable if all solutions $\{y_n\}$ of the recurrence relation converge to zero when $\operatorname{Re}(z) < 0$.
The characteristic polynomial is
$$ \Phi(z, w) = w^{s+1} - \sum_{i=0}^{s} a_i w^{s-i} - z \sum_{j=-1}^{s} b_j w^{s-j}. $$
All solutions converge to zero for a given value of $z$ if all solutions $w$ of $\Phi(z, w) = 0$ lie in the unit circle.
The region of absolute stability for a multistep method of the above form is then the set of all $z \in \mathbb{C}$ for which all $w$ such that $\Phi(z, w) = 0$ satisfy $|w| < 1$.
Let us determine the region of absolute stability for the two-step Adams–Bashforth method
$$ y_{n+1} = y_n + h \left( \tfrac32 f(t_n, y_n) - \tfrac12 f(t_{n-1}, y_{n-1}) \right). $$
The characteristic polynomial is
$$ \Phi(w, z) = w^{2} - \left(1 + \tfrac32 z\right) w + \tfrac12 z = 0, $$
which has roots
$$ w = \tfrac12 \left( 1 + \tfrac32 z \pm \sqrt{1 + z + \tfrac94 z^{2}} \right), $$
thus the region of absolute stability is
$$ \left\{ z \in \mathbb{C} \;\middle|\; \left| \tfrac12 \left( 1 + \tfrac32 z \pm \sqrt{1 + z + \tfrac94 z^{2}} \right) \right| < 1 \right\}. $$
This region does not include the whole left half-plane (in fact it includes only the part of the real axis with $-1 \le z \le 0$), so the Adams–Bashforth method is not A-stable.
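A small sketch (assuming NumPy; the sample points and the helper name `in_ab2_stability_region` are illustrative choices) that tests membership in this region by computing the roots of $\Phi(z, w) = 0$ for a given $z$ and checking whether both lie strictly inside the unit circle:

```python
import numpy as np

def in_ab2_stability_region(z, tol=1e-12):
    """True if both roots w of w^2 - (1 + 1.5 z) w + 0.5 z = 0 satisfy |w| < 1."""
    roots = np.roots([1.0, -(1.0 + 1.5 * z), 0.5 * z])
    return bool(np.all(np.abs(roots) < 1.0 - tol))

for z in (-0.5, -1.0, -2.0):
    print(z, in_ab2_stability_region(z))
# -0.5 lies inside the region, -1.0 is on its boundary, and -2.0 (like any
# point far into the left half-plane) lies outside, so the method is not A-stable.
```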
Explicit multistep methods can never be A-stable, just like explicit Runge–Kutta methods. Implicit multistep methods can only be A-stable if their order is at most 2. The latter result is known as the second Dahlquist barrier; it restricts the usefulness of linear multistep methods for stiff equations. An example of a second-order A-stable method is the trapezoidal rule mentioned above, which can also be considered as a linear multistep method.[5]