General linear methods (GLMs) are a large class of numerical methods used to obtain numerical solutions to ordinary differential equations. They include multistage Runge–Kutta methods that use intermediate collocation points, as well as linear multistep methods that save a finite time history of the solution. John C. Butcher coined this term for these methods and has written a series of review papers,[1][2][3] a book chapter,[4] and a textbook[5] on the topic. His collaborator, Zdzislaw Jackiewicz, has also written an extensive textbook[6] on the topic. The class of methods was originally proposed by Butcher (1965), Gear (1965), and Gragg and Stetter (1964).
Numerical methods for first-order ordinary differential equations approximate solutions to initial value problems of the form

y' = f(t, y), \qquad y(t_0) = y_0.

The result is a sequence of approximations to the value of y(t) at discrete times t_i,

y_i \approx y(t_i), \qquad t_i = t_0 + i h,

where h is the time step (sometimes referred to as \Delta t).
We follow Butcher (2006), pp. 189–190, for our description, although this formulation can also be found elsewhere.
General linear methods make use of two integers: r, the number of quantities carried over from one time step to the next (the time history), and s, the number of stage values. In the case of r = 1 these methods reduce to classical Runge–Kutta methods, and in the case of s = 1 they reduce to linear multistep methods. Stage values Y_i and stage derivatives F_i, i = 1, 2, ..., s, are computed at time step n from the approximations y_i^{[n-1]}, i = 1, ..., r:
y^{[n-1]} = \left[ \begin{matrix} y_1^{[n-1]} \\ y_2^{[n-1]} \\ \vdots \\ y_r^{[n-1]} \end{matrix} \right], \qquad
y^{[n]} = \left[ \begin{matrix} y_1^{[n]} \\ y_2^{[n]} \\ \vdots \\ y_r^{[n]} \end{matrix} \right], \qquad
Y = \left[ \begin{matrix} Y_1 \\ Y_2 \\ \vdots \\ Y_s \end{matrix} \right], \qquad
F = \left[ \begin{matrix} F_1 \\ F_2 \\ \vdots \\ F_s \end{matrix} \right]
  = \left[ \begin{matrix} f(Y_1) \\ f(Y_2) \\ \vdots \\ f(Y_s) \end{matrix} \right].
The stage values are defined by two matrices, A = [a_{ij}] and U = [u_{ij}]:

Y_i = \sum_{j=1}^{s} a_{ij} h F_j + \sum_{j=1}^{r} u_{ij} y_j^{[n-1]}, \qquad i = 1, 2, \ldots, s,
and the update to time t_n is defined by two matrices, B = [b_{ij}] and V = [v_{ij}]:

y_i^{[n]} = \sum_{j=1}^{s} b_{ij} h F_j + \sum_{j=1}^{r} v_{ij} y_j^{[n-1]}, \qquad i = 1, 2, \ldots, r.
Given the four matrices A, U, B, and V, one can compactly write the analogue of a Butcher tableau as

\left[ \begin{matrix} Y \\ y^{[n]} \end{matrix} \right] =
\left[ \begin{matrix} A \otimes I & U \otimes I \\ B \otimes I & V \otimes I \end{matrix} \right]
\left[ \begin{matrix} hF \\ y^{[n-1]} \end{matrix} \right],

where \otimes denotes the Kronecker product.
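For concreteness, the following sketch shows how one step of an explicit general linear method (one whose matrix A is strictly lower triangular, so each stage value can be evaluated directly) could be organized around the four matrices. It is an illustrative sketch only: the function name glm_step, the NumPy representation, and the (r, d)-shaped storage of the carried quantities are our own choices, and f is written in the autonomous form f(Y_i) used in the formulas above.

```python
import numpy as np

def glm_step(f, y_in, h, A, U, B, V):
    """One step of an explicit general linear method (illustrative sketch).

    f    : right-hand side, evaluated on a stage value as f(Y_i)
    y_in : array of shape (r, d) holding the r carried quantities y^{[n-1]}
    A, U : (s, s) and (s, r) matrices defining the stage values
    B, V : (r, s) and (r, r) matrices defining the update
    Returns an (r, d) array holding y^{[n]}, the quantities passed on to
    the next step.  Assumes A is strictly lower triangular.
    """
    s = A.shape[0]
    d = y_in.shape[1]
    Y = np.zeros((s, d))    # stage values Y_i
    hF = np.zeros((s, d))   # scaled stage derivatives h F_i = h f(Y_i)
    for i in range(s):
        # Y_i = sum_j a_ij h F_j + sum_j u_ij y_j^{[n-1]}
        Y[i] = A[i, :i] @ hF[:i] + U[i] @ y_in
        hF[i] = h * f(Y[i])
    # y_i^{[n]} = sum_j b_ij h F_j + sum_j v_ij y_j^{[n-1]}
    return B @ hF + V @ y_in
```

An implicit method, with nonzero entries on or above the diagonal of A, would instead require solving the stage equations, typically with a Newton-type iteration.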
We present an example described in Butcher (1996).[7] This method consists of a single "predicted" step and a "corrected" step that uses extra information about the time history, as well as a single intermediate stage value.
An intermediate stage value is defined as something that looks like it came from a linear multistep method:
y_{n-1/2}^* = y_{n-2} + h \left( \frac{9}{8} f(y_{n-1}) + \frac{3}{8} f(y_{n-2}) \right).
An initial "predictor"
* | |
y | |
n |
* | |
y | |
n-1/2 |
* | |
y | |
n |
=
28 | |
5 |
yn-1-
23 | |
5 |
yn-2+h\left(
32 | |
15 |
f(
* | |
y | |
n-1/2 |
)-4f(yn-1)-
26 | |
15 |
f(yn-2)\right),
and the final update is given by

y_n = \frac{32}{31} y_{n-1} - \frac{1}{31} y_{n-2} + h \left( \frac{5}{31} f(y_n^*) + \frac{64}{93} f(y_{n-1/2}^*) + \frac{4}{31} f(y_{n-1}) - \frac{1}{93} f(y_{n-2}) \right).
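The three formulas above translate directly into code. The sketch below is an illustrative transcription (the function name and calling convention are ours); note that it recomputes f(y_{n-1}) and f(y_{n-2}) at every step, whereas in the table representation that follows these derivative evaluations are carried from step to step as part of the method's history.

```python
def butcher_1996_step(f, y_nm1, y_nm2, h):
    """Advance the example method by one step (illustrative sketch).

    y_nm1, y_nm2 : the previous solution values y_{n-1} and y_{n-2}
    Returns the new value y_n.
    """
    f_nm1 = f(y_nm1)
    f_nm2 = f(y_nm2)
    # intermediate stage value y*_{n-1/2}
    y_half = y_nm2 + h * (9/8 * f_nm1 + 3/8 * f_nm2)
    # predictor y*_n
    y_pred = (28/5) * y_nm1 - (23/5) * y_nm2 + h * (
        (32/15) * f(y_half) - 4 * f_nm1 - (26/15) * f_nm2)
    # corrected value y_n
    return (32/31) * y_nm1 - (1/31) * y_nm2 + h * (
        (5/31) * f(y_pred) + (64/93) * f(y_half)
        + (4/31) * f_nm1 - (1/93) * f_nm2)
```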
The concise table representation for this method is given by
\left[ \begin{array}{ccc|cccc}
0 & 0 & 0 & 0 & 1 & \frac{9}{8} & \frac{3}{8} \\
\frac{32}{15} & 0 & 0 & \frac{28}{5} & -\frac{23}{5} & -4 & -\frac{26}{15} \\
\frac{64}{93} & \frac{5}{31} & 0 & \frac{32}{31} & -\frac{1}{31} & \frac{4}{31} & -\frac{1}{93} \\
\hline
\frac{64}{93} & \frac{5}{31} & 0 & \frac{32}{31} & -\frac{1}{31} & \frac{4}{31} & -\frac{1}{93} \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0
\end{array} \right].
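Reading off the blocks of this table gives the four matrices A, U, B, and V, which can be used with the generic glm_step sketch above. One reading consistent with the formulas of this example is that the four carried quantities are y^{[n-1]} = (y_{n-1}, y_{n-2}, h f(y_{n-1}), h f(y_{n-2})); the minimal usage sketch below assumes that ordering, and the test problem y' = -y is our own illustration, not from the source.

```python
import numpy as np

# Blocks of the table: A, U (stages) and B, V (update).
A = np.array([[0.0,    0.0,   0.0],
              [32/15,  0.0,   0.0],
              [64/93,  5/31,  0.0]])
U = np.array([[0.0,    1.0,   9/8,   3/8],
              [28/5,  -23/5, -4.0,  -26/15],
              [32/31, -1/31,  4/31, -1/93]])
B = np.array([[64/93,  5/31,  0.0],
              [0.0,    0.0,   0.0],
              [0.0,    0.0,   1.0],
              [0.0,    0.0,   0.0]])
V = np.array([[32/31, -1/31,  4/31, -1/93],
              [1.0,    0.0,   0.0,   0.0],
              [0.0,    0.0,   0.0,   0.0],
              [0.0,    0.0,   1.0,   0.0]])

# Illustrative test problem y' = -y, started from two exact values.
f = lambda y: -y
h = 0.1
y0 = np.array([1.0])            # y(t_0)
y1 = np.array([np.exp(-h)])     # y(t_1)
# Assumed ordering of carried quantities:
# [y_{n-1}, y_{n-2}, h f(y_{n-1}), h f(y_{n-2})]
y_in = np.array([y1, y0, h * f(y1), h * f(y0)])
for _ in range(10):
    y_in = glm_step(f, y_in, h, A, U, B, V)   # glm_step from the sketch above
print(y_in[0])                  # approximation to y at the final time
```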