Parker–Sochacki method explained

In mathematics, the Parker–Sochacki method is an algorithm for solving systems of ordinary differential equations (ODEs), developed by G. Edgar Parker and James Sochacki, of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form.

Summary

The Parker–Sochacki method rests on two simple observations:

  * Many systems of ODEs can be rewritten, by introducing auxiliary variables, as an equivalent system whose right-hand sides are polynomials in the dependent variables.
  * For such a polynomial system, Picard iteration produces the Maclaurin series coefficients of the solution one degree at a time, with each new coefficient depending only on the coefficients already computed.

In practice, several coefficients of the power series are calculated in turn, a time step is chosen, the series is evaluated at that time, and the process repeats from the new state.

The end result is a high-order piecewise solution to the original ODE problem. The desired order of the solution is an adjustable variable in the program and can change between steps. The attainable order is limited only by the floating-point representation on the machine running the program; in some cases it can be extended by using arbitrary-precision floating-point numbers, or, for special cases, by finding a solution with only integer or rational coefficients.
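The coefficient recurrence and the stepping loop can be sketched for the simplest polynomial ODE, y' = y^2 with y(0) = 1, whose exact solution 1/(1 − t) has Maclaurin coefficients all equal to 1. This is a minimal illustration, not a published implementation; the function names are invented for the example:

```python
def cauchy(a, b, n):
    """n-th coefficient of the product of the power series a and b."""
    return sum(a[k] * b[n - k] for k in range(n + 1))

def maclaurin_coeffs(y0, order):
    """Maclaurin coefficients of y' = y^2, y(0) = y0.

    Matching powers of t gives (n + 1) * a[n+1] = (a * a)[n], so each
    new coefficient needs only addition, multiplication, and division
    by a small integer.
    """
    a = [y0]
    for n in range(order):
        a.append(cauchy(a, a, n) / (n + 1))
    return a

def step(a, h):
    """Evaluate the truncated series at t = h by Horner's rule."""
    y = 0.0
    for c in reversed(a):
        y = y * h + c
    return y
```

One step with order 30 and h = 0.1 should then agree with the exact value 1/0.9 to near machine precision; repeating from the new state yields the piecewise solution described above.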

Advantages

  * The method requires only addition, subtraction, and multiplication, making it very convenient for high-speed computation. (The only divisions are inverses of small integers, which can be precomputed.)
  * Use of a high order, that is, calculating many coefficients of the power series, is convenient. (Typically a higher order permits a longer time step without loss of accuracy, which improves efficiency.)
  * The order and step size can be easily changed from one step to the next.
  * It is possible to calculate a guaranteed error bound on the solution.
  * Arbitrary-precision floating-point libraries allow this method to compute arbitrarily accurate solutions.

With the Parker–Sochacki method, information between integration steps is developed at high order. As the method integrates, the program can save the power series coefficients of each step, so that later polynomial evaluation provides a smooth, high-order solution between any two points in time. With most other classical integration methods, one would have to resort to interpolation to get information between integration steps, leading to an increase of error.
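The dense-output idea can be sketched as follows, again using y' = y^2 as the illustrative polynomial system (exact solution 1/(1 − t) from y(0) = 1). Saving each step's coefficients lets plain polynomial evaluation recover the solution at any intermediate time; all names here are illustrative:

```python
def series_coeffs(y0, order):
    """Maclaurin coefficients for y' = y^2 about the current time
    (an illustrative choice; any polynomial system works the same way)."""
    a = [y0]
    for n in range(order):
        a.append(sum(a[k] * a[n - k] for k in range(n + 1)) / (n + 1))
    return a

def integrate(y0, t_end, h, order):
    """Step y' = y^2 from 0 to t_end, saving each step's series."""
    segments, t, y = [], 0.0, y0
    while t < t_end - 1e-15:
        a = series_coeffs(y, order)
        segments.append((t, h, a))       # keep coefficients for dense output
        y = sum(c * h**n for n, c in enumerate(a))
        t += h
    return segments

def dense_eval(segments, t):
    """Polynomial evaluation between integration steps (no interpolation)."""
    for t0, h, a in segments:
        if t0 <= t <= t0 + h:
            s = t - t0
            return sum(c * s**n for n, c in enumerate(a))
    raise ValueError("t outside integrated range")
```

Evaluating the saved segment containing t = 0.15 reproduces the exact value 1/0.85 without any interpolation between mesh points.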

There is an a priori error bound for a single step with the Parker–Sochacki method.[1] This allows a Parker–Sochacki program to calculate a step size that guarantees the error is below any given non-zero tolerance. Using this calculated step size with an error tolerance of less than half of the machine epsilon yields a symplectic integration.
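The adaptive idea of solving for a step size that meets a tolerance can be sketched with a deliberately simplified stand-in for the rigorous bound: requiring the magnitude of the last retained term, |a_N| h^N, to fall below the tolerance. This heuristic is not the a priori bound of Warne et al.; it only illustrates the shape of the computation:

```python
def step_size_for_tolerance(a, tol):
    """Heuristic step size: choose h so the last retained series term
    satisfies |a[N]| * h**N <= tol.

    This is a simplified stand-in for the rigorous a priori bound of
    reference [1], shown only to illustrate solving for h from a
    per-step error estimate.
    """
    N = len(a) - 1
    aN = abs(a[-1])
    if aN == 0.0:
        return 1.0  # last term vanishes; fall back to a capped step
    return (tol / aN) ** (1.0 / N)
```

For the y' = y^2 example above, whose coefficients are all 1, an order-30 series and a tolerance of 1e-16 give h ≈ 0.29, and by construction the last term at that h is at the tolerance.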

Disadvantages

Most methods for numerically solving ODEs require only the evaluation of derivatives for chosen values of the variables, so systems like MATLAB include implementations of several methods all sharing the same calling sequence. Users can try different methods by simply changing the name of the function called. The Parker–Sochacki method requires more work to put the equations into the proper form, and cannot use the same calling sequence.
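The extra work of putting equations into polynomial form can be illustrated with y' = sin(y). Introducing the auxiliary variables u = sin(y) and v = cos(y) turns it into the polynomial system y' = u, u' = u·v, v' = −u·u, after which the coefficient recurrence applies directly. This is an illustrative sketch, not part of any standard library:

```python
import math

def sin_ode_series(y0, order):
    """Maclaurin coefficients for y' = sin(y), recast as the polynomial
    system  y' = u,  u' = u*v,  v' = -u*u  with u = sin(y), v = cos(y).
    Returns the coefficients of y."""
    y = [y0]
    u = [math.sin(y0)]
    v = [math.cos(y0)]

    def cauchy(a, b, n):
        # n-th coefficient of the series product a * b
        return sum(a[k] * b[n - k] for k in range(n + 1))

    for n in range(order):
        y.append(u[n] / (n + 1))               # from y' = u
        u.append(cauchy(u, v, n) / (n + 1))    # from u' = u*v
        v.append(-cauchy(u, u, n) / (n + 1))   # from v' = -u*u
    return y
```

Evaluating the series at t = 0.1 can be checked against the closed-form solution y(t) = 2·atan(tan(y0/2)·e^t); the agreement is to well beyond ten digits at order 25.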

Notes and References

  1. P.G. Warne, D.P. Warne, J.S. Sochacki, G.E. Parker, and D.C. Carothers, "Explicit a-priori error bounds and adaptive error control for approximation of nonlinear initial value differential systems", Computers & Mathematics with Applications, 52 (12), 2006, pp. 1695–1710. doi:10.1016/j.camwa.2005.12.004.