Order of accuracy explained

In numerical analysis, order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution. Consider $u$, the exact solution to a differential equation in an appropriate normed space $(V, \|\cdot\|)$. Consider a numerical approximation $u_h$, where $h$ is a parameter characterizing the approximation, such as the step size in a finite difference scheme or the diameter of the cells in a finite element method. The numerical solution $u_h$ is said to be $n$th-order accurate if the error $E(h) := \|u - u_h\|$ is proportional to the step size $h$ to the $n$th power:[1]

$$E(h) = \|u - u_h\| \leq C h^n$$

where the constant $C$ is independent of $h$ and usually depends on the solution $u$.[2] Using big O notation, an $n$th-order accurate numerical method is written as

$$\|u - u_h\| = O(h^n)$$
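
This bound can be checked empirically: if $E(h) \leq C h^n$ and the bound is roughly tight, halving $h$ divides the error by about $2^n$, so $\log_2\!\big(E(h)/E(h/2)\big)$ estimates $n$. Below is a minimal Python sketch of this check; the test problem (the forward-difference derivative of $\sin x$ at an arbitrary point) is an assumed illustration, not taken from the source.

```python
import numpy as np

# Assumed test problem: approximate u'(x) = cos(x) for u(x) = sin(x)
# with the first-order forward difference (u(x + h) - u(x)) / h, then
# estimate the observed order from the error ratio between successive
# halvings of h.

def forward_diff(u, x, h):
    return (u(x + h) - u(x)) / h

x0 = 1.0
exact = np.cos(x0)
hs = [0.1 / 2**k for k in range(6)]
errors = [abs(forward_diff(np.sin, x0, h) - exact) for h in hs]

for h, e_coarse, e_fine in zip(hs, errors, errors[1:]):
    p = np.log2(e_coarse / e_fine)  # observed order of accuracy
    print(f"h = {h:.6f}   E(h) = {e_coarse:.3e}   observed order = {p:.2f}")
```

For this scheme the printed order tends to 1, consistent with $E(h) \leq C h$; a second-order scheme such as the central difference would show an observed order near 2.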

This definition depends strictly on the norm used on the space; choosing an appropriate norm is fundamental to correctly estimating the rate of convergence and, more generally, any numerical error.
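
To see how the reported behavior can depend on the norm, consider a toy error vector (an assumed illustration, not from the source) that remains $O(1)$ at a single grid node and vanishes elsewhere: the maximum norm shows no convergence at all, while the grid-scaled $L^2$ norm decays like $h^{1/2}$.

```python
import numpy as np

# Toy illustration (assumed, not from the source): an error that is O(1)
# at one grid node and zero elsewhere, as can happen near a boundary.
for n in (10, 100, 1000):
    h = 1.0 / n
    e = np.zeros(n)
    e[0] = 1.0  # error spike at a single node
    max_norm = np.max(np.abs(e))           # stays 1: no convergence
    l2_h_norm = np.sqrt(h * np.sum(e**2))  # decays like sqrt(h)
    print(f"h = {h:.4f}   max norm = {max_norm:.2f}   grid L2 norm = {l2_h_norm:.4f}")
```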

The size of the error of a first-order accurate approximation is directly proportional to $h$. Numerical approximations of partial differential equations that vary over both time and space are said to be accurate to order $n$ in time and to order $m$ in space.[3]
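
A minimal sketch of this mixed accuracy, using an assumed model problem not taken from the source: the heat equation $u_t = u_{xx}$ on $[0, \pi]$ with exact solution $e^{-t}\sin x$, discretized by forward Euler in time (first-order, $O(\Delta t)$) and centered differences in space (second-order, $O(\Delta x^2)$), so the total error behaves like $O(\Delta t) + O(\Delta x^2)$. The function names and parameters below are illustrative.

```python
import numpy as np

# Assumed model problem: u_t = u_xx on [0, pi], u(0, t) = u(pi, t) = 0,
# u(x, 0) = sin(x); exact solution u(x, t) = exp(-t) * sin(x).
# Forward Euler in time (first-order) + centered differences in space
# (second-order): error = O(dt) + O(dx^2).

def heat_solve(nx, t_end=0.1, cfl=0.4):
    dx = np.pi / nx
    dt = cfl * dx**2  # dt tied to dx^2 for stability (dt/dx^2 <= 1/2)
    x = np.linspace(0.0, np.pi, nx + 1)
    u = np.sin(x)
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)  # land exactly on t_end
        u[1:-1] += step / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        t += step
    return x, u, t

for nx in (20, 40, 80):
    x, u, t = heat_solve(nx)
    err = np.max(np.abs(u - np.exp(-t) * np.sin(x)))  # max-norm error
    print(f"nx = {nx:3d}   max error = {err:.3e}")
```

Because $\Delta t$ is tied to $\Delta x^2$ here, both error terms shrink by about a factor of 4 each time the grid is refined by 2; refining only in space while holding $\Delta t$ fixed would eventually leave the $O(\Delta t)$ term dominant.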

Notes and References

  1. LeVeque, Randall J. (2006). Finite Difference Methods for Differential Equations. University of Washington. pp. 3–5. CiteSeerX 10.1.1.111.1693.
  2. Ciarlet, Philippe G. (1978). The Finite Element Method for Elliptic Problems. Elsevier. pp. 105–106. doi:10.1137/1.9780898719208. ISBN 978-0-89871-514-9.
  3. Strikwerda, John C. (2004). Finite Difference Schemes and Partial Differential Equations (2nd ed.). SIAM. pp. 62–66. ISBN 978-0-898716-39-9.