Error function

In mathematics, the error function (also called the Gauss error function), often denoted by \operatorname{erf}, is a function defined as:[1] \operatorname{erf} z = \frac{2}{\sqrt\pi}\int_0^z e^{-t^2}\,\mathrm dt.

Error function
(Plot of the error function)
General definition: \operatorname{erf} z = \frac{2}{\sqrt\pi}\int_0^z e^{-t^2}\,\mathrm dt
Fields of application: probability, thermodynamics, digital communications
Domain: \R
Range: \left(-1,1\right)
Parity: odd
Root: 0
Derivative: \frac{\mathrm d}{\mathrm dz}\operatorname{erf} z = \frac{2}{\sqrt\pi} e^{-z^2}
Antiderivative: \int\operatorname{erf} z\,\mathrm dz = z\operatorname{erf} z + \frac{e^{-z^2}}{\sqrt\pi} + C
Taylor series: \operatorname{erf} z = \frac{2}{\sqrt\pi}\sum_{n=0}^\infty \frac{z}{2n+1}\prod_{k=1}^n \frac{-z^2}{k}
In some old texts, \operatorname{erf} is defined without the factor of 2/\sqrt{\pi}.[2] This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications the function argument is a real number, in which case the function value is also real.

For a random variable Y that is normally distributed with mean 0 and standard deviation 1/\sqrt{2}, \operatorname{erf} x is the probability that Y falls in the range [-x, x].

Two closely related functions are the complementary error function (\operatorname{erfc}) defined as \operatorname{erfc} z = 1 - \operatorname{erf} z, and the imaginary error function (\operatorname{erfi}) defined as \operatorname{erfi} z = -i\operatorname{erf} iz, where i is the imaginary unit.
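These definitions are easy to check numerically. The sketch below (Python; the helper names and the midpoint rule are illustrative choices, not part of any standard) evaluates the defining integrals and compares against the standard library's math.erf:

```python
import math

def erf_quad(x, n=10_000):
    """Approximate erf(x) = (2/sqrt(pi)) * int_0^x exp(-t^2) dt
    by the midpoint rule with n subintervals (illustrative, not production)."""
    h = x / n
    s = sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(n))
    return 2.0 / math.sqrt(math.pi) * h * s

def erfc_(x):
    """Complementary error function: erfc(x) = 1 - erf(x)."""
    return 1.0 - math.erf(x)

def erfi(x, n=10_000):
    """Imaginary error function for real x: (2/sqrt(pi)) * int_0^x exp(t^2) dt.
    Real-valued for real x despite the name."""
    h = x / n
    s = sum(math.exp(((k + 0.5) * h) ** 2) for k in range(n))
    return 2.0 / math.sqrt(math.pi) * h * s

print(erf_quad(1.0), math.erf(1.0))  # both ≈ 0.8427008
print(erfc_(1.0) + math.erf(1.0))    # 1.0
```

The quadrature is only a sanity check; real implementations use series, continued fractions, or rational approximations as described below.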

Name

The name "error function" and its abbreviation \operatorname{erf} were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."[3] The error function complement was also discussed by Glaisher in a separate publication in the same year.[4] For the "law of facility" of errors whose density is given by f(x) = \left(\frac{c}{\pi}\right)^{\frac{1}{2}} e^{-cx^2} (the normal distribution), Glaisher calculates the probability of an error lying between p and q as: \left(\frac{c}{\pi}\right)^{\frac{1}{2}} \int_p^q e^{-cx^2}\,\mathrm dx = \tfrac{1}{2}\left(\operatorname{erf}\left(q\sqrt{c}\right) - \operatorname{erf}\left(p\sqrt{c}\right)\right).

Applications

When the results of a series of measurements are described by a normal distribution with standard deviation \sigma and expected value 0, then \operatorname{erf}\left(\frac{a}{\sigma\sqrt{2}}\right) is the probability that the error of a single measurement lies between -a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.

The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.

The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X \sim \operatorname{Norm}[\mu,\sigma] (a normal distribution with mean \mu and standard deviation \sigma) and a constant L < \mu, it can be shown via integration by substitution: \begin{align}\Pr[X\leq L] &= \frac{1}{2} + \frac{1}{2}\operatorname{erf}\frac{L-\mu}{\sqrt{2}\sigma} \\ &\approx A \exp\left(-B\left(\frac{L-\mu}{\sigma}\right)^2\right)\end{align}

where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically \mu - L \geq \sigma\sqrt{\ln k}, then:

\Pr[X\leq L] \leq A \exp(-B \ln k) = \frac{A}{k^B}

so the probability goes to 0 as k \to \infty.

The probability for X being in the interval [L_a, L_b] can be derived as \begin{align}\Pr[L_a\leq X \leq L_b] &= \int_{L_a}^{L_b} \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \,\mathrm dx \\ &= \frac{1}{2}\left(\operatorname{erf}\frac{L_b-\mu}{\sqrt{2}\sigma} - \operatorname{erf}\frac{L_a-\mu}{\sqrt{2}\sigma}\right).\end{align}
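These relations translate directly into code. A minimal sketch in Python, assuming only the standard library's math.erf (the names normal_cdf and prob_interval are illustrative):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Pr[X <= x] for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 + 0.5 * math.erf((x - mu) / (sigma * math.sqrt(2.0)))

def prob_interval(a, b, mu=0.0, sigma=1.0):
    """Pr[a <= X <= b] as half the difference of two erf values."""
    z = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((b - mu) / z) - math.erf((a - mu) / z))

# About 68.27% of the mass lies within one standard deviation of the mean:
print(prob_interval(-1.0, 1.0))  # ≈ 0.6827
```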

Properties

The property \operatorname{erf}(-z) = -\operatorname{erf} z means that the error function is an odd function. This directly results from the fact that the integrand e^{-t^2} is an even function (the antiderivative of an even function which is zero at the origin is an odd function, and vice versa).

Since the error function is an entire function which takes real numbers to real numbers, for any complex number z: \operatorname{erf} \overline{z} = \overline{\operatorname{erf} z}, where \overline{z} is the complex conjugate of z.

The integrand f = \exp(-z^2) and f = \operatorname{erf} z are shown in the complex z-plane in figures with domain coloring.

The error function at +\infty is exactly 1 (see Gaussian integral). At the real axis, \operatorname{erf} z approaches unity at z \to +\infty and -1 at z \to -\infty. At the imaginary axis, it tends to \pm i\infty.

Taylor series

The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges, but is famously known "[...] for its bad convergence if x > 1."[5]

The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand e^{-t^2} into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as: \begin{align}\operatorname{erf} z &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty\frac{(-1)^n z^{2n+1}}{n!(2n+1)} \\ &= \frac{2}{\sqrt\pi}\left(z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots\right)\end{align} which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.

For iterative calculation of the above series, the following alternative formulation may be useful: \begin{align}\operatorname{erf} z &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty\left(z \prod_{k=1}^n \frac{-(2k-1)z^2}{k(2k+1)}\right) \\ &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty \frac{z}{2n+1}\prod_{k=1}^n \frac{-z^2}{k}\end{align} because \frac{-(2k-1)z^2}{k(2k+1)} expresses the multiplier to turn the kth term into the (k+1)th term (considering z as the first term).

The imaginary error function has a very similar Maclaurin series, which is: \begin{align}\operatorname{erfi} z &= \frac{2}{\sqrt\pi}\sum_{n=0}^\infty\frac{z^{2n+1}}{n!(2n+1)} \\ &= \frac{2}{\sqrt\pi}\left(z + \frac{z^3}{3} + \frac{z^5}{10} + \frac{z^7}{42} + \frac{z^9}{216} + \cdots\right)\end{align} which holds for every complex number z.
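The Maclaurin series can be accumulated iteratively, multiplying each term into the next. A sketch in Python (erf_series is an illustrative name), compared against math.erf:

```python
import math

def erf_series(z, terms=40):
    """erf z = (2/sqrt(pi)) * sum_{n>=0} (-1)^n z^(2n+1) / (n! (2n+1)),
    accumulated iteratively (accurate for |z| not too large)."""
    total = 0.0
    power = z  # holds (-1)^n z^(2n+1) / n!
    for n in range(terms):
        total += power / (2 * n + 1)
        power *= -z * z / (n + 1)
    return 2.0 / math.sqrt(math.pi) * total

for x in (0.5, 1.0, 2.0):
    print(x, erf_series(x), math.erf(x))
```

For large real arguments the alternating terms grow before they shrink, so cancellation makes the series a poor choice there; the asymptotic expansion below is preferred instead.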

Derivative and integral

The derivative of the error function follows immediately from its definition: \frac{\mathrm d}{\mathrm dz}\operatorname{erf} z = \frac{2}{\sqrt\pi} e^{-z^2}. From this, the derivative of the imaginary error function is also immediate: \frac{\mathrm d}{\mathrm dz}\operatorname{erfi} z = \frac{2}{\sqrt\pi} e^{z^2}. An antiderivative of the error function, obtainable by integration by parts, is z\operatorname{erf} z + \frac{e^{-z^2}}{\sqrt\pi}. An antiderivative of the imaginary error function, also obtainable by integration by parts, is z\operatorname{erfi} z - \frac{e^{z^2}}{\sqrt\pi}. Higher order derivatives are given by \operatorname{erf}^{(k)} z = \frac{2(-1)^{k-1}}{\sqrt\pi} \mathit{H}_{k-1}(z) e^{-z^2} = \frac{2}{\sqrt\pi} \frac{\mathrm d^{k-1}}{\mathrm dz^{k-1}}\left(e^{-z^2}\right),\qquad k=1, 2, \dots where \mathit{H}_k are the physicists' Hermite polynomials.

Bürmann series

An expansion,[6] which converges more rapidly for all real values of x than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem: \begin{align}\operatorname{erf} x &= \frac{2}{\sqrt\pi} \sgn x \cdot \sqrt{1-e^{-x^2}} \left(1-\frac{1}{12}\left(1-e^{-x^2}\right) -\frac{7}{480}\left(1-e^{-x^2}\right)^2 -\frac{5}{896}\left(1-e^{-x^2}\right)^3 -\frac{787}{276480}\left(1-e^{-x^2}\right)^4 - \cdots \right) \\ &= \frac{2}{\sqrt\pi} \sgn x \cdot \sqrt{1-e^{-x^2}} \left(\frac{\sqrt\pi}{2} + \sum_{k=1}^\infty c_k e^{-kx^2}\right),\end{align} where \sgn x is the sign function. By keeping only the first two coefficients and choosing c_1 = \frac{31}{200} and c_2 = -\frac{341}{8000}, the resulting approximation shows its largest relative error at x = \pm 1.3796, where it is less than 0.0036127: \operatorname{erf} x \approx \frac{2}{\sqrt\pi}\sgn x \cdot \sqrt{1-e^{-x^2}} \left(\frac{\sqrt\pi}{2} + \frac{31}{200}e^{-x^2} - \frac{341}{8000} e^{-2x^2}\right).
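The two-coefficient approximation is compact enough to write out directly. A sketch in Python (erf_burmann is an illustrative name; the constants are the c_1, c_2 quoted above):

```python
import math

def erf_burmann(x):
    """Two-coefficient Buermann-style approximation with c1 = 31/200 and
    c2 = -341/8000; relative error stays below about 3.7e-3."""
    s = math.copysign(1.0, x)
    e = math.exp(-x * x)
    return (2.0 / math.sqrt(math.pi)) * s * math.sqrt(1.0 - e) * (
        math.sqrt(math.pi) / 2.0
        + (31.0 / 200.0) * e
        - (341.0 / 8000.0) * math.exp(-2.0 * x * x)
    )

print(erf_burmann(1.0), math.erf(1.0))  # agree to about three decimal places
```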

Inverse functions

Given a complex number z, there is not a unique complex number w satisfying \operatorname{erf} w = z, so a true inverse function would be multivalued. However, for -1 < x < 1, there is a unique real number denoted \operatorname{erf}^{-1} x satisfying \operatorname{erf}\left(\operatorname{erf}^{-1} x\right) = x.

The inverse error function is usually defined with domain (-1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series[7] \operatorname{erf}^{-1} z = \sum_{k=0}^\infty\frac{c_k}{2k+1}\left(\frac{\sqrt\pi}{2}z\right)^{2k+1}, where c_0 = 1 and \begin{align}c_k &= \sum_{m=0}^{k-1}\frac{c_m c_{k-1-m}}{(m+1)(2m+1)} \\ &= \left\{1, 1, \frac{7}{6}, \frac{127}{90}, \frac{4369}{2520}, \frac{34807}{16200}, \ldots\right\}.\end{align}

So we have the series expansion (common factors have been canceled from numerators and denominators): \operatorname{erf}^{-1} z = \frac{\sqrt\pi}{2}\left(z + \frac{\pi}{12}z^3 + \frac{7\pi^2}{480}z^5 + \frac{127\pi^3}{40320}z^7 + \frac{4369\pi^4}{5806080}z^9 + \frac{34807\pi^5}{182476800}z^{11} + \cdots\right). (After cancellation the numerator/denominator fractions are entries A092676/A092677 in the OEIS; without cancellation the numerator terms are given in entry A002067.) The error function's value at \pm\infty is equal to \pm 1.

For |z| < 1, we have \operatorname{erf}\left(\operatorname{erf}^{-1} z\right) = z.

The inverse complementary error function is defined as \operatorname{erfc}^{-1}(1-z) = \operatorname{erf}^{-1} z. For real x, there is a unique real number \operatorname{erfi}^{-1} x satisfying \operatorname{erfi}\left(\operatorname{erfi}^{-1} x\right) = x. The inverse imaginary error function is defined as \operatorname{erfi}^{-1} x.[8]

For any real x, Newton's method can be used to compute \operatorname{erfi}^{-1} x, and for -1 \leq x \leq 1, the following Maclaurin series converges: \operatorname{erfi}^{-1} z = \sum_{k=0}^\infty\frac{(-1)^k c_k}{2k+1}\left(\frac{\sqrt\pi}{2}z\right)^{2k+1}, where c_k is defined as above.
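The same Newton iteration works for \operatorname{erf}^{-1}, since the derivative \frac{2}{\sqrt\pi}e^{-x^2} is elementary. A sketch in Python (erfinv is an illustrative name; the iteration cap and tolerance are arbitrary choices):

```python
import math

def erfinv(y, tol=1e-12):
    """Inverse error function on (-1, 1) by Newton's method,
    using d/dx erf x = (2/sqrt(pi)) exp(-x^2)."""
    if not -1.0 < y < 1.0:
        raise ValueError("erfinv is real-valued only for -1 < y < 1")
    x = 0.0
    for _ in range(100):
        dx = (math.erf(x) - y) / (2.0 / math.sqrt(math.pi) * math.exp(-x * x))
        x -= dx
        if abs(dx) < tol:
            break
    return x

print(erfinv(0.5))            # ≈ 0.476936
print(math.erf(erfinv(0.9)))  # ≈ 0.9
```

Starting from x = 0 is safe here because erf is increasing and concave on the positive axis, so the iterates approach the root monotonically from below.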

Asymptotic expansion

A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is \begin{align}\operatorname{erfc} x &= \frac{e^{-x^2}}{x\sqrt\pi}\left(1 + \sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2x^2)^n}\right) \\ &= \frac{e^{-x^2}}{x\sqrt\pi}\sum_{n=0}^\infty (-1)^n \frac{(2n-1)!!}{(2x^2)^n},\end{align} where (2n-1)!! is the double factorial of (2n-1), which is the product of all odd numbers up to (2n-1). This series diverges for every finite x, and its meaning as asymptotic expansion is that for any integer N \geq 1 one has \operatorname{erfc} x = \frac{e^{-x^2}}{x\sqrt\pi}\sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} + R_N(x), where the remainder is R_N(x) := \frac{(-1)^N}{\sqrt\pi} 2^{1-2N}\frac{(2N)!}{N!}\int_x^\infty t^{-2N}e^{-t^2}\,\mathrm dt, which follows easily by induction, writing e^{-t^2} = -(2t)^{-1}\left(e^{-t^2}\right)' and integrating by parts.

The asymptotic behavior of the remainder term, in Landau notation, is R_N(x) = O\left(x^{-(2N+1)} e^{-x^2}\right) as x \to \infty. This can be found by R_N(x) \propto \int_x^\infty t^{-2N}e^{-t^2}\,\mathrm dt = e^{-x^2}\int_0^\infty (t+x)^{-2N}e^{-t^2-2tx}\,\mathrm dt \leq e^{-x^2}\int_0^\infty x^{-2N}e^{-2tx}\,\mathrm dt \propto x^{-(2N+1)}e^{-x^2}. For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of \operatorname{erfc} x (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
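Truncating the asymptotic series after a few terms already gives good relative accuracy for moderate x. A sketch in Python (erfc_asymptotic is an illustrative name):

```python
import math

def erfc_asymptotic(x, N=5):
    """First N terms of erfc x ~ exp(-x^2)/(x sqrt(pi)) * sum (-1)^n (2n-1)!!/(2x^2)^n.
    The series diverges, so N must stay small; the truncation improves as x grows."""
    s = 0.0
    term = 1.0  # (-1)^n (2n-1)!! / (2x^2)^n, starting at n = 0
    for n in range(N):
        s += term
        term *= -(2 * n + 1) / (2.0 * x * x)
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

print(erfc_asymptotic(3.0), math.erfc(3.0))  # both ≈ 2.21e-5
```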

Continued fraction expansion

A continued fraction expansion of the complementary error function is:[9] \operatorname{erfc} z = \frac{z}{\sqrt\pi}e^{-z^2}\cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cfrac{a_3}{1 + \dotsb}}}},\qquad a_m = \frac{m}{2}.
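A truncated continued fraction can be evaluated bottom-up from a finite depth. A sketch in Python (erfc_cf is an illustrative name; the truncation depth is an arbitrary choice), valid for z > 0:

```python
import math

def erfc_cf(z, depth=60):
    """erfc z = (z/sqrt(pi)) e^{-z^2} / (z^2 + a1/(1 + a2/(z^2 + a3/(1 + ...)))),
    with a_m = m/2, evaluated bottom-up; the bases alternate z^2, 1, z^2, ..."""
    d = z * z if (depth + 1) % 2 == 1 else 1.0  # tail: just its base term
    for m in range(depth, 0, -1):
        base = z * z if m % 2 == 1 else 1.0
        d = base + (m / 2.0) / d
    return z / math.sqrt(math.pi) * math.exp(-z * z) / d

for x in (1.0, 2.0):
    print(x, erfc_cf(x), math.erfc(x))
```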

Factorial series

The inverse factorial series \begin{align}\operatorname{erfc} z &= \frac{e^{-z^2}}{\sqrt\pi\,z}\sum_{n=0}^\infty \frac{(-1)^n Q_n}{(z^2+1)^{\bar n}} \\ &= \frac{e^{-z^2}}{\sqrt\pi\,z}\left[1 -\frac{1}{2}\frac{1}{(z^2+1)} + \frac{1}{4}\frac{1}{\left(z^2+1\right)\left(z^2+2\right)} - \cdots \right]\end{align} converges for \operatorname{Re}(z^2) > 0. Here \begin{align}Q_n &\overset{\text{def}}{=}\frac{1}{\Gamma(1/2)}\int_0^\infty \tau(\tau-1)\cdots(\tau-n+1)\tau^{-1/2} e^{-\tau} \,d\tau \\ &= \sum_{k=0}^n \left(\tfrac{1}{2}\right)^{\bar k} s(n,k),\end{align} where z^{\bar n} denotes the rising factorial and s(n,k) denotes a signed Stirling number of the first kind.[10] [11] There also exists a representation by an infinite sum containing the double factorial: \operatorname{erf} z = \frac{2}{\sqrt\pi}\sum_{n=0}^\infty \frac{(-2)^n (2n-1)!!}{(2n+1)!} z^{2n+1}.

Numerical approximations

Approximation with elementary functions

Table of values

x     erf x      erfc x
0     0          1
0.02  0.022565   0.977435
0.04  0.045111   0.954889
0.06  0.067622   0.932378
0.08  0.090078   0.909922
0.1   0.112463   0.887537
0.2   0.222703   0.777297
0.3   0.328627   0.671373
0.4   0.428392   0.571608
0.5   0.520500   0.479500
0.6   0.603856   0.396144
0.7   0.677801   0.322199
0.8   0.742101   0.257899
0.9   0.796908   0.203092
1     0.842701   0.157299
1.1   0.880205   0.119795
1.2   0.910314   0.089686
1.3   0.934008   0.065992
1.4   0.952285   0.047715
1.5   0.966105   0.033895
1.6   0.976348   0.023652
1.7   0.983790   0.016210
1.8   0.989091   0.010909
1.9   0.992790   0.007210
2     0.995322   0.004678
2.1   0.997021   0.002979
2.2   0.998137   0.001863
2.3   0.998857   0.001143
2.4   0.999311   0.000689
2.5   0.999593   0.000407
3     0.999978   0.000022
3.5   0.999999   0.000001
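A table like the one above can be regenerated directly with the standard library's math.erf and math.erfc:

```python
import math

# Print erf x and erfc x over the usual grid of sample points.
xs = [0, 0.02, 0.04, 0.06, 0.08] + [round(0.1 * k, 1) for k in range(1, 26)] + [3, 3.5]
for x in xs:
    print(f"{x:>4}  erf = {math.erf(x):.6f}  erfc = {math.erfc(x):.6f}")
```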

Related functions

Complementary error function

The complementary error function, denoted \operatorname{erfc}, is defined as \begin{align}\operatorname{erfc} x &= 1-\operatorname{erf} x \\ &= \frac{2}{\sqrt\pi}\int_x^\infty e^{-t^2}\,\mathrm dt \\ &= e^{-x^2}\operatorname{erfcx} x,\end{align} which also defines \operatorname{erfcx}, the scaled complementary error function (which can be used instead of \operatorname{erfc} to avoid arithmetic underflow). Another form of \operatorname{erfc} x for x \geq 0 is known as Craig's formula, after its discoverer:[23] \operatorname{erfc}(x \mid x\ge 0) = \frac{2}{\pi}\int_0^{\frac{\pi}{2}} \exp\left(-\frac{x^2}{\sin^2\theta}\right)\,\mathrm d\theta. This expression is valid only for positive values of x, but it can be used in conjunction with \operatorname{erfc} x = 2 - \operatorname{erfc}(-x) to obtain \operatorname{erfc} x for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the \operatorname{erfc} of the sum of two non-negative variables is as follows:[24] \operatorname{erfc}(x+y \mid x,y\ge 0) = \frac{2}{\pi}\int_0^{\frac{\pi}{2}} \exp\left(-\frac{x^2}{\sin^2\theta} - \frac{y^2}{\cos^2\theta}\right)\,\mathrm d\theta.
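Craig's formula is convenient to verify numerically, since the integrand is bounded and the range of integration is finite. A sketch in Python using a midpoint rule (erfc_craig is an illustrative name):

```python
import math

def erfc_craig(x, n=50_000):
    """Craig's formula for x >= 0:
    erfc x = (2/pi) * int_0^{pi/2} exp(-x^2 / sin^2(theta)) dtheta,
    approximated by the midpoint rule; the integrand vanishes smoothly at 0."""
    h = (math.pi / 2.0) / n
    s = sum(math.exp(-x * x / math.sin((k + 0.5) * h) ** 2) for k in range(n))
    return 2.0 / math.pi * h * s

print(erfc_craig(1.0), math.erfc(1.0))  # both ≈ 0.157299
```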

Imaginary error function

The imaginary error function, denoted \operatorname{erfi}, is defined as \begin{align}\operatorname{erfi} x &= -i\operatorname{erf} ix \\ &= \frac{2}{\sqrt\pi}\int_0^x e^{t^2}\,\mathrm dt \\ &= \frac{2}{\sqrt\pi} e^{x^2} D(x),\end{align} where D(x) is the Dawson function (which can be used instead of \operatorname{erfi} to avoid arithmetic overflow).

Despite the name "imaginary error function", \operatorname{erfi} x is real when x is real.

When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function: w(z) = e^{-z^2}\operatorname{erfc}(-iz) = \operatorname{erfcx}(-iz).

Cumulative distribution function

The error function is essentially identical to the standard normal cumulative distribution function, denoted \Phi, also named \operatorname{norm}(x) by some software languages, as they differ only by scaling and translation. Indeed, \begin{align}\Phi(x) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-\frac{t^2}{2}}\,\mathrm dt \\ &= \frac{1}{2}\left(1+\operatorname{erf}\frac{x}{\sqrt 2}\right) \\ &= \frac{1}{2}\operatorname{erfc}\left(-\frac{x}{\sqrt 2}\right),\end{align} or rearranged for \operatorname{erf} and \operatorname{erfc}: \begin{align}\operatorname{erf}(x) &= 2\Phi\left(x\sqrt 2\right) - 1 \\ \operatorname{erfc}(x) &= 2\Phi\left(-x\sqrt 2\right) \\ &= 2\left(1 - \Phi\left(x\sqrt 2\right)\right).\end{align}
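These identities give one-line implementations of \Phi and of the Q-function in terms of erf/erfc. A sketch in Python (phi and q_function are illustrative names):

```python
import math

def phi(x):
    """Standard normal CDF via erf: Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def q_function(x):
    """Tail probability Q(x) = 1 - Phi(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(phi(0.0))   # 0.5
print(phi(1.96))  # ≈ 0.975, the familiar two-sided 95% quantile
```

Using erfc in the tail (rather than 1 − erf) avoids the catastrophic cancellation that would otherwise occur for large x, which is exactly why erfc exists as a separate function.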

Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as \begin{align}Q(x) &= \frac{1}{2} - \frac{1}{2}\operatorname{erf}\frac{x}{\sqrt 2} \\ &= \frac{1}{2}\operatorname{erfc}\frac{x}{\sqrt 2}.\end{align}

The inverse of \Phi is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as \operatorname{probit}(p) = \Phi^{-1}(p) = \sqrt 2\operatorname{erf}^{-1}(2p-1) = -\sqrt 2\operatorname{erfc}^{-1}(2p).

The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.

The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function): \operatorname{erf} x = \frac{2x}{\sqrt\pi} M\left(\tfrac{1}{2},\tfrac{3}{2},-x^2\right).

It has a simple expression in terms of the Fresnel integral.

In terms of the regularized gamma function P and the incomplete gamma function, \operatorname{erf} x = \sgn x \cdot P\left(\tfrac{1}{2}, x^2\right) = \frac{\sgn x}{\sqrt\pi}\gamma\left(\tfrac{1}{2}, x^2\right), where \sgn x is the sign function.

Iterated integrals of the complementary error function

The iterated integrals of the complementary error function are defined by[25] \begin{align}i^n\!\operatorname{erfc} z &= \int_z^\infty i^{n-1}\!\operatorname{erfc} \zeta\,\mathrm d\zeta \\ i^0\!\operatorname{erfc} z &= \operatorname{erfc} z \\ i^1\!\operatorname{erfc} z &= \operatorname{ierfc} z = \frac{1}{\sqrt\pi} e^{-z^2} - z \operatorname{erfc} z \\ i^2\!\operatorname{erfc} z &= \tfrac{1}{4}\left(\operatorname{erfc} z - 2z \operatorname{ierfc} z\right)\end{align}

The general recurrence formula is 2n \cdot i^n\!\operatorname{erfc} z = i^{n-2}\!\operatorname{erfc} z - 2z \cdot i^{n-1}\!\operatorname{erfc} z.

They have the power series i^n\!\operatorname{erfc} z = \sum_{j=0}^\infty \frac{(-z)^j}{2^{n-j}\,j!\,\Gamma\left(1+\frac{n-j}{2}\right)}, from which follow the symmetry properties i^{2m}\!\operatorname{erfc}(-z) = -i^{2m}\!\operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!} and i^{2m+1}\!\operatorname{erfc}(-z) = i^{2m+1}\!\operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}.
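The recurrence gives a simple way to evaluate i^n erfc numerically, starting from i^{-1} erfc z = (2/\sqrt\pi)e^{-z^2} and i^0 erfc z = erfc z. A sketch in Python (ierfc is an illustrative name):

```python
import math

def ierfc(n, z):
    """n-th iterated integral of erfc via
    2k * i^k erfc z = i^{k-2} erfc z - 2z * i^{k-1} erfc z,
    seeded with i^{-1} erfc z = (2/sqrt(pi)) exp(-z^2) and i^0 erfc z = erfc z."""
    prev2 = 2.0 / math.sqrt(math.pi) * math.exp(-z * z)  # i^{-1} erfc z
    prev1 = math.erfc(z)                                  # i^{0} erfc z
    if n == -1:
        return prev2
    if n == 0:
        return prev1
    for k in range(1, n + 1):
        cur = (prev2 - 2.0 * z * prev1) / (2.0 * k)
        prev2, prev1 = prev1, cur
    return prev1

# i^1 erfc z should equal exp(-z^2)/sqrt(pi) - z*erfc(z):
z = 0.7
print(ierfc(1, z), math.exp(-z * z) / math.sqrt(math.pi) - z * math.erfc(z))
```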

Implementations

As real function of a real argument
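In Python, for example, erf and erfc of a real argument are available directly in the standard math module (C99's math.h declares the same pair of functions [26]):

```python
import math

# erf and erfc ship with the standard library; no third-party package needed.
print(math.erf(1.0))   # ≈ 0.8427008
print(math.erfc(1.0))  # ≈ 0.1572992
print(math.erf(-0.5))  # equals -math.erf(0.5): the function is odd
```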

As complex function of a complex argument
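The standard library functions are real-only; for complex arguments one can fall back on the everywhere-convergent Maclaurin series (libraries such as GSL [27] or SciPy provide dedicated routines). A sketch in Python (erf_complex is an illustrative name; accuracy degrades for large |z|):

```python
import math

def erf_complex(z, terms=60):
    """erf for complex z via the Maclaurin series, suitable for moderate |z|."""
    total = 0j
    power = complex(z)  # holds (-1)^n z^(2n+1) / n!
    for n in range(terms):
        total += power / (2 * n + 1)
        power *= -z * z / (n + 1)
    return 2.0 / math.sqrt(math.pi) * total

print(erf_complex(1.0))  # matches math.erf(1.0) on the real axis
print(erf_complex(1j))   # purely imaginary, since erf(iz) = i*erfi(z)
```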

Notes and References

  1. Book: Andrews, Larry C.. Special functions of mathematics for engineers. 110. SPIE Press . 1998. 9780819426161.
  2. Book: Whittaker . E. T. . Watson . G. N.. 978-0-521-58807-2. 341. Cambridge University Press. A Course of Modern Analysis. A Course of Modern Analysis. 1927.
  3. Glaisher. James Whitbread Lee. On a class of definite integrals. London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. July 1871 . 42 . 294–302. 6 December 2017. 277 . 4 . 10.1080/14786447108640568.
  4. Glaisher. James Whitbread Lee. On a class of definite integrals. Part II. London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. September 1871 . 42. 421–436. 6 December 2017. 4 . 279 . 10.1080/14786447108640600.
  5. Web site: A007680 – OEIS. oeis.org. 2020-04-02.
  6. H. M. . Schöpf . P. H. . Supancic . On Bürmann's Theorem and Its Application to Problems of Linear and Nonlinear Heat Transfer and Diffusion . The Mathematica Journal . 2014 . 16 . 10.3888/tmj.16-11 . free .
  7. Dominici . Diego . Asymptotic analysis of the derivatives of the inverse error function . math/0607230 . 2006.
  8. Bergsma . Wicher . On a new correlation coefficient, its orthogonal decomposition and associated tests of independence . math/0604627 . 2006.
  9. Book: Cuyt . Annie A. M.. Annie Cuyt . Petersen . Vigdis B. . Verdonk . Brigitte . Waadeland . Haakon . Jones . William B. . Handbook of Continued Fractions for Special Functions . Springer-Verlag . 2008 . 978-1-4020-6948-2 .
  10. Schlömilch. Oskar Xavier . Oscar Schlömilch. 1859. Ueber facultätenreihen. . de . 4 . 390–415.
  11. Book: Nielson, Niels . Handbuch der Theorie der Gammafunktion . 1906 . B. G. Teubner . Leipzig. de. 2017-12-04. p. 283 Eq. 3.
  12. New Exponential Bounds and Approximations for the Computation of Error Probability in Fading Channels. Chiani. M.. Dardari. D. . Simon . M.K.. 2003 . IEEE Transactions on Wireless Communications. 2. 4. 840–845. 10.1109/TWC.2003.814350 . 10.1.1.190.6761.
  13. 10.1109/TCOMM.2020.3006902 . Global minimax approximations and bounds for the Gaussian Q-function by sums of exponentials. IEEE Transactions on Communications . 2020 . Tanash . I.M. . Riihonen . T. . 68 . 10 . 6514–6524 . 2007.06939 . 220514754.
  14. 10.5281/zenodo.4112978 . Coefficients for Global Minimax Approximations and Bounds for the Gaussian Q-Function by Sums of Exponentials [Data set]. Zenodo . 2020 . Tanash . I.M. . Riihonen . T..
  15. Karagiannidis . G. K. . Lioumpas . A. S. . An improved approximation for the Gaussian Q-function . 2007 . IEEE Communications Letters . 11 . 8 . 644–646. 10.1109/LCOMM.2007.070470 . 4043576 .
  16. 10.1109/LCOMM.2021.3052257. Improved coefficients for the Karagiannidis–Lioumpas approximations and bounds to the Gaussian Q-function. IEEE Communications Letters . 2021 . Tanash . I.M.. Riihonen. T.. 25. 5. 1468–1471. 2101.07631. 231639206.
  17. Chang . Seok-Ho . Cosman . Pamela C. . Pamela Cosman . Milstein . Laurence B. . November 2011 . Chernoff-Type Bounds for the Gaussian Error Function . IEEE Transactions on Communications . 59 . 11 . 2939–2944 . 10.1109/TCOMM.2011.072011.100049 . 13636638.
  18. Book: Winitzki, Sergei . Computational Science and Its Applications – ICCSA 2003 . 2003 . 2667 . Uniform approximations for transcendental functions . Springer, Berlin . 780–789 . 978-3-540-40155-1 . 10.1007/3-540-44839-X_82 . registration . https://archive.org/details/computationalsci0000iccs_a2w6 . Lecture Notes in Computer Science .
  19. Zeng . Caibin . Chen . Yang Cuan . Global Padé approximations of the generalized Mittag-Leffler function and its inverse . Fractional Calculus and Applied Analysis . 2015 . 18 . 6 . 1492–1506 . 10.1515/fca-2015-0086 . Indeed, Winitzki [32] provided the so-called global Padé approximation . 1310.5592 . 118148950 .
  20. Web site:
  21. Book: Press, William H. . Numerical Recipes in Fortran 77: The Art of Scientific Computing . 0-521-43064-X . 1992 . 214 . Cambridge University Press .
  22. Dia . Yaya D. . 2023 . Approximate Incomplete Integrals, Application to Complementary Error Function . SSRN Electronic Journal . en . 10.2139/ssrn.4487559 . 1556-5068.
  23. John W. Craig, A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations, Proceedings of the 1991 IEEE Military Communication Conference, vol. 2, pp. 571–575.
  24. 10.1109/TCOMM.2020.2986209 . A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis. IEEE Transactions on Communications . 68 . 7 . 4117–4125 . 2020 . Behnad . Aydin . 216500014.
  25. Book: Carslaw . H. S. . Horatio Scott Carslaw . Jaeger . J. C.. John Conrad Jaeger . 1959 . Conduction of Heat in Solids . 2nd . Oxford University Press . 978-0-19-853368-9 . 484.
  26. Web site: math.h - mathematical declarations . 21 April 2023 . opengroup.org . 2018 . 7.
  27. Web site: Special Functions – GSL 2.7 documentation.