Error function


In mathematics, the error function, often denoted by erf, is a function $\operatorname{erf} : \mathbb{C} \to \mathbb{C}$ defined as:
$$\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,\mathrm{d}t.$$
The integral here is a complex contour integral which is path-independent because $e^{-t^2}$ is holomorphic on the whole complex plane. In many applications, the function argument is a real number, in which case the function value is also real.
In some old texts,
the error function is defined without the factor of $\frac{2}{\sqrt{\pi}}$.
This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations.
In statistics, for non-negative real values of $x$, the error function has the following interpretation: for a real random variable $Y$ that is normally distributed with mean 0 and standard deviation $\frac{1}{\sqrt{2}}$, $\operatorname{erf} x$ is the probability that $Y$ falls in the range $[-x, x]$.
Two closely related functions are the complementary error function $\operatorname{erfc} : \mathbb{C} \to \mathbb{C}$ defined as
$$\operatorname{erfc} z = 1 - \operatorname{erf} z = \frac{2}{\sqrt{\pi}} \int_z^\infty e^{-t^2}\,\mathrm{d}t,$$
and the imaginary error function $\operatorname{erfi} : \mathbb{C} \to \mathbb{C}$ defined as
$$\operatorname{erfi} z = -i \operatorname{erf}(iz),$$
where $i$ is the imaginary unit.
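The probabilistic interpretation above can be checked numerically. The following snippet (a minimal sketch, assuming SciPy is available) compares $\operatorname{erf} x$ with the probability mass of a $\operatorname{Norm}\left(0, \frac{1}{\sqrt{2}}\right)$ variable on $[-x, x]$:

from math import erf, sqrt
from scipy.stats import norm

# Y ~ Normal with mean 0 and standard deviation 1/sqrt(2)
sigma = 1 / sqrt(2)
for x in [0.5, 1.0, 2.0]:
    p_interval = norm.cdf(x, scale=sigma) - norm.cdf(-x, scale=sigma)
    print(x, erf(x), p_interval)  # the two values agree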

Name

The name "error function" and its abbreviation were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of probability, and notably the theory of errors". The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose density is given by
$$f(x) = \left(\frac{c}{\pi}\right)^{1/2} e^{-cx^2}$$
(the normal distribution), Glaisher calculates the probability of an error lying between $p$ and $q$ as
$$\left(\frac{c}{\pi}\right)^{1/2} \int_p^q e^{-cx^2}\,\mathrm{d}x = \tfrac{1}{2}\left(\operatorname{erf}\left(q\sqrt{c}\right) - \operatorname{erf}\left(p\sqrt{c}\right)\right).$$

Applications

When the results of a series of measurements are described by a normal distribution with standard deviation $\sigma$ and expected value 0, then $\operatorname{erf}\left(\frac{a}{\sigma\sqrt{2}}\right)$ is the probability that the error of a single measurement lies between $-a$ and $+a$, for positive $a$. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable $X \sim \operatorname{Norm}[\mu, \sigma]$ and a constant $L < \mu$, it can be shown via integration by substitution:
$$\Pr[X \le L] = \frac{1}{2} + \frac{1}{2}\operatorname{erf}\frac{L - \mu}{\sqrt{2}\sigma} \approx A \exp\left(-B\left(\frac{L - \mu}{\sigma}\right)^2\right)$$
where $A$ and $B$ are certain numeric constants. If $L$ is sufficiently far from the mean, specifically $\mu - L \ge \sigma\sqrt{\ln k}$, then:
$$\Pr[X \le L] \le A \exp(-B \ln k) = \frac{A}{k^B}$$
so the probability goes to 0 as $k \to \infty$.
The probability for $X$ being in the interval $[L_a, L_b]$ can be derived as
$$\Pr[L_a \le X \le L_b] = \frac{1}{2}\left(\operatorname{erf}\frac{L_b - \mu}{\sqrt{2}\sigma} - \operatorname{erf}\frac{L_a - \mu}{\sqrt{2}\sigma}\right).$$
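For example, the interval formula can be verified directly against the normal CDF; the values of $\mu$, $\sigma$ and the interval below are illustrative:

from math import erf, sqrt
from scipy.stats import norm

mu, sigma = 1.0, 2.0   # illustrative parameters
La, Lb = -1.0, 4.0     # illustrative interval
# Pr[La <= X <= Lb] via the error function
p_erf = 0.5 * (erf((Lb - mu) / (sqrt(2) * sigma)) - erf((La - mu) / (sqrt(2) * sigma)))
# the same probability via the normal CDF
p_cdf = norm.cdf(Lb, loc=mu, scale=sigma) - norm.cdf(La, loc=mu, scale=sigma)
print(p_erf, p_cdf)  # the two values agree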

Properties

The property $\operatorname{erf}(-z) = -\operatorname{erf} z$ means that the error function is an odd function. This directly results from the fact that the integrand $e^{-t^2}$ is an even function.
Since the error function is an entire function which maps real numbers to real numbers, for any complex number $z$:
$$\operatorname{erf} \bar{z} = \overline{\operatorname{erf} z}$$
where $\bar{z}$ denotes the complex conjugate of $z$.
The integrand $f = \exp(-z^2)$ and $f = \operatorname{erf} z$ can be displayed in the complex $z$-plane with domain coloring.
The error function at $+\infty$ is exactly 1 (see Gaussian integral). On the real axis, $\operatorname{erf} z$ approaches unity at $z \to +\infty$ and $-1$ at $z \to -\infty$. On the imaginary axis, it tends to $\pm i\infty$.
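These symmetries are easy to verify numerically; the following sketch uses mpmath, which evaluates erf for complex arguments:

import mpmath as mp

z = mp.mpc(1.3, -0.7)  # an arbitrary complex test point
print(mp.erf(-z), -mp.erf(z))                   # odd function: erf(-z) = -erf(z)
print(mp.erf(mp.conj(z)), mp.conj(mp.erf(z)))   # erf(conj z) = conj(erf z)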

Taylor series

The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. For $x \gg 1$, however, cancellation of leading terms makes the Taylor expansion impractical.
The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand $e^{-t^2}$ into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:
$$\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{n!\,(2n+1)} = \frac{2}{\sqrt{\pi}} \left(z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots\right)$$
which holds for every complex number $z$. The denominator terms are sequence A007680 in the OEIS.
It is a special case of Kummer's function (the confluent hypergeometric function $M$):
$$\operatorname{erf} z = \frac{2z}{\sqrt{\pi}} M\left(\tfrac{1}{2}, \tfrac{3}{2}, -z^2\right).$$
For iterative calculation of the above series, the following alternative formulation may be useful:
$$\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \left(z \prod_{k=1}^n \frac{-(2k-1)z^2}{k(2k+1)}\right)$$
because $\frac{-(2k-1)z^2}{k(2k+1)}$ expresses the multiplier to turn the $k$th term into the $(k+1)$th term (considering $z$ as the first term).
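As an illustration, the following sketch builds the series term by term with this multiplier and compares the partial sum against math.erf:

from math import erf, pi, sqrt

def erf_series(z, n_terms=30):
    # Maclaurin series of erf, built term by term: the k-th term is
    # multiplied by -(2k-1)*z**2 / (k*(2k+1)) to produce the next term.
    term = z
    total = z
    for k in range(1, n_terms):
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
    return 2 / sqrt(pi) * total

print(erf_series(1.0), erf(1.0))  # agree to double precision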
The imaginary error function has a very similar Maclaurin series, which is:
$$\operatorname{erfi} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{z^{2n+1}}{n!\,(2n+1)}$$
which holds for every complex number $z$.

Derivative and integral

The derivative of the error function follows immediately from its definition:
$$\frac{\mathrm{d}}{\mathrm{d}z}\operatorname{erf} z = \frac{2}{\sqrt{\pi}} e^{-z^2}.$$
From this, the derivative of the imaginary error function is also immediate:
$$\frac{\mathrm{d}}{\mathrm{d}z}\operatorname{erfi} z = \frac{2}{\sqrt{\pi}} e^{z^2}.$$
Higher order derivatives are given by
$$\operatorname{erf}^{(k)} z = \frac{2(-1)^{k-1}}{\sqrt{\pi}} H_{k-1}(z)\, e^{-z^2} = \frac{2}{\sqrt{\pi}} \frac{\mathrm{d}^{k-1}}{\mathrm{d}z^{k-1}} e^{-z^2}, \qquad k = 1, 2, \dots$$
where $H_k$ are the physicists' Hermite polynomials.
An antiderivative of the error function, obtainable by integration by parts, is
$$z \operatorname{erf} z + \frac{e^{-z^2}}{\sqrt{\pi}}.$$
An antiderivative of the imaginary error function, also obtainable by integration by parts, is
$$z \operatorname{erfi} z - \frac{e^{z^2}}{\sqrt{\pi}}.$$

Bürmann series

An expansion which converges more rapidly for all real values of $x$ than a Taylor expansion is obtained by using Hans Heinrich Bürmann's theorem:
$$\operatorname{erf} x = \frac{2}{\sqrt{\pi}} \operatorname{sgn} x \cdot \sqrt{1 - e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \sum_{k=1}^\infty c_k e^{-kx^2}\right)$$
where $\operatorname{sgn}$ is the sign function. By keeping only the first two coefficients and choosing $c_1 = \frac{31}{200}$ and $c_2 = -\frac{341}{8000}$, the resulting approximation
$$\operatorname{erf} x \approx \frac{2}{\sqrt{\pi}} \operatorname{sgn} x \cdot \sqrt{1 - e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \frac{31}{200} e^{-x^2} - \frac{341}{8000} e^{-2x^2}\right)$$
shows its largest relative error at $x = \pm 1.3796$, where it is less than 0.0036127.
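A short sketch of this two-coefficient Bürmann approximation, locating its worst relative error on a grid:

import numpy as np
from scipy.special import erf

def erf_burmann(x):
    # two-coefficient Bürmann approximation
    e = np.exp(-x**2)
    return (2 / np.sqrt(np.pi)) * np.sign(x) * np.sqrt(1 - e) * (
        np.sqrt(np.pi) / 2 + 31 / 200 * e - 341 / 8000 * e**2)

x = np.linspace(0.01, 5, 2000)
rel_err = np.abs(erf_burmann(x) - erf(x)) / erf(x)
print(x[np.argmax(rel_err)], rel_err.max())  # worst point and its relative error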

Inverse functions

Given a complex number $z$, there is not a unique complex number $w$ satisfying $\operatorname{erf} w = z$, so a true inverse function would be multivalued. However, for $-1 < x < 1$, there is a unique real number denoted $\operatorname{erf}^{-1} x$ satisfying
$$\operatorname{erf}\left(\operatorname{erf}^{-1} x\right) = x.$$
The inverse error function is usually defined with domain $(-1, 1)$, and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk $|z| < 1$ of the complex plane, using the Maclaurin series
$$\operatorname{erf}^{-1} z = \sum_{k=0}^\infty \frac{c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1}$$
where $c_0 = 1$ and
$$c_k = \sum_{m=0}^{k-1} \frac{c_m c_{k-1-m}}{(m+1)(2m+1)} = \left\{1, 1, \frac{7}{6}, \frac{127}{90}, \frac{4369}{2520}, \frac{34807}{16200}, \dots\right\}.$$
So we have the series expansion (common factors have been canceled from numerators and denominators):
$$\operatorname{erf}^{-1} z = \frac{\sqrt{\pi}}{2} \left(z + \frac{\pi}{12} z^3 + \frac{7\pi^2}{480} z^5 + \frac{127\pi^3}{40320} z^7 + \frac{4369\pi^4}{5806080} z^9 + \frac{34807\pi^5}{182476800} z^{11} + \cdots\right).$$
The error function's value at $\pm\infty$ is equal to $\pm 1$.
For $|z| < 1$, we have $\operatorname{erf}\left(\operatorname{erf}^{-1} z\right) = z$.
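The coefficient recurrence and the resulting series are straightforward to implement; the following sketch compares a truncated series against scipy.special.erfinv:

from math import pi, sqrt
from scipy.special import erfinv

def inverse_erf_series(z, n_terms=40):
    # c_0 = 1, c_k = sum_{m=0}^{k-1} c_m * c_{k-1-m} / ((m+1)(2m+1))
    c = [1.0]
    for k in range(1, n_terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1))
                     for m in range(k)))
    w = sqrt(pi) / 2 * z
    return sum(ck / (2 * k + 1) * w ** (2 * k + 1) for k, ck in enumerate(c))

print(inverse_erf_series(0.5), erfinv(0.5))  # agree for |z| well inside (-1, 1)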
The inverse complementary error function is defined as
$$\operatorname{erfc}^{-1}(1 - z) = \operatorname{erf}^{-1} z.$$
For real $x$, there is a unique real number $\operatorname{erfi}^{-1} x$ satisfying $\operatorname{erfi}\left(\operatorname{erfi}^{-1} x\right) = x$. The inverse imaginary error function is defined as $\operatorname{erfi}^{-1} x$.
For any real $x$, Newton's method can be used to compute $\operatorname{erfi}^{-1} x$, and for $-1 \le x \le 1$, the following Maclaurin series converges:
$$\operatorname{erfi}^{-1} z = \sum_{k=0}^\infty \frac{(-1)^k c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1}$$
where $c_k$ is defined as above.

Asymptotic expansion

A useful asymptotic expansion of the complementary error function for large real $x$ is
$$\operatorname{erfc} x = \frac{e^{-x^2}}{x\sqrt{\pi}} \left(1 + \sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2x^2)^n}\right) = \frac{e^{-x^2}}{x\sqrt{\pi}} \sum_{n=0}^\infty (-1)^n \frac{(2n-1)!!}{(2x^2)^n}$$
where $(2n-1)!!$ is the double factorial of $(2n-1)$, which is the product of all odd numbers up to $(2n-1)$. This series diverges for every finite $x$, and its meaning as asymptotic expansion is that for any integer $N \ge 1$ one has
$$\operatorname{erfc} x = \frac{e^{-x^2}}{x\sqrt{\pi}} \sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} + R_N(x)$$
where the remainder is
$$R_N(x) = \frac{(-1)^N}{\sqrt{\pi}}\, 2^{1-2N}\, \frac{(2N)!}{N!} \int_x^\infty t^{-2N} e^{-t^2}\,\mathrm{d}t,$$
which follows easily by induction, writing
$$e^{-t^2} = -\frac{1}{2t}\left(e^{-t^2}\right)'$$
and integrating by parts.
The asymptotic behavior of the remainder term, in Landau notation, is
$$R_N(x) = O\left(x^{-(2N+1)} e^{-x^2}\right)$$
as $x \to \infty$. This can be found by
$$R_N(x) \propto \int_x^\infty t^{-2N} e^{-t^2}\,\mathrm{d}t = e^{-x^2} \int_0^\infty (t + x)^{-2N} e^{-t^2 - 2tx}\,\mathrm{d}t \le e^{-x^2} \int_0^\infty x^{-2N} e^{-2tx}\,\mathrm{d}t \propto x^{-(2N+1)} e^{-x^2}.$$
For large enough values of $x$, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of $\operatorname{erfc} x$.
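The following sketch evaluates the truncated asymptotic series and shows that a handful of terms already gives good relative accuracy for moderately large $x$:

import numpy as np
from scipy.special import erfc

def erfc_asymptotic(x, n_terms=4):
    # erfc x ≈ exp(-x²)/(x √π) · Σ_{n=0}^{N-1} (-1)^n (2n-1)!! / (2x²)^n
    total, term = 1.0, 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2 * x * x)
        total += term
    return np.exp(-x * x) / (x * np.sqrt(np.pi)) * total

for x in [2.0, 3.0, 5.0]:
    approx = erfc_asymptotic(x)
    print(x, approx, erfc(x), abs(approx - erfc(x)) / erfc(x))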

Continued fraction expansion

A continued fraction expansion of the complementary error function was found by Laplace:
$$\operatorname{erfc} z = \frac{z}{\sqrt{\pi}} e^{-z^2} \cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cfrac{a_3}{1 + \cdots}}}}, \qquad a_m = \frac{m}{2}.$$
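Evaluated bottom-up to a finite depth, the continued fraction yields accurate values of erfc for positive real arguments; a minimal sketch:

from math import exp, pi, sqrt
from scipy.special import erfc

def erfc_cf(z, depth=40):
    # Laplace's continued fraction with a_m = m/2, evaluated bottom-up;
    # the partial denominators alternate between 1 (odd levels) and z² (even levels)
    f = 0.0
    for m in range(depth, 0, -1):
        base = 1.0 if m % 2 == 1 else z * z
        f = (m / 2) / (base + f)
    return z / sqrt(pi) * exp(-z * z) / (z * z + f)

for z in [0.5, 1.0, 3.0]:
    print(z, erfc_cf(z), erfc(z))  # converges quickly, especially for larger z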

Factorial series

The inverse factorial series:
$$\operatorname{erfc} z = \frac{e^{-z^2}}{z\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n Q_n}{(z^2 + 1)^{\bar{n}}}$$
converges for $\operatorname{Re}(z^2) > 0$. Here
$$Q_n = \frac{1}{\Gamma\left(\frac{1}{2}\right)} \int_0^\infty \tau(\tau - 1)\cdots(\tau - n + 1)\,\tau^{-1/2} e^{-\tau}\,\mathrm{d}\tau = \sum_{k=0}^n \left(\tfrac{1}{2}\right)^{\bar{k}} s(n, k),$$
$(z^2 + 1)^{\bar{n}}$ denotes the rising factorial, and $s(n, k)$ denotes a signed Stirling number of the first kind.
The Taylor series can also be written in terms of the double factorial $(2n)!! = 2^n n!$:
$$\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-2)^n z^{2n+1}}{(2n+1)\,(2n)!!}.$$

Bounds and numerical approximations

Approximation with elementary functions

Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
$$\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4\right)^4}, \qquad x \ge 0$$
(maximum error: 5×10⁻⁴), where $a_1 = 0.278393$, $a_2 = 0.230389$, $a_3 = 0.000972$, $a_4 = 0.078108$;
$$\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3\right) e^{-x^2}, \qquad t = \frac{1}{1 + px}, \quad x \ge 0$$
(maximum error: 2.5×10⁻⁵), where $p = 0.47047$, $a_1 = 0.3480242$, $a_2 = -0.0958798$, $a_3 = 0.7478556$;
$$\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + \cdots + a_6 x^6\right)^{16}}, \qquad x \ge 0$$
(maximum error: 3×10⁻⁷), where $a_1 = 0.0705230784$, $a_2 = 0.0422820123$, $a_3 = 0.0092705272$, $a_4 = 0.0001520143$, $a_5 = 0.0002765672$, $a_6 = 0.0000430638$;
$$\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5\right) e^{-x^2}, \qquad t = \frac{1}{1 + px}, \quad x \ge 0$$
(maximum error: 1.5×10⁻⁷), where $p = 0.3275911$, $a_1 = 0.254829592$, $a_2 = -0.284496736$, $a_3 = 1.421413741$, $a_4 = -1.453152027$, $a_5 = 1.061405429$.
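For example, the last (and most accurate) of these approximations can be implemented in a few lines; a sketch verifying its stated maximum error:

import numpy as np
from scipy.special import erf

def erf_as(x):
    # Abramowitz & Stegun 7.1.26, maximum absolute error about 1.5e-7, x >= 0
    p = 0.3275911
    a = [0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429]
    t = 1.0 / (1.0 + p * x)
    poly = t * (a[0] + t * (a[1] + t * (a[2] + t * (a[3] + t * a[4]))))
    return 1.0 - poly * np.exp(-x * x)

x = np.linspace(0, 5, 1000)
print(np.abs(erf_as(x) - erf(x)).max())  # about 1.5e-7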
One can improve the accuracy of the A&S approximation by extending it with three extra parameters (a quadratic term in the denominator of $t$ and two additional polynomial coefficients),
$$\operatorname{erf} x \approx 1 - t\left(a_1 + a_2 t + \cdots + a_7 t^6\right) e^{-x^2}, \qquad t = \frac{1}{1 + p_1 x + p_2 x^2}, \quad x \ge 0$$
where p1 = 0.406742016006509,
p2 = 0.0072279182302319,
a1 = 0.316879890481381,
a2 = -0.138329314150635,
a3 = 1.08680830347054,
a4 = -1.11694155120396,
a5 = 1.20644903073232,
a6 = -0.393127715207728,
a7 = 0.0382613542530727.
This extension reduces the maximum error substantially below that of the original five-coefficient formula. The parameters are obtained by fitting the extended approximation to accurate values of the error function using the following Python code.

import numpy as np
from math import erf
from scipy.optimize import least_squares

# Extended A&S approximation:
#   erf(x) ≈ 1 - t * exp(-x**2) * (a1 + a2*t + ... + a7*t**6)
# where now
#   t = 1 / (1 + p1*x + p2*x**2)
# We fit parameters p1, p2, a1..a7 over x in [0, 10].

def approx_erf(params, x):
    p1 = params[0]
    p2 = params[1]
    a = params[2:]
    t = 1.0 / (1.0 + p1 * x + p2 * x**2)
    poly = np.zeros_like(x)
    tt = np.ones_like(x)  # t^0
    # polynomial: a1*t^0 + a2*t^1 + ... + a7*t^6
    for ak in a:
        poly += ak * tt
        tt *= t
    return 1.0 - t * np.exp(-x**2) * poly

def residuals(params):
    return approx_erf(params, xs) - ys

# Prepare data for fitting
N = 300
xmin = 0
xmax = 10
xs = np.linspace(xmin, xmax, N)
ys = np.array([erf(x) for x in xs])

# Initial guess for parameters
# Start from original A&S values and extend them conservatively
p1_0 = 0.3275911  # original A&S p
p2_0 = 0.0        # new denominator parameter
# original A&S 5 coefficients, add two => 7 in total
a0 = [0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429, 0.0, 0.0]

result = least_squares(residuals, [p1_0, p2_0] + a0,
                       xtol=1e-14, ftol=1e-14, gtol=1e-14, max_nfev=5000)
params = result.x
p1_fit = params[0]
p2_fit = params[1]
a_fit = params[2:]

# Print fitted parameters
print("p1 =", p1_fit)
print("p2 =", p2_fit)
for i, ai in enumerate(a_fit, start=1):
    print(f"a{i} = {ai}")

# Evaluate approximation error
approx_vals = approx_erf(params, xs)
abs_err = np.abs(approx_vals - ys)
print("N =", N)
print("max abs error =", abs_err.max())
print("mean abs error =", abs_err.mean())

All of these approximations are valid for $x \ge 0$. To use these approximations for negative $x$, use the fact that $\operatorname{erf} x$ is an odd function, so $\operatorname{erf} x = -\operatorname{erf}(-x)$.
Exponential bounds and a pure exponential approximation for the complementary error function are given by
$$\operatorname{erfc} x \le \tfrac{1}{2} e^{-2x^2} + \tfrac{1}{2} e^{-x^2} \le e^{-x^2}, \qquad x > 0,$$
$$\operatorname{erfc} x \approx \tfrac{1}{6} e^{-x^2} + \tfrac{1}{2} e^{-\frac{4}{3}x^2}, \qquad x > 0.$$
The above have been generalized to sums of $N$ exponentials with increasing accuracy in terms of $N$ so that $\operatorname{erfc} x$ can be accurately approximated or bounded by $2\tilde{Q}\left(\sqrt{2}x\right)$, where
$$\tilde{Q}(x) = \sum_{n=1}^N a_n e^{-b_n x^2}.$$
In particular, there is a systematic methodology to solve the numerical coefficients $\{(a_n, b_n)\}_{n=1}^N$ that yield a minimax approximation or bound for the closely related Q-function: $Q(x) \approx \tilde{Q}(x)$, $Q(x) \le \tilde{Q}(x)$, or $Q(x) \ge \tilde{Q}(x)$ for $x \ge 0$. The coefficients $\{(a_n, b_n)\}_{n=1}^N$ for many variations of the exponential approximations and bounds up to $N = 25$ have been released to open access as a comprehensive dataset.
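A quick numerical check of the exponential bounds and of the two-term exponential approximation above (a sketch):

import numpy as np
from scipy.special import erfc

x = np.linspace(0.01, 5, 1000)
upper_two = 0.5 * np.exp(-2 * x**2) + 0.5 * np.exp(-x**2)
upper_one = np.exp(-x**2)
print(np.all(erfc(x) <= upper_two), np.all(upper_two <= upper_one))  # True True
# the two-term approximation is tightest for moderate-to-large x
xa = x[x >= 0.5]
approx = np.exp(-xa**2) / 6 + np.exp(-4 * xa**2 / 3) / 2
print(np.abs(approx - erfc(xa)).max())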
A tight approximation of the complementary error function for $x \in [0, \infty)$ is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters $\{A, B\}$ that
$$\operatorname{erfc} x \approx \frac{\left(1 - e^{-Ax}\right) e^{-x^2}}{B\sqrt{\pi}\,x}.$$
They determined $\{A, B\} = \{1.98, 1.135\}$, which gave a good approximation for all $x \ge 0$. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.
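A sketch of the Karagiannidis–Lioumpas approximation with the parameters quoted above, measuring its relative error against scipy.special.erfc:

import numpy as np
from scipy.special import erfc

def erfc_kl(x, A=1.98, B=1.135):
    # Karagiannidis & Lioumpas: erfc x ≈ (1 - exp(-A*x)) * exp(-x²) / (B*sqrt(pi)*x)
    return (1 - np.exp(-A * x)) * np.exp(-x * x) / (B * np.sqrt(np.pi) * x)

x = np.linspace(0.05, 6, 1000)
rel_err = np.abs(erfc_kl(x) - erfc(x)) / erfc(x)
print(rel_err.max())  # worst-case relative error over the grid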
A single-term lower bound is
$$\operatorname{erfc} x \ge \sqrt{\frac{2e}{\pi}} \frac{\sqrt{\beta - 1}}{\beta} e^{-\beta x^2}, \qquad x \ge 0, \quad \beta > 1,$$
where the parameter $\beta$ can be picked to minimize error on the desired interval of approximation.
Another approximation is given by Sergei Winitzki using his "global Padé approximations":
$$\operatorname{erf} x \approx \operatorname{sgn} x \cdot \sqrt{1 - \exp\left(-x^2 \frac{\frac{4}{\pi} + ax^2}{1 + ax^2}\right)}$$
where
$$a = \frac{8(\pi - 3)}{3\pi(4 - \pi)} \approx 0.140012.$$
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the relative error is less than 0.00035 for all real $x$. Using the alternate value $a \approx 0.147$ reduces the maximum relative error to about 0.00013.
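A sketch of Winitzki's approximation with the alternate constant $a \approx 0.147$:

import numpy as np
from scipy.special import erf

def erf_winitzki(x, a=0.147):
    # erf x ≈ sgn(x) * sqrt(1 - exp(-x² (4/π + a x²) / (1 + a x²)))
    x2 = x * x
    return np.sign(x) * np.sqrt(1 - np.exp(-x2 * (4 / np.pi + a * x2) / (1 + a * x2)))

x = np.linspace(-4, 4, 1601)
mask = x != 0
rel_err = np.abs((erf_winitzki(x[mask]) - erf(x[mask])) / erf(x[mask]))
print(rel_err.max())  # about 1.3e-4 with a = 0.147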
One can extend the "global Padé" approximation by replacing the degree-one rational function in $x^2$ with a higher-order one, for example
$$\operatorname{erf} x \approx \operatorname{sgn} x \cdot \sqrt{1 - \exp\left(-x^2 \frac{p_0 + p_1 x^2 + p_2 x^4}{1 + q_1 x^2 + q_2 x^4}\right)},$$
with all coefficients fitted by least squares; this further reduces the maximum error, as demonstrated by the following Python script.

import numpy, math
from scipy.optimize import least_squares

# approximation to erf: extended "global Pade" form
#   erf(x) ≈ sign(x) * sqrt(1 - exp(-x**2 * P(x**2) / Q(x**2)))
def approx_erf(params, x):
    p0, p1, p2, q1, q2 = params
    x2 = x * x
    frac = (p0 + p1 * x2 + p2 * x2**2) / (1.0 + q1 * x2 + q2 * x2**2)
    return numpy.sign(x) * numpy.sqrt(1.0 - numpy.exp(-x2 * frac))

def residuals(params):
    return approx_erf(params, xs) - ys

# data for fitting
N = 200
xmin = 0
xmax = 9
xs = numpy.linspace(xmin, xmax, N)
ys = numpy.array([math.erf(x) for x in xs])
# initial guess: Winitzki's one-parameter form with a = 0.147
a = 0.147
params0 = numpy.array([4 / math.pi, a, 0.0, a, 0.0])

# fitting
result = least_squares(residuals, params0,
                       xtol=1e-14, ftol=1e-14, gtol=1e-14, max_nfev=5000)
params = result.x

# print out fitted parameters
for i, pi in enumerate(params):
    print(f"p{i} = {pi}")

# evaluate approximation error
approx_vals = approx_erf(params, xs)
abs_err = numpy.abs(approx_vals - ys)
print("N =", N)
print("max abs error =", abs_err.max())
print("mean abs error =", abs_err.mean())

Winitzki's approximation can be inverted to obtain an approximation for the inverse error function:
$$\operatorname{erf}^{-1} x \approx \operatorname{sgn} x \cdot \sqrt{\sqrt{\left(\frac{2}{\pi a} + \frac{\ln(1 - x^2)}{2}\right)^2 - \frac{\ln(1 - x^2)}{a}} - \left(\frac{2}{\pi a} + \frac{\ln(1 - x^2)}{2}\right)}.$$
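A sketch of this inverted approximation, compared against scipy.special.erfinv:

import numpy as np
from scipy.special import erfinv

def erfinv_winitzki(x, a=0.147):
    # inversion of erf x ≈ sqrt(1 - exp(-x² (4/π + a x²) / (1 + a x²)))
    u = np.log(1 - x * x)
    c = 2 / (np.pi * a) + u / 2
    return np.sign(x) * np.sqrt(np.sqrt(c * c - u / a) - c)

x = np.linspace(-0.99, 0.99, 199)
print(np.abs(erfinv_winitzki(x) - erfinv(x)).max())  # error grows near the endpoints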
An approximation with a maximal error of 1.2×10⁻⁷ for any real argument is:
$$\operatorname{erf} x = \begin{cases} 1 - \tau & x \ge 0 \\ \tau - 1 & x < 0 \end{cases}$$
with
$$\tau = t \exp\left(-x^2 - 1.26551223 + 1.00002368t + 0.37409196t^2 + 0.09678418t^3 - 0.18628806t^4 + 0.27886807t^5 - 1.13520398t^6 + 1.48851587t^7 - 0.82215223t^8 + 0.17087277t^9\right)$$
where $t = \frac{1}{1 + \frac{1}{2}|x|}$.
An approximation of $\operatorname{erfc}$ with a maximum relative error less than $2^{-53}$ ($\approx 1.1 \times 10^{-16}$) in absolute value is also known: for $x \ge 0$ it is given by a fixed product of rational functions of $x$ multiplied by $e^{-x^2}$,
and for $x < 0$ it follows from the reflection formula
$$\operatorname{erfc} x = 2 - \operatorname{erfc}(-x).$$
A simple approximation for real-valued arguments can be done through hyperbolic functions:
$$\operatorname{erf} x \approx \tanh\left(\frac{2}{\sqrt{\pi}}\left(x + a x^3\right)\right)$$
for a suitably chosen small constant $a$, which keeps the absolute difference between the two sides uniformly small for all real $x$.
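The constant $a$ can be recovered numerically; the following sketch (the cubic-correction tanh form above is assumed) fits $a$ by minimizing the maximum absolute error:

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erf

x = np.linspace(-4, 4, 801)

def max_abs_err(a):
    approx = np.tanh(2 / np.sqrt(np.pi) * (x + a * x**3))
    return np.abs(approx - erf(x)).max()

res = minimize_scalar(max_abs_err, bounds=(0.0, 0.2), method="bounded")
print(res.x, res.fun)  # fitted constant and resulting worst absolute error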
Since the error function and the Gaussian Q-function are closely related through the identity $\operatorname{erfc} x = 2Q\left(\sqrt{2}x\right)$, or equivalently $Q(x) = \tfrac{1}{2}\operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right)$, bounds developed for the Q-function can be adapted to approximate the complementary error function. A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments was introduced by Abreu based on a simple algebraic expression with only two exponential terms.
These bounds stem from a unified two-parameter form in which the two parameters are selected to ensure the bounding properties: one choice of the pair yields the lower bound, and another choice yields the upper bound.
These expressions maintain simplicity and tightness, providing a practical trade-off between accuracy and ease of computation. They are particularly valuable in theoretical contexts, such as communication theory over fading channels, where both functions frequently appear. Additionally, the original Q-function bounds can be extended to $Q^n(x)$ for positive integers $n$ via the binomial theorem, suggesting potential adaptability for powers of $\operatorname{erfc} x$, though this is less commonly required in error function applications.