Normal distribution


In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}.
The parameter μ is the mean or expectation of the distribution, while the parameter σ² is the variance. The standard deviation of the distribution is σ. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
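As a concrete illustration, the density above can be evaluated with a short Python function (the name `normal_pdf` is ours, not a standard API):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The density peaks at the mean, where its value is 1 / (sigma * sqrt(2*pi)).
print(normal_pdf(0.0))                      # ≈ 0.3989
print(normal_pdf(5.0, mu=5.0, sigma=2.0))   # ≈ 0.1995
```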
Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.
Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.
A normal distribution is sometimes informally called a bell curve. However, many other distributions are bell-shaped.
The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution.

Definitions

Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is the special case with μ = 0 and σ = 1, and it is described by the probability density function
\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.
The variable z has a mean of 0 and a variance and standard deviation of 1. The density has its peak at z = 0 and inflection points at z = +1 and z = −1.
Although the density above is most commonly known as the standard normal, a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss, for example, once defined the standard normal as
\varphi(z) = \frac{e^{-z^2}}{\sqrt{\pi}},
which has a variance of 1/2, and Stephen Stigler once defined the standard normal as
\varphi(z) = e^{-\pi z^2},
which has a simple functional form and a variance of 1/(2π).

General normal distribution

Every normal distribution is a version of the standard normal distribution whose domain has been stretched by a factor σ (the standard deviation) and then translated by μ (the mean):
f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x-\mu}{\sigma}\right).
The probability density must be scaled by 1/σ so that the integral is still 1.
If Z is a standard normal deviate, then X = σZ + μ will have a normal distribution with expected value μ and standard deviation σ. This is equivalent to saying that the standard normal distribution Z can be scaled/stretched by a factor of σ and shifted by μ to yield a different normal distribution, called X. Conversely, if X is a normal deviate with parameters μ and σ², then this X distribution can be re-scaled and shifted via the formula Z = (X − μ)/σ to convert it to the standard normal distribution. This variate is also called the standardized form of X.
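The scale-and-shift relation can be checked empirically with Python's standard library; the sample size and seed below are arbitrary choices for the demonstration:

```python
import random
import statistics

random.seed(0)
mu, sigma = 10.0, 2.0

# Draw standard normal deviates Z, then transform X = mu + sigma * Z.
zs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
xs = [mu + sigma * z for z in zs]
print(statistics.mean(xs), statistics.stdev(xs))  # close to 10.0 and 2.0

# Standardizing back via (X - mu) / sigma recovers the original deviates.
back = [(x - mu) / sigma for x in xs]
assert all(abs(a - b) < 1e-12 for a, b in zip(back, zs))
```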

Notation

The probability density of the standard Gaussian distribution is often denoted with the Greek letter φ (phi). The alternative form of the Greek letter phi, ϕ, is also used quite often.
The normal distribution is often referred to as N(μ, σ²) or 𝒩(μ, σ²). Thus when a random variable X is normally distributed with mean μ and standard deviation σ, one may write
X \sim \mathcal{N}(\mu, \sigma^2).

Alternative parameterizations

Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the standard deviation σ or the variance σ². The precision is normally defined as the reciprocal of the variance, τ = 1/σ². The formula for the distribution then becomes
f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau (x-\mu)^2 / 2}.
This choice is claimed to have advantages in numerical computations when σ² is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.
Alternatively, the reciprocal of the standard deviation, τ′ = 1/σ, might be defined as the precision, in which case the expression of the normal distribution becomes
f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-\tau'^2 (x-\mu)^2 / 2}.
According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.
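A quick numerical check that all three parameterizations describe the same density (function names are ours, chosen for the example):

```python
import math

def pdf_sigma(x, mu, sigma):
    """Standard parameterization in terms of the standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def pdf_tau(x, mu, tau):
    """Precision as the reciprocal variance, tau = 1 / sigma^2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

def pdf_tau_prime(x, mu, tp):
    """Precision as the reciprocal standard deviation, tau' = 1 / sigma."""
    return tp / math.sqrt(2 * math.pi) * math.exp(-(tp ** 2) * (x - mu) ** 2 / 2)

mu, sigma = 1.5, 0.7
for x in (-1.0, 0.0, 2.3):
    a = pdf_sigma(x, mu, sigma)
    b = pdf_tau(x, mu, 1 / sigma ** 2)
    c = pdf_tau_prime(x, mu, 1 / sigma)
    assert abs(a - b) < 1e-12 and abs(a - c) < 1e-12
```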
Normal distributions form an exponential family with natural parameters θ₁ = μ/σ² and θ₂ = −1/(2σ²), and natural statistics x and x². The dual expectation parameters for the normal distribution are η₁ = μ and η₂ = μ² + σ².

Cumulative distribution function

The cumulative distribution function of the standard normal distribution, usually denoted with the capital Greek letter Φ (phi), is the integral
\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.

Error function

The related error function erf(x) gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range [−x, x]. That is:
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.
These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see the approximation methods described below for more.
The two functions are closely related, namely
\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right].
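Since Python's standard library exposes the error function as `math.erf`, this identity gives an immediate implementation of Φ:

```python
import math

def Phi(x):
    """Standard normal CDF via the error-function identity."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(Phi(0.0))              # 0.5
print(Phi(1.96))             # ≈ 0.975
print(Phi(-1.0) + Phi(1.0))  # 1.0, by the symmetry Phi(-x) = 1 - Phi(x)
```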
For a generic normal distribution with density f, mean μ and variance σ², the cumulative distribution function is
F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right).
The probability that a normal deviate X lies in the semi-closed interval (a, b], where a < b, is therefore
P(a < X \le b) = \Phi\!\left(\frac{b-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right).
The complement of the standard normal cumulative distribution function, Q(x) = 1 − Φ(x), is often called the Q-function, especially in engineering texts. It gives the probability that the value of a standard normal random variable X will exceed x: P(X > x). Other definitions of the Q-function, all of which are simple transformations of Φ, are also used occasionally.
The graph of the standard normal cumulative distribution function Φ has 2-fold rotational symmetry around the point (0, 1/2); that is, Φ(−x) = 1 − Φ(x). Its antiderivative (indefinite integral) can be expressed as follows:
\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C.
The cumulative distribution function of the standard normal distribution can be expanded by integration by parts into a series:
\Phi(x) = \frac{1}{2} + \varphi(x)\left[x + \frac{x^3}{3} + \frac{x^5}{3 \cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right],
where !! denotes the double factorial.
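The double-factorial series converges quickly for moderate x; a straightforward Python implementation (our own sketch, with an arbitrary term count) can be compared against the erf-based value:

```python
import math

def Phi_series(x, terms=50):
    """Standard normal CDF via the double-factorial series
    Phi(x) = 1/2 + phi(x) * sum_{n>=0} x^(2n+1) / (2n+1)!!"""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    term, total = x, 0.0        # first term: x^1 / 1!! = x
    for n in range(terms):
        total += term
        term *= x * x / (2 * n + 3)  # x^(2n+1)/(2n+1)!! -> x^(2n+3)/(2n+3)!!
    return 0.5 + phi * total

exact = 0.5 * (1 + math.erf(1.0 / math.sqrt(2)))
print(Phi_series(1.0), exact)  # the two agree to high precision
```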
An asymptotic expansion of the cumulative distribution function for large x can also be derived using integration by parts.
A quick approximation to the standard normal distribution's cumulative distribution function can be found by using a Taylor series approximation:
\Phi(x) \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{n} \frac{(-1)^k\, x^{2k+1}}{2^k\, k!\, (2k+1)}.

Recursive computation with Taylor series expansion

The recursive nature of the family of derivatives of φ may be used to easily construct a rapidly converging Taylor series expansion using recursive entries about any point of known value of the distribution, Φ(x₀):
\Phi(x) = \sum_{n=0}^{\infty} \frac{\Phi^{(n)}(x_0)}{n!} (x - x_0)^n,
where:
\Phi^{(0)}(x_0) = \Phi(x_0),
\Phi^{(1)}(x_0) = \varphi(x_0),
\Phi^{(n)}(x_0) = -\left(x_0\, \Phi^{(n-1)}(x_0) + (n-2)\, \Phi^{(n-2)}(x_0)\right), \quad n \ge 2.
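A minimal sketch of this recursion in Python, assuming the known value Φ(x₀) is supplied by `math.erf` (the function name and term count are our choices):

```python
import math

def Phi_taylor(x, x0, n_terms=30):
    """Phi(x) from a Taylor expansion about x0, using the recursion
    Phi^(n)(x0) = -(x0 * Phi^(n-1)(x0) + (n-2) * Phi^(n-2)(x0)) for n >= 2."""
    d = [0.0] * n_terms
    d[0] = 0.5 * (1 + math.erf(x0 / math.sqrt(2)))            # Phi(x0), the known value
    d[1] = math.exp(-0.5 * x0 * x0) / math.sqrt(2 * math.pi)  # phi(x0)
    for n in range(2, n_terms):
        d[n] = -(x0 * d[n - 1] + (n - 2) * d[n - 2])
    total, power, fact = 0.0, 1.0, 1.0
    for n in range(n_terms):
        total += d[n] * power / fact   # d[n] * (x - x0)^n / n!
        power *= (x - x0)
        fact *= (n + 1)
    return total

exact = 0.5 * (1 + math.erf(1.3 / math.sqrt(2)))
print(Phi_taylor(1.3, 1.0), exact)  # expansion about x0 = 1 evaluated at x = 1.3
```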

Using the Taylor series and Newton's method for the inverse function

An application for the above Taylor series expansion is to use Newton's method to reverse the computation. That is, if we have a value for the cumulative distribution function, Φ(x), but do not know the x needed to obtain that Φ(x), we can use Newton's method to find x, and use the Taylor series expansion above to minimize the number of computations. Newton's method is ideal for this problem because Φ(x) is an integral of the standard normal density, so its first derivative is simply that density, φ(x), which is readily available to use in the Newton's method solution.
To solve, select a known approximate solution, x₀, to the desired Φ(x). x₀ may be a value from a distribution table, or an intelligent estimate followed by a computation of Φ(x₀) using any desired means. Use this value of x₀ and the Taylor series expansion above to minimize computations.
Repeat the following process until the difference between the computed Φ(xₙ) and the desired value Φ(desired), which we will call the error, is below a chosen acceptably small tolerance, such as 10⁻⁵, 10⁻¹⁵, etc.:
x_{n+1} = x_n - \frac{\Phi(x_n) - \Phi(\text{desired})}{\varphi(x_n)},
where φ(xₙ) is the standard normal density evaluated at xₙ, i.e. the derivative Φ′(xₙ).
When the repeated computations converge to an error below the chosen acceptably small value, x will be the value needed to obtain a Φ(x) of the desired value, Φ(desired).
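The iteration above can be sketched in a few lines of Python; for simplicity this sketch evaluates Φ with `math.erf` rather than the Taylor expansion (the tolerance and starting point are arbitrary choices):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density, the derivative of Phi."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def inv_Phi(p, x0=0.0, tol=1e-12, max_iter=100):
    """Solve Phi(x) = p for x by Newton's method."""
    x = x0
    for _ in range(max_iter):
        err = Phi(x) - p
        if abs(err) < tol:
            break
        x -= err / phi(x)   # Newton step: x_{n+1} = x_n - (Phi(x_n) - p) / phi(x_n)
    return x

print(round(inv_Phi(0.975), 4))  # ≈ 1.96
```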

Standard deviation and coverage

About 68% of values drawn from a normal distribution are within one standard deviation from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This is known as the 68–95–99.7 rule, or the 3-sigma rule.
More precisely, the probability that a normal deviate lies in the range between μ − nσ and μ + nσ is given by
F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right).
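The rule's three familiar percentages follow directly from this formula:

```python
import math

# P(mu - n*sigma < X < mu + n*sigma) = erf(n / sqrt(2))
for n in range(1, 4):
    p = math.erf(n / math.sqrt(2.0))
    print(f"{n} sigma: {p:.4%}")
# prints: 1 sigma: 68.2689%, 2 sigma: 95.4500%, 3 sigma: 99.7300%
```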
To six decimal places, the values for n = 1, 2, and 3 are 0.682689, 0.954500, and 0.997300, respectively; the digit expansions of these constants are catalogued in the OEIS.

Quantile function

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:
\Phi^{-1}(p) = \sqrt{2}\, \operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).
For a normal random variable with mean μ and variance σ², the quantile function is
F^{-1}(p) = \mu + \sigma\, \Phi^{-1}(p) = \mu + \sigma\sqrt{2}\, \operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).
The quantile Φ⁻¹(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable X will exceed μ + z_p σ with probability 1 − p, and will lie outside the interval μ ± z_p σ with probability 2(1 − p). In particular, the quantile z₀.₉₇₅ is 1.96; therefore a normal random variable will lie outside the interval μ ± 1.96σ in only 5% of cases.
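Python's standard library computes these quantiles directly through `statistics.NormalDist` and its `inv_cdf` method:

```python
from statistics import NormalDist

std = NormalDist()           # mean 0, sigma 1
print(std.inv_cdf(0.975))    # ≈ 1.96, the familiar two-sided 95% critical value

# General quantile: x = mu + sigma * Phi^{-1}(p)
d = NormalDist(mu=100, sigma=15)
print(d.inv_cdf(0.975))      # = 100 + 15 * 1.9600 ≈ 129.4
```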
The following table gives the quantile z_p such that X will lie in the range μ ± z_p σ with a specified probability p. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal distributions. Note that the table shows \sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\!\left(\frac{p+1}{2}\right), not Φ⁻¹(p) as defined above.
p       z_p       p             z_p
0.80    1.2816    0.999         3.2905
0.90    1.6449    0.9999        3.8906
0.95    1.9600    0.99999       4.4172
0.98    2.3263    0.999999      4.8916
0.99    2.5758    0.9999999     5.3267
0.995   2.8070    0.99999999    5.7307
0.998   3.0902    0.999999999   6.1094

(quantiles z_p shown to four decimal places)

For small p, the quantile function has the useful asymptotic expansion
\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).
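A rough check of this expansion against the exact quantile, computed here with `statistics.NormalDist` (the function name is ours; the o(1) term means agreement improves only slowly as p shrinks):

```python
import math
from statistics import NormalDist

def probit_small_p(p):
    """Leading-order asymptotic for Phi^{-1}(p) as p -> 0."""
    t = math.log(1.0 / (p * p))   # ln(1/p^2)
    return -math.sqrt(t - math.log(t) - math.log(2 * math.pi))

for p in (1e-3, 1e-6, 1e-9):
    print(p, probit_small_p(p), NormalDist().inv_cdf(p))
```

For p = 10⁻³ the approximation is already within a few hundredths of the exact value, and the gap narrows as p decreases.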