Normal distribution
In probability theory, a normal distribution (or Gaussian distribution) is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}.$$
The parameter $\mu$ is the mean or expectation of the distribution, while the parameter $\sigma$ is its standard deviation. The variance of the distribution is $\sigma^2$. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes often have distributions that are nearly normal.
Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of normal deviates is a normal deviate. Many results and methods can be derived analytically in explicit form when the relevant variables are normally distributed.
A normal distribution is sometimes informally called a bell curve. However, many other distributions are bell-shaped.
Definitions
Standard normal distribution
The simplest case of a normal distribution is known as the standard normal distribution. This is the special case when $\mu = 0$ and $\sigma = 1$, and it is described by the probability density function
$$\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.$$
The factor $1/\sqrt{2\pi}$ in this expression ensures that the total area under the curve is equal to one. The factor $1/2$ in the exponent ensures that the distribution has unit variance, and therefore also unit standard deviation. This function is symmetric around $z = 0$, where it attains its maximum value $1/\sqrt{2\pi}$ and has inflection points at $z = +1$ and $z = -1$.
Authors differ on which normal distribution should be called the "standard" one. Carl Friedrich Gauss defined the standard normal as having variance $\sigma^2 = 1/2$, that is
$$\varphi(z) = \frac{e^{-z^2}}{\sqrt{\pi}}.$$
Stigler goes even further, defining the standard normal with variance $\sigma^2 = 1/(2\pi)$:
$$\varphi(z) = e^{-\pi z^2}.$$
General normal distribution
Every normal distribution is a version of the standard normal distribution whose domain has been stretched by a factor $\sigma$ (the standard deviation) and then translated by $\mu$ (the mean value):
$$f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x-\mu}{\sigma}\right).$$
The probability density must be scaled by $1/\sigma$ so that the integral is still 1.
If $Z$ is a standard normal deviate, then $X = \sigma Z + \mu$ will have a normal distribution with expected value $\mu$ and standard deviation $\sigma$. Conversely, if $X$ is a normal deviate with parameters $\mu$ and $\sigma^2$, then $Z = (X - \mu)/\sigma$ will have a standard normal distribution. This variate is called the standardized form of $X$.
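As a concrete illustration of this standardization, here is a minimal Python sketch (the helper names are ours, not standard library functions):

```python
import random

def standardize(x, mu, sigma):
    """Map an N(mu, sigma^2) deviate to a standard normal deviate."""
    return (x - mu) / sigma

def unstandardize(z, mu, sigma):
    """Map a standard normal deviate back to N(mu, sigma^2)."""
    return sigma * z + mu

z = random.gauss(0, 1)                  # standard normal deviate
x = unstandardize(z, mu=10, sigma=2)    # now distributed N(10, 4)
assert abs(standardize(x, 10, 2) - z) < 1e-12
```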
Notation
The probability density of the standard Gaussian distribution (the standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\phi$ (phi). The alternative form of the Greek letter phi, $\varphi$, is also used quite often. The normal distribution is often referred to as $N(\mu, \sigma^2)$ or $\mathcal{N}(\mu, \sigma^2)$. Thus when a random variable $X$ is distributed normally with mean $\mu$ and variance $\sigma^2$, one may write
$$X \sim \mathcal{N}(\mu, \sigma^2).$$
Alternative parameterizations
Some authors advocate using the precision $\tau$ as the parameter defining the width of the distribution, instead of the standard deviation $\sigma$ or the variance $\sigma^2$. The precision is normally defined as the reciprocal of the variance, $\tau = 1/\sigma^2$. The formula for the distribution then becomes
$$f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.$$
This choice is claimed to have advantages in numerical computations when $\sigma^2$ is very close to zero, and to simplify formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.
Alternatively, the reciprocal of the standard deviation, $\tau' = 1/\sigma$, might be defined as the precision, in which case the expression of the normal distribution becomes
$$f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2 (x-\mu)^2/2}.$$
According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.
Normal distributions form an exponential family with natural parameters $\theta_1 = \mu/\sigma^2$ and $\theta_2 = -1/(2\sigma^2)$, and natural statistics $x$ and $x^2$. The dual expectation parameters for the normal distribution are $\eta_1 = \mu$ and $\eta_2 = \mu^2 + \sigma^2$.
Cumulative distribution function
The cumulative distribution function of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$, is the integral
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.$$
The related error function $\operatorname{erf}(x)$ gives the probability of a random variable with normal distribution of mean 0 and variance 1/2 falling in the range $[-x, x]$; that is
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.$$
These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see the section on numerical approximations below.
The two functions are closely related, namely
$$\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right].$$
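Because Python's standard library exposes math.erf, the relation above yields a one-line evaluation of $\Phi$; the function name below is ours:

```python
import math

def std_normal_cdf(x):
    """Phi(x) via the identity Phi(x) = (1 + erf(x/sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(std_normal_cdf(0.0))     # 0.5
print(std_normal_cdf(1.96))    # ~0.975
```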
For a generic normal distribution with density $f$, mean $\mu$ and standard deviation $\sigma$, the cumulative distribution function is
$$F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right].$$
The complement of the standard normal CDF, $Q(x) = 1 - \Phi(x)$, is often called the Q-function, especially in engineering texts. It gives the probability that the value of a standard normal random variable $X$ will exceed $x$: $P(X > x)$. Other definitions of the $Q$-function, all of which are simple transformations of $\Phi$, are also used occasionally.
The graph of the standard normal CDF $\Phi$ has 2-fold rotational symmetry around the point $(0, 1/2)$; that is, $\Phi(-x) = 1 - \Phi(x)$. Its antiderivative (indefinite integral) is
$$\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C.$$
The CDF of the standard normal distribution can be expanded by integration by parts into a series:
$$\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \left[ x + \frac{x^3}{3} + \frac{x^5}{3 \cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots \right],$$
where $(2n+1)!!$ denotes the double factorial.
An asymptotic expansion of the CDF for large x can also be derived using integration by parts; see Error function#Asymptotic expansion.
Standard deviation and coverage
About 68% of values drawn from a normal distribution are within one standard deviation $\sigma$ of the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68-95-99.7 rule, or the 3-sigma rule. More precisely, the probability that a normal deviate lies in the range between $\mu - n\sigma$ and $\mu + n\sigma$ is given by
$$F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right).$$
To 12 significant figures, the values for $n = 1, 2, \ldots, 6$ are:

| $n$ | $p = F(\mu + n\sigma) - F(\mu - n\sigma)$ | $1 - p$ |
|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 |
| 2 | 0.954499736104 | 0.045500263896 |
| 3 | 0.997300203937 | 0.002699796063 |
| 4 | 0.999936657516 | 0.000063342484 |
| 5 | 0.999999426697 | 0.000000573303 |
| 6 | 0.999999998027 | 0.000000001973 |
For large $n$, one can use the approximation $1 - p \approx \frac{e^{-n^2/2}}{n\sqrt{\pi/2}}$.

Quantile function
The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:
$$\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).$$
For a normal random variable with mean $\mu$ and variance $\sigma^2$, the quantile function is
$$F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).$$
The quantile $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_p$. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable $X$ will exceed $\mu + z_p\sigma$ with probability $1 - p$, and will lie outside the interval $\mu \pm z_p\sigma$ with probability $2(1 - p)$. In particular, the quantile $z_{0.975}$ is 1.96; therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases.

The following table gives the quantile $z_p$ such that $X$ will lie in the range $\mu \pm z_p\sigma$ with a specified probability $p$. These values are useful to determine the tolerance interval for sample averages and other statistical estimators with normal (or asymptotically normal) distributions. Note that this table shows $\sqrt{2}\,\operatorname{erf}^{-1}(p)$, not $\Phi^{-1}(p)$ as defined above.

| $p$ | $z_p$ |
|---|---|
| 0.80 | 1.281551565545 |
| 0.90 | 1.644853626951 |
| 0.95 | 1.959963984540 |
| 0.98 | 2.326347874041 |
| 0.99 | 2.575829303549 |
| 0.995 | 2.807033768344 |
| 0.998 | 3.090232306168 |
| 0.999 | 3.290526731492 |
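For a quick numerical check of these quantiles, Python's standard library statistics.NormalDist (available since Python 3.8) exposes the quantile function as inv_cdf; a minimal sketch:

```python
from statistics import NormalDist

std = NormalDist(mu=0.0, sigma=1.0)
print(std.inv_cdf(0.975))   # ~1.959964, the familiar 1.96 quantile

# Quantile of a general N(mu, sigma^2) via F^{-1}(p) = mu + sigma * Phi^{-1}(p)
mu, sigma, p = 100.0, 15.0, 0.99
print(mu + sigma * std.inv_cdf(p))
print(NormalDist(mu, sigma).inv_cdf(p))  # same value
```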
For small $p$, the quantile function has the useful asymptotic expansion
$$\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).$$

Properties
The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal density is practically zero when the value $x$ lies more than a few standard deviations away from the mean. Therefore, it may not be an appropriate model when one expects a significant fraction of outliers (values that lie many standard deviations away from the mean), and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.

Symmetries and derivatives
The normal distribution with density $f(x)$ (mean $\mu$ and standard deviation $\sigma > 0$) has the following properties:
- It is symmetric around the point $x = \mu$, which is at the same time the mode, the median and the mean of the distribution.
- It is unimodal: its first derivative is positive for $x < \mu$, negative for $x > \mu$, and zero only at $x = \mu$.
- Its first derivative is $f'(x) = -\frac{x - \mu}{\sigma^2}\, f(x)$.
- Its density has two inflection points (where the second derivative of $f$ is zero and changes sign), located one standard deviation away from the mean, namely at $x = \mu - \sigma$ and $x = \mu + \sigma$.
- Its density is log-concave.
- Its density is infinitely differentiable.

Moments
The plain and absolute moments of a variable $X$ are the expected values of $X^p$ and $|X|^p$, respectively.
If $X$ has a normal distribution with mean $\mu$, these moments exist and are finite for any $p$ whose real part is greater than −1. For any non-negative integer $p$, the plain central moments are:
$$E\!\left[(X - \mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p (p - 1)!! & \text{if } p \text{ is even.} \end{cases}$$
Here $n!!$ denotes the double factorial, that is, the product of all numbers from $n$ to 1 that have the same parity as $n$.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$,
$$E\!\left[|X - \mu|^p\right] = \sigma^p (p - 1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even,} \end{cases} \qquad = \ \sigma^p \cdot \frac{2^{p/2}\,\Gamma\!\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}.$$
The last formula is valid also for any non-integer $p > -1$. When the mean $\mu = 0$, the plain and absolute moments can be expressed in terms of the confluent hypergeometric functions ${}_1F_1$ and $U$. These expressions remain valid even if $p$ is not an integer. See also generalized Hermite polynomials.
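A small Monte Carlo check of the even-order central moment formula $\sigma^p (p-1)!!$, assuming NumPy is available (the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

def double_factorial(n):
    """n!! = n * (n-2) * (n-4) * ... down to 1 or 2."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

for p in (2, 4, 6):
    theoretical = sigma**p * double_factorial(p - 1)
    empirical = np.mean((x - mu) ** p)
    print(p, theoretical, round(empirical, 3))
```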
The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a, b]$ is given by
$$E[X \mid a < X < b] = \mu - \sigma^2\, \frac{f(b) - f(a)}{F(b) - F(a)},$$
where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b = \infty$ this is known as the inverse Mills ratio. Note that above, the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, so here we have $\sigma^2$ instead of $\sigma$.

Fourier transform and characteristic function
The Fourier transform of a normal density $f$ with mean $\mu$ and standard deviation $\sigma$ is
$$\hat f(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2},$$
where $i$ is the imaginary unit. If the mean $\mu = 0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation $1/\sigma$. In particular, the standard normal distribution is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the characteristic function $\varphi_X(t)$ of that variable, which is defined as the expected value of $e^{itX}$, as a function of the real variable $t$. This definition can be analytically extended to a complex-valued variable $t$. The relation between both is:
$$\varphi_X(t) = \hat f(-t).$$

Moment and cumulant generating functions
The moment generating function of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$. For a normal distribution with density $f$, mean $\mu$ and standard deviation $\sigma$, the moment generating function exists and is equal to
$$M(t) = E\!\left[e^{tX}\right] = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.$$
The cumulant generating function is the logarithm of the moment generating function, namely
$$g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2.$$
Since this is a quadratic polynomial in $t$, only the first two cumulants are nonzero, namely the mean $\mu$ and the variance $\sigma^2$.

Stein operator and class
Within Stein's method, the Stein operator and class of a random variable $X \sim \mathcal{N}(\mu, \sigma^2)$ are $\mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu)f(x)$ and the class of all absolutely continuous functions $f$ with finite expectation $E[|f'(X)|]$.

Zero-variance limit
In the limit when $\sigma$ tends to zero, the probability density $f(x)$ eventually tends to zero at any $x \neq \mu$, but grows without limit if $x = \mu$, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when $\sigma = 0$. However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's delta function $\delta$ translated by the mean $\mu$, that is $f(x) = \delta(x - \mu)$. Its CDF is then the Heaviside step function translated by the mean $\mu$, namely
$$F(x) = \begin{cases} 0 & \text{if } x < \mu, \\ 1 & \text{if } x \geq \mu. \end{cases}$$

Maximum entropy
Of all probability distributions over the reals with a specified mean $\mu$ and variance $\sigma^2$, the normal distribution $N(\mu, \sigma^2)$ is the one with maximum entropy. If $X$ is a continuous random variable with probability density $f(x)$, then the entropy of $X$ is defined as
$$H(X) = -\int_{-\infty}^{\infty} f(x)\,\ln f(x)\, dx,$$
where $f(x)\ln f(x)$ is understood to be zero whenever $f(x) = 0$. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus. A function with two Lagrange multipliers is defined:
$$L = -\int f(x)\ln f(x)\,dx + \lambda_0\left(\int f(x)\,dx - 1\right) + \lambda\left(\int f(x)(x-\mu)^2\,dx - \sigma^2\right),$$
where $f(x)$ is, for now, regarded as some density function with mean $\mu$ and standard deviation $\sigma$. At maximum entropy, a small variation $\delta f(x)$ about $f(x)$ will produce a variation $\delta L$ about $L$ which is equal to 0:
$$0 = \delta L = \int \delta f(x)\left(-\ln f(x) - 1 + \lambda_0 + \lambda (x-\mu)^2\right) dx.$$
Since this must hold for any small $\delta f(x)$, the term in brackets must be zero, and solving for $f(x)$ yields:
$$f(x) = e^{\lambda_0 - 1 + \lambda(x-\mu)^2}.$$
Using the constraint equations to solve for $\lambda_0$ and $\lambda$ yields the density of the normal distribution:
$$f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$
The entropy of a normal distribution is equal to
$$H(X) = \tfrac{1}{2}\ln\!\left(2\pi e\sigma^2\right),$$
which the sketch below checks numerically.
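The closed-form entropy $\tfrac{1}{2}\ln(2\pi e\sigma^2)$ can be compared against direct numerical integration of $-\int f \ln f$; a minimal sketch assuming NumPy is available:

```python
import numpy as np

mu, sigma = 1.0, 0.5
xs = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
f = np.exp(-((xs - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

numeric = -np.trapz(f * np.log(f), xs)                 # -∫ f ln f dx
closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(numeric, closed_form)                             # agree closely
```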
Operations on normal deviates
The family of normal distributions is closed under linear transformations: if $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, then the variable $Y = aX + b$, for any real numbers $a$ and $b$, is also normally distributed, with mean $a\mu + b$ and standard deviation $|a|\sigma$.

Also if $X_1$ and $X_2$ are two independent normal random variables, with means $\mu_1$, $\mu_2$ and standard deviations $\sigma_1$, $\sigma_2$, then their sum $X_1 + X_2$ will also be normally distributed, with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$.

In particular, if $X$ and $Y$ are independent normal deviates with zero mean and variance $\sigma^2$, then $X + Y$ and $X - Y$ are also independent and normally distributed, with zero mean and variance $2\sigma^2$. This is a special case of the polarization identity.

Also, if $X_1$, $X_2$ are two independent normal deviates with mean $\mu$ and deviation $\sigma$, and $a$, $b$ are arbitrary real numbers, then the variable
$$X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2 + b^2}} + \mu$$
is also normally distributed with mean $\mu$ and deviation $\sigma$. It follows that the normal distribution is stable. More generally, any linear combination of independent normal deviates is a normal deviate.

Infinite divisibility and Cramér's theorem
For any positive integer $n$, any normal distribution with mean $\mu$ and variance $\sigma^2$ is the distribution of the sum of $n$ independent normal deviates, each with mean $\mu/n$ and variance $\sigma^2/n$. This property is called infinite divisibility.

Conversely, if $X_1$ and $X_2$ are independent random variables and their sum $X_1 + X_2$ has a normal distribution, then both $X_1$ and $X_2$ must be normal deviates. This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.

Bernstein's theorem
Bernstein's theorem states that if $X$ and $Y$ are independent and $X + Y$ and $X - Y$ are also independent, then both $X$ and $Y$ must necessarily have normal distributions. More generally, if $X_1, \ldots, X_n$ are independent random variables, then two distinct linear combinations $\sum a_k X_k$ and $\sum b_k X_k$ will be independent if and only if all $X_k$ are normal and $\sum a_k b_k \sigma_k^2 = 0$, where $\sigma_k^2$ denotes the variance of $X_k$.

Related distributions

Central limit theorem
The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, suppose $X_1, \ldots, X_n$ are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance $\sigma^2$, and let $Z$ be their mean scaled by $\sqrt{n}$:
$$Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right).$$
Then, as $n$ increases, the probability distribution of $Z$ will tend to the normal distribution with zero mean and variance $\sigma^2$. A simulation illustrating this convergence is sketched after the examples below.

The theorem can be extended to variables $X_i$ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions. The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:
- The binomial distribution $B(n, p)$ is approximately normal with mean $np$ and variance $np(1-p)$ for large $n$ and for $p$ not too close to 0 or 1.
- The Poisson distribution with parameter $\lambda$ is approximately normal with mean $\lambda$ and variance $\lambda$, for large values of $\lambda$.
- The chi-squared distribution $\chi^2(k)$ is approximately normal with mean $k$ and variance $2k$, for large $k$.
- The Student's t-distribution $t(\nu)$ is approximately standard normal when $\nu$ is large.
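A quick simulation of the theorem's statement, using centered uniform summands (zero mean, variance 1/12) and assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 1000, 100_000

# Uniform(-0.5, 0.5): zero mean, variance 1/12
samples = rng.uniform(-0.5, 0.5, size=(trials, n))
z = np.sqrt(n) * samples.mean(axis=1)   # the scaled means Z

print(z.mean())   # ~0
print(z.var())    # ~1/12 ≈ 0.0833, the variance of the summands
# Compare the empirical 97.5% quantile with that of N(0, 1/12):
print(np.quantile(z, 0.975), 1.96 * np.sqrt(1 / 12))
```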
A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.

Operations on a single random variable
If $X$ is distributed normally with mean $\mu$ and variance $\sigma^2$, then:
- The exponential of $X$ is distributed log-normally.
- The absolute value of $X$ has a folded normal distribution; if $\mu = 0$ this is known as the half-normal distribution.
- The absolute value of the normalized residual, $|X - \mu|/\sigma$, has a chi distribution with one degree of freedom.
- The square of $X/\sigma$ has a noncentral chi-squared distribution with one degree of freedom; if $\mu = 0$, the distribution is simply chi-squared.
Extensions
The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate case. All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.
One such extension is the split normal (two-piece normal) distribution, whose density follows a normal curve with standard deviation $\sigma_1$ to the left of the mode and a normal curve with standard deviation $\sigma_2$ to the right, where $\mu$ is the mode and $\sigma_1$ and $\sigma_2$ are the standard deviations of the distribution to the left and right of it respectively. The mean, variance and third central moment of this distribution have been determined in closed form.

One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:
- the Pearson distribution, a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values;
- the generalized normal distribution (exponential power distribution), whose density tails can be made heavier or lighter than those of the normal.
Estimation of parameters
It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample $(x_1, \ldots, x_n)$ from a normal $N(\mu, \sigma^2)$ population, we would like to learn the approximate values of the parameters $\mu$ and $\sigma^2$. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:
$$\ln L(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.$$
Taking derivatives with respect to $\mu$ and $\sigma^2$ and solving the resulting system of first order conditions yields the maximum likelihood estimates:
$$\hat\mu = \bar x = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar x)^2.$$

Sample mean
The estimator $\hat\mu$ is called the sample mean, since it is the arithmetic mean of all observations. The statistic $\bar x$ is complete and sufficient for $\mu$, and therefore by the Lehmann–Scheffé theorem, $\hat\mu$ is the uniformly minimum variance unbiased (UMVU) estimator. In finite samples it is distributed normally:
$$\hat\mu \sim N(\mu, \sigma^2/n).$$
The variance of this estimator is equal to the $\mu\mu$-element of the inverse Fisher information matrix. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of $\hat\mu$ is proportional to $1/\sqrt{n}$; that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, $\hat\mu$ is consistent, that is, it converges in probability to $\mu$ as $n \to \infty$. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:
$$\sqrt{n}\,(\hat\mu - \mu) \ \xrightarrow{d}\ N(0, \sigma^2).$$

Sample variance
The estimator $\hat\sigma^2$ is called the sample variance, since it is the variance of the sample. In practice, another estimator is often used instead of $\hat\sigma^2$. This other estimator is denoted $s^2$, and is also called the sample variance, which represents a certain ambiguity in terminology; its square root $s$ is called the sample standard deviation. The estimator $s^2$ differs from $\hat\sigma^2$ by having $(n - 1)$ instead of $n$ in the denominator (the so-called Bessel's correction):
$$s^2 = \frac{n}{n-1}\,\hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar x)^2.$$
The difference between $s^2$ and $\hat\sigma^2$ becomes negligibly small for large $n$. In finite samples however, the motivation behind the use of $s^2$ is that it is an unbiased estimator of the underlying parameter $\sigma^2$, whereas $\hat\sigma^2$ is biased. Also, by the Lehmann–Scheffé theorem the estimator $s^2$ is uniformly minimum variance unbiased, which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator $\hat\sigma^2$ is "better" than $s^2$ in terms of the mean squared error criterion. In finite samples both $s^2$ and $\hat\sigma^2$ have a scaled chi-squared distribution with $(n - 1)$ degrees of freedom:
$$s^2 \sim \frac{\sigma^2}{n-1}\,\chi^2_{n-1}, \qquad \hat\sigma^2 \sim \frac{\sigma^2}{n}\,\chi^2_{n-1}.$$
The first of these expressions shows that the variance of $s^2$ is equal to $2\sigma^4/(n-1)$, which is slightly greater than the $\sigma\sigma$-element of the inverse Fisher information matrix, $2\sigma^4/n$. Thus, $s^2$ is not an efficient estimator for $\sigma^2$, and moreover, since $s^2$ is UMVU, we can conclude that the finite-sample efficient estimator for $\sigma^2$ does not exist.

Applying the asymptotic theory, both estimators $s^2$ and $\hat\sigma^2$ are consistent, that is, they converge in probability to $\sigma^2$ as the sample size $n \to \infty$. The two estimators are also both asymptotically normal:
$$\sqrt{n}\,(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}\,(s^2 - \sigma^2) \ \xrightarrow{d}\ N(0, 2\sigma^4).$$
In particular, both estimators are asymptotically efficient for $\sigma^2$.
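The closed-form estimates above are one line each in code; a minimal sketch assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(5.0, 3.0, size=10_000)   # sample from N(5, 9)

n = data.size
mu_hat = data.mean()                            # sample mean (MLE)
sigma2_hat = np.sum((data - mu_hat) ** 2) / n   # MLE: divides by n, biased
s2 = np.sum((data - mu_hat) ** 2) / (n - 1)     # unbiased sample variance

print(mu_hat, sigma2_hat, s2)   # ~5, ~9, ~9
```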
Confidence intervals
By Cochran's theorem, for normal distributions the sample mean $\hat\mu$ and the sample variance $s^2$ are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution.

The independence between $\hat\mu$ and $s$ can be employed to construct the so-called t-statistic:
$$t = \frac{\hat\mu - \mu}{s/\sqrt{n}} \sim t_{n-1}.$$
This quantity $t$ has the Student's t-distribution with $(n - 1)$ degrees of freedom, and it is an ancillary statistic (independent of the values of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for $\mu$; similarly, inverting the $\chi^2$ distribution of the statistic $s^2$ will give us the confidence interval for $\sigma^2$:
$$\mu \in \left[\hat\mu - t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}},\ \ \hat\mu + t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right],$$
$$\sigma^2 \in \left[\frac{(n-1)\,s^2}{\chi^2_{n-1,1-\alpha/2}},\ \ \frac{(n-1)\,s^2}{\chi^2_{n-1,\alpha/2}}\right],$$
where $t_{k,p}$ and $\chi^2_{k,p}$ are the $p$th quantiles of the $t$- and $\chi^2$-distributions respectively. These confidence intervals are of the confidence level $1 - \alpha$, meaning that the true values $\mu$ and $\sigma^2$ fall outside of these intervals with probability $\alpha$. In practice people usually take $\alpha = 5\%$, resulting in the 95% confidence intervals. Approximate formulas can be derived from the asymptotic distributions of $\hat\mu$ and $s^2$; these become valid for large values of $n$, and are more convenient for manual calculation since the standard normal quantiles $z_{\alpha/2}$ do not depend on $n$. In particular, the most popular value of $\alpha = 5\%$ results in $|z_{0.025}| = 1.96$.

Normality tests
Normality tests assess the likelihood that the given data set $\{x_1, \ldots, x_n\}$ comes from a normal distribution. Typically the null hypothesis $H_0$ is that the observations are distributed normally with unspecified mean $\mu$ and variance $\sigma^2$, versus the alternative $H_a$ that the distribution is arbitrary. Many tests have been devised for this problem; the more prominent among them include diagnostic plots such as the Q-Q plot, moment-based tests such as D'Agostino's K-squared test and the Jarque–Bera test, tests based on the empirical distribution function such as the Kolmogorov–Smirnov (Lilliefors) and Anderson–Darling tests, and the Shapiro–Wilk test.
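As an illustration, SciPy (assumed available; it is not part of this article's subject matter) implements several of these tests; a minimal sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
normal_data = rng.normal(0.0, 1.0, size=200)
skewed_data = rng.exponential(1.0, size=200)

# Shapiro-Wilk: a small p-value is evidence against normality
print(stats.shapiro(normal_data).pvalue)   # typically large
print(stats.shapiro(skewed_data).pvalue)   # typically tiny

# D'Agostino's K^2 test (moment-based), for comparison
print(stats.normaltest(normal_data).pvalue)
```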
Bayesian analysis of the normal distribution
Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered, so a few auxiliary formulas are developed first.

Sum of two quadratics

Scalar form
The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious:
$$a(x - y)^2 + b(x - z)^2 = (a + b)\left(x - \frac{ay + bz}{a + b}\right)^2 + \frac{ab}{a + b}\,(y - z)^2.$$
This equation rewrites the sum of two quadratics in $x$ by expanding the squares, grouping the terms in $x$, and completing the square. Note the following about the constant factors attached to some of the terms:
- The factor $\frac{ay + bz}{a + b}$ has the form of a weighted average of $y$ and $z$.
- The factor $\frac{ab}{a + b} = \frac{1}{\frac{1}{a} + \frac{1}{b}}$ shows that this factor arises from a situation where the reciprocals of $a$ and $b$ add directly; it is one-half the harmonic mean of $a$ and $b$.
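The identity is easy to check symbolically, assuming SymPy is available:

```python
import sympy as sp

x, y, z, a, b = sp.symbols('x y z a b')
lhs = a * (x - y) ** 2 + b * (x - z) ** 2
center = (a * y + b * z) / (a + b)
rhs = (a + b) * (x - center) ** 2 + (a * b / (a + b)) * (y - z) ** 2
print(sp.simplify(lhs - rhs))   # prints 0
```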
Vector form
A similar formula can be written for the sum of two vector quadratics: if $\mathbf{x}$, $\mathbf{y}$, $\mathbf{z}$ are vectors of length $k$, and $A$ and $B$ are symmetric, invertible matrices of size $k \times k$, then
$$(\mathbf{y} - \mathbf{x})' A (\mathbf{y} - \mathbf{x}) + (\mathbf{x} - \mathbf{z})' B (\mathbf{x} - \mathbf{z}) = (\mathbf{x} - \mathbf{c})'(A + B)(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'\left(A^{-1} + B^{-1}\right)^{-1}(\mathbf{y} - \mathbf{z}),$$
where
$$\mathbf{c} = (A + B)^{-1}(A\mathbf{y} + B\mathbf{z}).$$
Note that the form $\mathbf{x}' A \mathbf{x}$ is called a quadratic form and is a scalar:
$$\mathbf{x}' A \mathbf{x} = \sum_{i,j} a_{ij}\, x_i x_j.$$
In other words, it sums up all possible combinations of products of pairs of elements from $\mathbf{x}$, with a separate coefficient for each. In addition, since $x_i x_j = x_j x_i$, only the sum $a_{ij} + a_{ji}$ matters for any off-diagonal elements of $A$, and there is no loss of generality in assuming that $A$ is symmetric. Furthermore, if $A$ is symmetric, then the form $\mathbf{x}' A \mathbf{y} = \mathbf{y}' A \mathbf{x}$.

Sum of differences from the mean
Another useful formula is as follows:
$$\sum_{i=1}^{n}(x_i - \mu)^2 = \sum_{i=1}^{n}(x_i - \bar x)^2 + n(\bar x - \mu)^2,$$
where $\bar x = \frac{1}{n}\sum_{i=1}^{n} x_i$.

With known variance
For a set of i.i.d. normally distributed data points $\mathbf{X}$ of size $n$ where each individual point $x$ follows $x \sim N(\mu, \sigma^2)$ with known variance $\sigma^2$, the conjugate prior distribution of the mean is also normally distributed. This can be shown more easily by rewriting the variance as the precision, i.e. using $\tau = 1/\sigma^2$. Then if $x \sim N(\mu, 1/\tau)$ and the prior is $\mu \sim N(\mu_0, 1/\tau_0)$, we proceed as follows.

First, the likelihood function is (using the formula above for the sum of differences from the mean):
$$p(\mathbf{X} \mid \mu, \tau) = \prod_{i=1}^{n} \sqrt{\frac{\tau}{2\pi}}\, e^{-\frac{\tau}{2}(x_i - \mu)^2} = \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\!\left(-\frac{\tau}{2}\left[\sum_{i=1}^{n}(x_i - \bar x)^2 + n(\bar x - \mu)^2\right]\right).$$
Then, the posterior satisfies
$$p(\mu \mid \mathbf{X}) \propto p(\mathbf{X} \mid \mu)\, p(\mu) \propto \exp\!\left(-\frac{n\tau}{2}(\bar x - \mu)^2 - \frac{\tau_0}{2}(\mu - \mu_0)^2\right).$$
In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving $\mu$. The result is the kernel of a normal distribution, with mean $\frac{n\tau\bar x + \tau_0\mu_0}{n\tau + \tau_0}$ and precision $n\tau + \tau_0$, i.e.
$$\mu \mid \mathbf{X} \sim N\!\left(\frac{n\tau\bar x + \tau_0\mu_0}{n\tau + \tau_0},\ \frac{1}{n\tau + \tau_0}\right).$$
This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:
$$\tau_{\text{post}} = \tau_0 + n\tau, \qquad \mu_{\text{post}} = \frac{n\tau\bar x + \tau_0\mu_0}{n\tau + \tau_0}.$$
That is, to combine $n$ data points with total precision of $n\tau$ and mean of values $\bar x$, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: in the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision: the posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the uglier formulas
$$\sigma^2_{\text{post}} = \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}, \qquad \mu_{\text{post}} = \sigma^2_{\text{post}}\left(\frac{n\bar x}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\right).$$
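In code, the known-variance update is just the precision arithmetic described above; a minimal sketch (the variable names are ours), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
tau = 1.0 / 4.0          # known data precision (sigma^2 = 4)
mu0, tau0 = 0.0, 0.01    # vague prior N(mu0, 1/tau0)

data = rng.normal(7.0, 2.0, size=100)
n, xbar = data.size, data.mean()

tau_post = tau0 + n * tau                            # precisions add
mu_post = (tau0 * mu0 + n * tau * xbar) / tau_post   # precision-weighted mean
print(mu_post, 1.0 / tau_post)   # posterior mean near 7, small variance
```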
With known mean
For a set of i.i.d. normally distributed data points $\mathbf{X}$ of size $n$ where each individual point $x$ follows $x \sim N(\mu, \sigma^2)$ with known mean $\mu$, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for $\sigma^2$ is as follows:
$$\sigma^2 \sim \text{Scale-inv-}\chi^2(\nu_0, \sigma_0^2).$$
The likelihood function from above, written in terms of the variance, is:
$$p(\mathbf{X} \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} e^{-\frac{S}{2\sigma^2}}, \qquad \text{where } S = \sum_{i=1}^{n}(x_i - \mu)^2.$$
Multiplying the prior by the likelihood and dropping constant factors, the posterior is also a scaled inverse chi-squared distribution:
$$\sigma^2 \mid \mathbf{X} \sim \text{Scale-inv-}\chi^2\!\left(\nu_0 + n,\ \frac{\nu_0\sigma_0^2 + S}{\nu_0 + n}\right),$$
or equivalently, reparameterizing in terms of an inverse gamma distribution:
$$\sigma^2 \mid \mathbf{X} \sim \text{Inv-Gamma}\!\left(\frac{\nu_0 + n}{2},\ \frac{\nu_0\sigma_0^2 + S}{2}\right).$$

With unknown mean and unknown variance
For a set of i.i.d. normally distributed data points $\mathbf{X}$ of size $n$ where each individual point $x$ follows $x \sim N(\mu, \sigma^2)$ with unknown mean $\mu$ and unknown variance $\sigma^2$, a combined conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows: from the analysis of the case with unknown mean but known variance, the conditional prior over the mean given the variance should be normal, with variance proportional to $\sigma^2$ (so that the prior can be interpreted as $n_0$ pseudo-observations); and from the analysis of the case with unknown variance but known mean, the marginal prior over the variance should be inverse gamma (equivalently, scaled inverse chi-squared), interpretable as $\nu_0$ pseudo-observations with sample variance $\sigma_0^2$.

The update equations can be derived, and look as follows:
$$\mu_n = \frac{n_0\mu_0 + n\bar x}{n_0 + n}, \qquad n_n = n_0 + n, \qquad \nu_n = \nu_0 + n,$$
$$\nu_n\sigma_n^2 = \nu_0\sigma_0^2 + \sum_{i=1}^{n}(x_i - \bar x)^2 + \frac{n_0 n}{n_0 + n}\,(\bar x - \mu_0)^2.$$
The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for $\nu_n\sigma_n^2$ is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between the prior mean and the data mean.

The prior distributions are
$$p(\mu \mid \sigma^2) \sim N(\mu_0, \sigma^2/n_0), \qquad p(\sigma^2) \sim \text{Scale-inv-}\chi^2(\nu_0, \sigma_0^2).$$
Therefore, the joint prior is
$$p(\mu, \sigma^2) = p(\mu \mid \sigma^2)\, p(\sigma^2).$$
The likelihood function from the section above with known variance is:
$$p(\mathbf{X} \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\!\left(-\frac{1}{2\sigma^2}\left[\sum_{i=1}^{n}(x_i - \bar x)^2 + n(\bar x - \mu)^2\right]\right).$$
Combining the likelihood with the joint prior, using the sum-of-two-quadratics formula, and dropping all factors not involving $\mu$ or $\sigma^2$, the posterior is
$$p(\mu, \sigma^2 \mid \mathbf{X}) = N\!\left(\mu \mid \mu_n, \sigma^2/n_n\right) \cdot \text{Scale-inv-}\chi^2\!\left(\sigma^2 \mid \nu_n, \sigma_n^2\right).$$
In other words, the posterior distribution has the form of a product of a normal distribution over $\mu$ times an inverse gamma distribution over $\sigma^2$, with parameters that are the same as the update equations above.

Occurrence and applications
The occurrence of the normal distribution in practical problems can be loosely classified into four categories: exactly normal distributions; approximately normal laws, for example when such approximation is justified by the central limit theorem; distributions modeled as normal (the normal distribution being the distribution with maximum entropy for a given mean and variance); and regression problems, where the normal distribution is found after systematic effects have been modeled sufficiently well.
Computational methods

Generating values from normal distribution
In computer simulations, especially in applications of the Monte Carlo method, it is often desirable to generate values that are normally distributed. The algorithms described here all generate standard normal deviates, since an $X \sim N(\mu, \sigma^2)$ can be generated as $X = \mu + \sigma Z$, where $Z$ is standard normal. All these algorithms rely on the availability of a random number generator $U$ capable of producing uniform random variates. Among the most common methods are the inverse transform method (applying the probit function to a uniform variate), the Box–Muller transform, its faster Marsaglia polar variant, and the ziggurat algorithm; the Box–Muller method is sketched below.
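As an illustration of one of these methods, here is a minimal sketch of the Box–Muller transform, which turns two independent uniform variates into two independent standard normal deviates (the function name is ours):

```python
import math
import random

def box_muller():
    """Return two independent standard normal deviates from two uniforms."""
    u1 = 1.0 - random.random()   # in (0, 1], so log(u1) is safe
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z0 = r * math.cos(2.0 * math.pi * u2)
    z1 = r * math.sin(2.0 * math.pi * u2)
    return z0, z1

z, _ = box_muller()
x = 10.0 + 2.0 * z   # an N(10, 4) deviate via X = mu + sigma * Z
print(x)
```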
Numerical approximations for the normal CDF and quantile function
The values $\Phi(x)$ may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.

Shore introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting $p = \Phi(z)$, the simplest approximation for the quantile function is:
$$z = \Phi^{-1}(p) \approx 5.5556\left[1 - \left(\frac{1-p}{p}\right)^{0.1186}\right], \qquad p \ge 1/2.$$
This approximation delivers for $z$ a maximum absolute error of 0.026. For $p < 1/2$, replace $p$ by $1 - p$ and change the sign. Another approximation, somewhat less accurate, is the single-parameter approximation:
$$z \approx -0.4115\left\{\frac{1-p}{p} + \ln\frac{1-p}{p} - 1\right\}, \qquad p \ge 1/2.$$
The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by
$$L(z) = \int_{z}^{\infty} (u - z)\,\varphi(u)\, du = \int_{z}^{\infty} \left[1 - \Phi(u)\right] du.$$
This approximation is particularly accurate for the right far-tail. Highly accurate approximations for the CDF, based on Response Modeling Methodology, are shown in Shore. Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, small relative error on the whole domain for the CDF and the quantile function as well is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.

History

Development
Some authors attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738 published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of $(a + b)^n$. De Moivre proved that the middle term in this expansion has the approximate magnitude of $2/\sqrt{2\pi n}$, and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is $-\frac{2\ell\ell}{n}$." Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.

Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares. In that year he published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium", where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used $M, M', M'', \ldots$ to denote the measurements of some unknown quantity $V$, and sought the "most probable" estimator of that quantity: the one that maximizes the probability of obtaining the observed experimental results. In his notation, $\varphi\Delta$ is the probability law of the measurement errors of magnitude $\Delta$. Not knowing what the function $\varphi$ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values. Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:
$$\varphi\Delta = \frac{h}{\sqrt{\pi}}\, e^{-hh\Delta\Delta},$$
where $h$ is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method. Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.
It was Laplace who first posed the problem of aggregating several observations in 1774, although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral $\int e^{-t^2}\, dt = \sqrt{\pi}$ in 1782, providing the normalization constant for the normal distribution. Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.

It is of interest to note that in 1809 the Irish-American mathematician Adrain published two derivations of the normal probability law, simultaneously and independently from Gauss. His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe.

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena: "The number of particles whose velocity, resolved in a certain direction, lies between $x$ and $x + dx$ is
$$\mathrm{N}\, \frac{1}{\alpha\sqrt{\pi}}\, e^{-\frac{x^2}{\alpha^2}}\, dx.\text{"}$$

Naming
Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, the Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual". However, by the end of the 19th century some authors had started using the name normal distribution, where the word "normal" was used as an adjective; the term is now seen as a reflection of the fact that this distribution was seen as typical, common, and thus "normal". Peirce once defined "normal" thus: "...the 'normal' is not the average of what actually occurs, but of what would, in the long run, occur under certain circumstances." Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation $\sigma$ as in modern notation. Soon after this, in the year 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:
$$df = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - m)^2}{2\sigma^2}}\, dx.$$
The term "standard normal", which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel, "Introduction to mathematical statistics", and A.M. Mood, "Introduction to the theory of statistics".

When the name is used, the "Gaussian distribution" was named after Carl Friedrich Gauss, who introduced the distribution in 1809 as a way of rationalizing the method of least squares, as outlined above. Among English speakers, both "normal distribution" and "Gaussian distribution" are in common use, with different terms preferred by different communities.