Binomial distribution


In probability theory and statistics, the binomial distribution with parameters $n$ and $p$ is the discrete probability distribution of the number of successes in a sequence of $n$ independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability $p$) or failure (with probability $q = 1 - p$). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process. For a single trial, that is, when $n = 1$, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance.
The binomial distribution is frequently used to model the number of successes in a sample of size $n$ drawn with replacement from a population of size $N$. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for $N$ much larger than $n$, the binomial distribution remains a good approximation, and is widely used.

Definitions

Probability mass function

If the random variable $X$ follows the binomial distribution with parameters $n \in \mathbb{N}$ and $p \in [0, 1]$, we write $X \sim B(n, p)$. The probability of getting exactly $k$ successes in $n$ independent Bernoulli trials is given by the probability mass function:
$$f(k, n, p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$
for $k = 0, 1, 2, \dots, n$, where
$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$
is the binomial coefficient. The formula can be understood as follows: $p^k (1-p)^{n-k}$ is the probability of obtaining one particular sequence of $n$ independent Bernoulli trials in which $k$ trials are "successes" and the remaining $n - k$ trials are "failures". Since the trials are independent with probabilities remaining constant between them, any sequence of $n$ trials with $k$ successes has the same probability of being achieved. There are $\binom{n}{k}$ such sequences, since the binomial coefficient counts the number of ways to choose the positions of the $k$ successes among the $n$ trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them must be added $\binom{n}{k}$ times, hence $\Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$.
In creating reference tables for binomial distribution probability, usually the table is filled in up to $n/2$ values. This is because for $k > n/2$, the probability can be calculated by its complement as
$$f(k, n, p) = f(n - k, n, 1 - p).$$
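As a numerical illustration, the mass function and the complement identity above can be checked directly. The following Python sketch (with arbitrarily chosen $n$ and $p$) evaluates the explicit formula and compares it with scipy.stats.binom.pmf:

    from math import comb
    from scipy.stats import binom

    def binom_pmf(k, n, p):
        """Pr(X = k) for X ~ B(n, p), evaluated from the explicit formula."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 10, 0.35                      # illustrative values
    for k in range(n + 1):
        # agreement with the library implementation
        assert abs(binom_pmf(k, n, p) - binom.pmf(k, n, p)) < 1e-12
        # complement identity: f(k, n, p) = f(n - k, n, 1 - p)
        assert abs(binom_pmf(k, n, p) - binom_pmf(n - k, n, 1 - p)) < 1e-12
    print(binom_pmf(3, n, p))            # ≈ 0.2522
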
Looking at the expression $f(k, n, p)$ as a function of $k$, there is a $k$ value that maximizes it. This $k$ value can be found by calculating
$$\frac{f(k+1, n, p)}{f(k, n, p)} = \frac{(n-k)p}{(k+1)(1-p)}$$
and comparing it to 1. There is always an integer $M$ that satisfies
$$(n+1)p - 1 \le M < (n+1)p.$$
$f(k, n, p)$ is monotone increasing for $k < M$ and monotone decreasing for $k > M$, with the exception of the case where $(n+1)p$ is an integer. In this case, there are two values for which $f$ is maximal: $(n+1)p$ and $(n+1)p - 1$. $M$ is the most probable outcome of the Bernoulli trials and is called the mode.
Equivalently, $M - p < np \le M + 1 - p$. Taking the floor function, we obtain $M = \lfloor np + p \rfloor$.
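A quick numerical check of the mode formula $M = \lfloor (n+1)p \rfloor$, again with arbitrary illustrative parameters, takes the argmax of the probability mass function:

    from math import floor
    from scipy.stats import binom

    n, p = 10, 0.35                              # illustrative values; (n + 1)p = 3.85 is not an integer
    pmf = [binom.pmf(k, n, p) for k in range(n + 1)]
    mode = max(range(n + 1), key=lambda k: pmf[k])
    print(mode, floor((n + 1) * p))              # both print 3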

Example

Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is
$$f(4, 6, 0.3) = \binom{6}{4} 0.3^4 (1 - 0.3)^{6-4} = 15 \cdot 0.0081 \cdot 0.49 = 0.059535.$$
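The same value can be reproduced numerically; the snippet below is a minimal check using Python's math.comb and scipy:

    from math import comb
    from scipy.stats import binom

    exact = comb(6, 4) * 0.3**4 * 0.7**2      # 15 * 0.0081 * 0.49
    print(exact)                              # ≈ 0.059535
    print(binom.pmf(4, 6, 0.3))               # agrees to floating-point precision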

Cumulative distribution function

The cumulative distribution function can be expressed as:
$$F(k; n, p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i},$$
where $\lfloor k \rfloor$ is the "floor" under $k$; that is, the greatest integer less than or equal to $k$.
It can also be represented in terms of the regularized incomplete beta function, as follows:
$$F(k; n, p) = \Pr(X \le k) = I_{1-p}(n - k, k + 1),$$
which is equivalent to the cumulative distribution function of the beta distribution evaluated at $1 - p$ with parameters $n - k$ and $k + 1$ (a corresponding expression exists in terms of the F-distribution).
Some closed-form bounds for the cumulative distribution function are given below.
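The identity $F(k; n, p) = I_{1-p}(n - k, k + 1)$ can be verified numerically with scipy's regularized incomplete beta function; the parameters below are arbitrary:

    from scipy.stats import binom
    from scipy.special import betainc           # regularized incomplete beta I_x(a, b)

    n, p, k = 12, 0.3, 4                        # illustrative values
    cdf_direct = binom.cdf(k, n, p)
    cdf_beta = betainc(n - k, k + 1, 1 - p)     # F(k; n, p) = I_{1-p}(n - k, k + 1)
    print(cdf_direct, cdf_beta)                 # both ≈ 0.7237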

Properties

Expected value and variance

If $X \sim B(n, p)$, that is, $X$ is a binomially distributed random variable, $n$ being the total number of experiments and $p$ the probability of each experiment yielding a successful result, then the expected value of $X$ is:
$$\operatorname{E}[X] = np.$$
This follows from the linearity of the expected value along with the fact that $X$ is the sum of $n$ identical Bernoulli random variables, each with expected value $p$. In other words, if $X_1, \ldots, X_n$ are identical (and independent) Bernoulli random variables with parameter $p$, then $X = X_1 + \cdots + X_n$ and
$$\operatorname{E}[X] = \operatorname{E}[X_1 + \cdots + X_n] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = np.$$
The variance is:
$$\operatorname{Var}(X) = np(1 - p).$$
This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.
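The identities $\operatorname{E}[X] = np$ and $\operatorname{Var}(X) = np(1-p)$ can be illustrated with a small Monte Carlo check (sample size and parameters chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 20, 0.25                              # illustrative parameters
    draws = rng.binomial(n, p, size=200_000)     # simulate many B(n, p) variables
    print(draws.mean(), n * p)                   # both ≈ 5.0
    print(draws.var(), n * p * (1 - p))          # both ≈ 3.75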

Higher moments

The first 6 central moments, defined as $\mu_c = \operatorname{E}\left[(X - \operatorname{E}[X])^c\right]$, are given by
$$\begin{aligned}
\mu_1 &= 0, \\
\mu_2 &= npq, \\
\mu_3 &= npq(q - p), \\
\mu_4 &= npq\bigl(1 + 3(n - 2)pq\bigr), \\
\mu_5 &= npq(q - p)\bigl(1 + 2(5n - 6)pq\bigr), \\
\mu_6 &= npq\bigl(1 - 30pq(1 - 4pq) + 5npq(5 - 26pq) + 15n^2 p^2 q^2\bigr),
\end{aligned}$$
where $q = 1 - p$.
The non-central moments satisfy
$$\operatorname{E}[X] = np, \qquad \operatorname{E}[X^2] = np(1-p) + n^2 p^2,$$
and in general
$$\operatorname{E}[X^c] = \sum_{k=0}^{c} \left\{ {c \atop k} \right\} n^{\underline{k}} p^k,$$
where $\left\{ {c \atop k} \right\}$ are the Stirling numbers of the second kind, and $n^{\underline{k}} = n(n-1)\cdots(n-k+1)$ is the $k$-th falling power of $n$.
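The falling-power identity for the raw moments can be verified directly; in the sketch below the Stirling numbers of the second kind are generated by their standard recurrence, and all parameter values are illustrative:

    from math import comb

    def stirling2(c, k):
        """Stirling number of the second kind S(c, k), via the standard recurrence."""
        if k == c:
            return 1
        if k == 0 or k > c:
            return 0
        return k * stirling2(c - 1, k) + stirling2(c - 1, k - 1)

    def falling(n, k):
        """k-th falling power of n: n (n - 1) ... (n - k + 1)."""
        out = 1
        for i in range(k):
            out *= n - i
        return out

    def raw_moment(c, n, p):
        """E[X^c] via the Stirling-number identity."""
        return sum(stirling2(c, k) * falling(n, k) * p**k for k in range(c + 1))

    def raw_moment_direct(c, n, p):
        """E[X^c] by summing k^c against the probability mass function."""
        return sum(k**c * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

    n, p = 9, 0.4                                             # illustrative values
    for c in range(1, 5):
        print(raw_moment(c, n, p), raw_moment_direct(c, n, p))   # each pair agrees
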
A simple bound follows by bounding the binomial moments via the higher Poisson moments:
$$\operatorname{E}[X^c] \le \left(\frac{c}{\ln\!\left(\frac{c}{np} + 1\right)}\right)^{c} \le (np)^c \exp\!\left(\frac{c^2}{2np}\right).$$
This shows that if $c = O(\sqrt{np}\,)$, then $\operatorname{E}[X^c]$ is at most a constant factor away from $(np)^c$.
The moment-generating function is $M_X(t) = \operatorname{E}\left[e^{tX}\right] = \left(1 - p + p e^t\right)^n$.

Mode

Usually the mode of a binomial $B(n, p)$ distribution is equal to $\lfloor (n+1)p \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. However, when $(n+1)p$ is an integer and $p$ is neither 0 nor 1, then the distribution has two modes: $(n+1)p$ and $(n+1)p - 1$. When $p$ is equal to 0 or 1, the mode will be 0 and $n$ correspondingly. These cases can be summarized as follows:
$$\text{mode} = \begin{cases} \lfloor (n+1)p \rfloor & \text{if } (n+1)p \text{ is 0 or a noninteger}, \\ (n+1)p \text{ and } (n+1)p - 1 & \text{if } (n+1)p \in \{1, \dots, n\}, \\ n & \text{if } (n+1)p = n + 1. \end{cases}$$
Proof: Let
$$f(k) = \binom{n}{k} p^k q^{n-k}.$$
For $p = 0$ only $f(0)$ has a nonzero value, with $f(0) = 1$. For $p = 1$ we find $f(n) = 1$ and $f(k) = 0$ for $k \ne n$. This proves that the mode is 0 for $p = 0$ and $n$ for $p = 1$.
Let $0 < p < 1$. We find
$$\frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)}.$$
From this follows
$$\begin{aligned} k > (n+1)p - 1 &\Rightarrow f(k+1) < f(k), \\ k = (n+1)p - 1 &\Rightarrow f(k+1) = f(k), \\ k < (n+1)p - 1 &\Rightarrow f(k+1) > f(k). \end{aligned}$$
So when $(n+1)p - 1$ is an integer, both $(n+1)p - 1$ and $(n+1)p$ are modes. In the case that $(n+1)p - 1$ is not an integer, only $\lfloor (n+1)p - 1 \rfloor + 1 = \lfloor (n+1)p \rfloor$ is a mode.

Median

In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established (the first two are illustrated numerically in the sketch after this list):
  • If $np$ is an integer, then the mean, median, and mode coincide and equal $np$.
  • Any median $m$ must lie within the interval $\lfloor np \rfloor \le m \le \lceil np \rceil$.
  • A median $m$ cannot lie too far away from the mean: $|m - np| \le \min\{\ln 2, \max\{p, 1 - p\}\}$.
  • The median is unique and equal to $m = \operatorname{round}(np)$ when $|m - np| \le \min\{p, 1 - p\}$ (except for the case when $p = \tfrac12$ and $n$ is odd).
  • When $p$ is a rational number (with the exception of $p = \tfrac12$ and $n$ odd), the median is unique.
  • When $p = \tfrac12$ and $n$ is odd, any number $m$ in the interval $\tfrac12(n - 1) \le m \le \tfrac12(n + 1)$ is a median of the binomial distribution. If $p = \tfrac12$ and $n$ is even, then $m = \tfrac{n}{2}$ is the unique median.
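A minimal numerical sketch of the first two properties over an arbitrary parameter grid; here binom.ppf(0.5, n, p), the smallest $k$ with $F(k) \ge \tfrac12$, is used as a valid median:

    from math import ceil, floor
    from scipy.stats import binom

    for n in range(1, 40):
        for p in (0.17, 0.3, 0.5, 0.82):        # arbitrary parameter grid
            m = binom.ppf(0.5, n, p)            # smallest k with F(k) >= 1/2, a valid median
            assert floor(n * p) <= m <= ceil(n * p)
            if n * p == int(n * p):             # when np is an integer, the median equals np
                assert m == n * p
    print("median checks passed")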

Tail bounds

For $k \le np$, upper bounds can be derived for the lower tail of the cumulative distribution function $F(k; n, p) = \Pr(X \le k)$, the probability that there are at most $k$ successes. Since $\Pr(X \ge k) = F(n - k; n, 1 - p)$, these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for $k \ge np$.
Hoeffding's inequality yields the simple bound
$$F(k; n, p) \le \exp\left(-2n\left(p - \frac{k}{n}\right)^2\right),$$
which is however not very tight. In particular, for $p = 1$, we have that $F(k; n, p) = 0$ (for fixed $k$ and $n$ with $k < n$), but Hoeffding's bound evaluates to a positive constant.
A sharper bound can be obtained from the Chernoff bound:
$$F(k; n, p) \le \exp\left(-n\, D\!\left(\tfrac{k}{n} \parallel p\right)\right),$$
where $D(a \parallel p)$ is the relative entropy (Kullback–Leibler divergence) between an $a$-coin and a $p$-coin (i.e. between the Bernoulli($a$) and Bernoulli($p$) distributions):
$$D(a \parallel p) = a \ln\frac{a}{p} + (1 - a) \ln\frac{1 - a}{1 - p}.$$
Asymptotically, this bound is reasonably tight.
One can also obtain lower bounds on the tail $F(k; n, p)$, known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that
$$F(k; n, p) \ge \frac{1}{\sqrt{8n\,\tfrac{k}{n}\left(1 - \tfrac{k}{n}\right)}} \exp\left(-n\, D\!\left(\tfrac{k}{n} \parallel p\right)\right),$$
which implies the simpler but looser bound
$$F(k; n, p) \ge \frac{1}{\sqrt{2n}} \exp\left(-n\, D\!\left(\tfrac{k}{n} \parallel p\right)\right).$$
For $p = \tfrac12$ and $k \ge \tfrac{3n}{8}$ for even $n$, it is possible to make the denominator constant:
$$F\!\left(k; n, \tfrac12\right) \ge \frac{1}{15} \exp\left(-16 n \left(\tfrac12 - \tfrac{k}{n}\right)^2\right).$$
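A short comparison of the exact lower-tail probability with the Hoeffding and Chernoff upper bounds above, for arbitrary illustrative parameters:

    from math import exp, log
    from scipy.stats import binom

    def D(a, p):
        """Relative entropy between Bernoulli(a) and Bernoulli(p)."""
        return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

    n, p, k = 100, 0.5, 30                       # illustrative values with k <= np
    exact = binom.cdf(k, n, p)
    hoeffding = exp(-2 * n * (p - k / n) ** 2)
    chernoff = exp(-n * D(k / n, p))
    print(exact, hoeffding, chernoff)            # the Chernoff bound is tighter than Hoeffding here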

Statistical inference

Estimation of parameters

When $n$ is known, the parameter $p$ can be estimated using the proportion of successes:
$$\widehat{p} = \frac{x}{n}.$$
This estimator is found using maximum likelihood estimation and also the method of moments. It is unbiased and has uniformly minimum variance, as proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e. $x$, the number of successes). It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of $\operatorname{Var}(\widehat{p}) = \frac{p(1-p)}{n}$, a property which is used in various ways, such as in Wald's confidence intervals.
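A minimal simulation of the estimator $\widehat{p}$ and its variance $p(1-p)/n$; the true parameter values below are chosen only for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 50, 0.2                               # true parameters, illustrative
    x = rng.binomial(n, p, size=100_000)         # repeated experiments of n trials each
    p_hat = x / n                                # proportion-of-successes estimator for each experiment
    print(p_hat.mean())                          # ≈ 0.2  (unbiased)
    print(p_hat.var(), p * (1 - p) / n)          # both ≈ 0.0032
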
A closed form Bayes estimator for $p$ also exists when using the Beta distribution as a conjugate prior distribution. When using a general $\operatorname{Beta}(\alpha, \beta)$ as a prior, the posterior mean estimator is:
$$\widehat{p}_b = \frac{x + \alpha}{n + \alpha + \beta}.$$
The Bayes estimator is asymptotically efficient and as the sample size approaches infinity ($n \to \infty$), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the prior), admissible and consistent in probability. The Bayes estimator based on the Beta prior can also be used with Thompson sampling.
For the special case of using the standard uniform distribution as a non-informative prior, $\operatorname{Beta}(\alpha = 1, \beta = 1) = U(0, 1)$, the posterior mean estimator becomes:
$$\widehat{p}_b = \frac{x + 1}{n + 2}.$$
This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace.
When relying on Jeffreys prior, the prior is $\operatorname{Beta}(\alpha = \tfrac12, \beta = \tfrac12)$, which leads to the estimator:
$$\widehat{p}_{\text{Jeffreys}} = \frac{x + \tfrac12}{n + 1}.$$
When estimating $p$ with very rare events and a small $n$ (e.g. if $x = 0$), then using the standard estimator leads to $\widehat{p} = 0$, which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator $\widehat{p}_b$, leading to:
$$\widehat{p}_b = \frac{1}{n + 2}.$$
Another method is to use the upper bound of the confidence interval obtained using the rule of three:
$$\widehat{p}_{\text{rule of 3}} = \frac{3}{n}.$$
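The estimators above differ most when no successes are observed; the following sketch evaluates them for $x = 0$ with an arbitrarily chosen $n$:

    n, x = 25, 0                                  # no successes observed, illustrative n
    p_mle = x / n                                 # standard estimator: 0
    p_uniform = (x + 1) / (n + 2)                 # posterior mean under a uniform prior (rule of succession)
    p_jeffreys = (x + 0.5) / (n + 1)              # posterior mean under the Jeffreys prior
    p_rule_of_three = 3 / n                       # upper confidence bound from the rule of three
    print(p_mle, p_uniform, p_jeffreys, p_rule_of_three)   # 0.0, ≈ 0.037, ≈ 0.019, 0.12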

Confidence intervals for the parameter p

Even for quite large values of $n$, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed.
In the equations for confidence intervals below, the variables have the following meaning:
  • $n_1$ is the number of successes out of $n$, the total number of trials,
  • $\widehat{p} = \frac{n_1}{n}$ is the proportion of successes,
  • $z$ is the $1 - \tfrac12\alpha$ quantile of a standard normal distribution (i.e. the probit) corresponding to the target error rate $\alpha$. For example, for a 95% confidence level the error $\alpha = 0.05$, so $1 - \tfrac12\alpha = 0.975$ and $z = 1.96$.
A continuity correction of $\tfrac{0.5}{n}$ may be added.

Agresti–Coull method

Here the estimate of $p$ is modified to
$$\tilde{p} = \frac{n_1 + \frac{1}{2} z^2}{n + z^2}.$$
This method works well for $n > 10$ and $n_1 \ne 0, n$. For $n_1 = 0$ or $n_1 = n$, use the Wilson (score) method below.
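A sketch of the Agresti–Coull adjustment for an illustrative sample, combining the adjusted estimate $\tilde{p}$ with the usual normal-approximation half-width evaluated at $\tilde{p}$ and $n + z^2$:

    from math import sqrt
    from scipy.stats import norm

    n, n1 = 100, 20                               # trials and successes, illustrative
    z = norm.ppf(1 - 0.05 / 2)                    # ≈ 1.96 for a 95% interval
    p_tilde = (n1 + z**2 / 2) / (n + z**2)        # adjusted estimate of p
    half_width = z * sqrt(p_tilde * (1 - p_tilde) / (n + z**2))
    print(p_tilde - half_width, p_tilde + half_width)   # ≈ (0.13, 0.29)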

Arcsine method

In this method, the variance-stabilizing arcsine transformation is applied to the observed proportion, giving the approximate confidence interval
$$\sin^2\!\left(\arcsin\!\left(\sqrt{\widehat{p}\,}\right) \pm \frac{z}{2\sqrt{n}}\right).$$

Wilson (score) method

The notation in the formula below differs from the previous formulas in two respects:
  • Firstly, $z_x$ has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the $x$th quantile of the standard normal distribution', rather than being determined by the target error rate as in the formulas above.
  • Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use $z = z_{\alpha/2}$ to get the lower bound, or use $z = z_{1-\alpha/2}$ to get the upper bound. For example: for a 95% confidence level the error $\alpha = 0.05$, so one gets the lower bound by using $z = z_{\alpha/2} = z_{0.025} = -1.96$, and one gets the upper bound by using $z = z_{1-\alpha/2} = z_{0.975} = 1.96$.
Each bound is then given by
$$\frac{\widehat{p} + \frac{z^2}{2n} + z\sqrt{\frac{\widehat{p}(1 - \widehat{p})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}.$$
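A sketch of the Wilson score bounds following the convention above (the sample values are arbitrary; $z_{\alpha/2}$ gives the lower bound and $z_{1-\alpha/2}$ the upper bound):

    from math import sqrt
    from scipy.stats import norm

    n, n1 = 100, 20                                # trials and successes, illustrative
    p_hat = n1 / n

    def wilson_bound(z):
        """One Wilson score bound for the given standard-normal quantile z."""
        centre = p_hat + z**2 / (2 * n)
        spread = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
        return (centre + spread) / (1 + z**2 / n)

    alpha = 0.05
    print(wilson_bound(norm.ppf(alpha / 2)))       # lower bound (z ≈ -1.96), ≈ 0.133
    print(wilson_bound(norm.ppf(1 - alpha / 2)))   # upper bound (z ≈ +1.96), ≈ 0.289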

Comparison

The so-called "exact" method is the most conservative.
The Wald method, although commonly recommended in textbooks, is the most biased.