Miller–Rabin primality test


The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test.
It is of historical significance in the search for a polynomial-time deterministic primality test. Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known.
Gary L. Miller discovered the test in 1976. Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis. Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm in 1980.

Mathematical concepts

Similarly to the Fermat and Solovay–Strassen tests, the Miller–Rabin primality test checks whether a specific property, which is known to hold for prime values, holds for the number under testing.

Strong probable primes

The property is the following. For a given odd integer n > 2, let's write n − 1 as 2^s · d, where s is a positive integer and d is an odd positive integer. Let's consider an integer a, called a base, which is coprime to n.
Then, n is said to be a strong probable prime to base a if one of these congruence relations holds:
  • a^d ≡ 1 (mod n), or
  • a^(2^r · d) ≡ −1 (mod n) for some 0 ≤ r < s.
This simplifies to first checking a^d mod n and then a^(2^r · d) mod n for successive values of r. For each value of r, the value of the expression may be calculated using the value obtained for the previous value of r by squaring under the modulus n.
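The check just described can be written directly in code. Below is a minimal Python sketch (the function name is illustrative, not from any source):

```python
def is_strong_probable_prime(n: int, a: int) -> bool:
    """Check whether odd n > 2 is a strong probable prime to base a."""
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)              # a^d mod n
    if x == 1 or x == n - 1:      # a^d ≡ ±1 (mod n): the test passes
        return True
    for _ in range(s - 1):        # successive squarings give a^(2^r * d)
        x = pow(x, 2, n)
        if x == n - 1:            # found a^(2^r * d) ≡ -1 (mod n)
            return True
    return False
```

Note that each squaring reuses the previous value of x, exactly as described above.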
The idea beneath this test is that when n is an odd prime, it passes the test because of two facts:
  • by Fermat's little theorem, a^(n − 1) ≡ 1 (mod n) (this property alone defines the weaker notion of probable prime to base a, on which the Fermat test is based);
  • the only square roots of 1 modulo n are 1 and −1.
Hence, by contraposition, if n is not a strong probable prime to base a, then n is definitely composite, and a is called a witness for the compositeness of n.
However, this property is not an exact characterization of prime numbers. If n is composite, it may nonetheless be a strong probable prime to base a, in which case it is called a strong pseudoprime, and a is a strong liar.

Choices of bases

No composite number is a strong pseudoprime to all bases at the same time. However no simple way of finding a witness is known. A naïve solution is to try all possible bases, which yields an inefficient deterministic algorithm. The Miller test is a more efficient variant of this.
Another solution is to pick a base at random. This yields a fast probabilistic test. When n is composite, most bases are witnesses, so the test will detect n as composite with reasonably high probability. We can quickly reduce the probability of a false positive to an arbitrarily small rate, by combining the outcomes of as many independently chosen bases as necessary to achieve that rate. This is the Miller–Rabin test. There seem to be diminishing returns in trying many bases, because if n is a pseudoprime to some base, then it seems more likely to be a pseudoprime to another base.
Note that a^d ≡ 1 (mod n) holds trivially for a ≡ 1 (mod n), because the congruence relation is compatible with exponentiation. And a^d = a^(2^0 · d) ≡ −1 (mod n) holds trivially for a ≡ −1 (mod n) since d is odd, for the same reason. That is why random bases a are usually chosen in the interval 1 < a < n − 1.
For testing arbitrarily large n, choosing bases at random is essential, as we don't know the distribution of witnesses and strong liars among the numbers 2, 3, ..., n − 2.
However, a pre-selected set of a few small bases guarantees the identification of all composites up to a pre-computed maximum. This maximum is generally quite large compared to the bases. This gives very fast deterministic tests for small enough n.

Proofs

Here is a proof that, if n is a prime, then the only square roots of 1 modulo n are 1 and −1. Indeed, suppose that x^2 ≡ 1 (mod n). Then n divides x^2 − 1 = (x − 1)(x + 1). Because n is prime, it must divide one of the two factors, which means that x ≡ 1 (mod n) or x ≡ −1 (mod n).
Here is a proof that, if n is an odd prime, then it is a strong probable prime to base a. By Fermat's little theorem, a^(n − 1) = a^(2^s · d) ≡ 1 (mod n). Consider the sequence a^d, a^(2d), ..., a^(2^s · d), in which each term is the square of the previous one and the last term is congruent to 1. If every term is congruent to 1, then in particular a^d ≡ 1 (mod n). Otherwise, let a^(2^r · d) be the last term not congruent to 1; its square is congruent to 1, so by the previous fact it must be congruent to −1, that is, a^(2^r · d) ≡ −1 (mod n) for some 0 ≤ r < s.

Example

Suppose we wish to determine if n = 221 is prime. We write n − 1 = 220 = 2^2 · 55, so that we have s = 2 and d = 55. We randomly select a number a such that 1 < a < n − 1.
Say a = 174:
  • a^d mod n = 174^55 mod 221 = 47, which is neither 1 nor n − 1;
  • a^(2d) mod n = 174^110 mod 221 = 220 = n − 1.
Since a^(2d) ≡ −1 (mod n), either 221 is prime, or 174 is a strong liar for 221.
We try another random a, this time choosing a = 137:
  • a^d mod n = 137^55 mod 221 = 188, which is neither 1 nor n − 1;
  • a^(2d) mod n = 137^110 mod 221 = 205 ≠ n − 1.
Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17). However, the example with 341 in a later section shows how these calculations can sometimes produce a factor of n.
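These computations are easy to reproduce with Python's built-in three-argument pow, which performs modular exponentiation:

```python
# Reproducing the worked example: n = 221, n - 1 = 220 = 2**2 * 55.
n, d = 221, 55

print(pow(174, d, n))       # 47: neither 1 nor 220, so keep squaring
print(pow(174, 2 * d, n))   # 220 = n - 1: 174 passes (it is a strong liar)
print(pow(137, d, n))       # 188: neither 1 nor 220
print(pow(137, 2 * d, n))   # 205: neither 1 nor 220, so 137 is a witness
```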
For a practical guide to choosing the value of a, see Testing against small sets of bases.

Miller–Rabin test

The algorithm can be written in pseudocode as follows. The parameter k determines the accuracy of the test. The greater the number of rounds, the more accurate the result.
Input #1: n > 2, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output: “composite” if n is found to be composite, “probably prime” otherwise
let s > 0 and d odd > 0 such that n − 1 = 2^s · d  # by factoring out powers of 2 from n − 1
repeat k times:
    a ← random(2, n − 2)  # n is always a probable prime to base 1 and n − 1
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then  # nontrivial square root of 1 modulo n
            return “composite”
        x ← y
    if y ≠ 1 then  # that is, a^(n − 1) mod n ≠ 1, so n fails the Fermat test
        return “composite”
return “probably prime”
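Translated into Python, the pseudocode might look like the following sketch (the for/else construct replaces the explicit y bookkeeping; names are illustrative, and n is assumed odd and greater than 4):

```python
import random

def miller_rabin(n: int, k: int) -> str:
    """Miller–Rabin test for an odd integer n > 4, with k random rounds."""
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)   # random base in [2, n - 2]
        x = pow(a, d, n)                 # a^d mod n
        if x == 1 or x == n - 1:
            continue                     # a^d ≡ ±1 (mod n): this round passes
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break                    # a^(2^r * d) ≡ -1 (mod n): round passes
        else:
            return "composite"           # no -1 found: a is a witness
    return "probably prime"
```

Since primes always pass each round, the answer "probably prime" is the only possible output for a prime input.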

Complexity

Using modular exponentiation by repeated squaring, the running time of this algorithm is O(k n^3) for an n-digit number, where k is the number of rounds performed; thus this is an efficient, polynomial-time algorithm. FFT-based multiplication, for example the Schönhage–Strassen algorithm, can decrease the running time to O(k n^2 log n log log n) = Õ(k n^2).

Accuracy

The error made by the primality test is measured by the probability that a composite number is declared probably prime. The more bases a are tried, the better the accuracy of the test. It can be shown that if n is composite, then at most 1/4 of the bases a are strong liars for n. As a consequence, if n is composite then running k iterations of the Miller–Rabin test will declare n probably prime with a probability at most 4^(−k).
This is an improvement over the Solovay–Strassen test, whose worst-case error bound is 2^(−k). Moreover, the Miller–Rabin test is strictly stronger than the Solovay–Strassen test in the sense that for every composite n, the set of strong liars for n is a subset of the set of Euler liars for n, and for many n, the subset is proper.
In addition, for large values of n, the probability for a composite number to be declared probably prime is often significantly smaller than 4^(−k). For instance, for most numbers n, this probability is bounded by 8^(−k); the proportion of numbers n which invalidate this upper bound vanishes as we consider larger values of n. Hence the average case has a much better accuracy than 4^(−k), a fact which can be exploited for generating probable primes. However, such improved error bounds should not be relied upon to verify primes whose probability distribution is not controlled, since a cryptographic adversary might send a carefully chosen pseudoprime in order to defeat the primality test. In such contexts, only the worst-case error bound of 4^(−k) can be relied upon.
The above error measure is the probability for a composite number to be declared as a strong probable prime after k rounds of testing; in mathematical words, it is the conditional probability Pr(MR_k | ¬P), where P is the event that the number being tested is prime, and MR_k is the event that it passes the Miller–Rabin test with k rounds. We are often interested instead in the inverse conditional probability Pr(¬P | MR_k): the probability that a number which has been declared as a strong probable prime is in fact composite. These two probabilities are related by Bayes' law:

    Pr(¬P | MR_k) = Pr(MR_k | ¬P) Pr(¬P) / (Pr(MR_k | P) Pr(P) + Pr(MR_k | ¬P) Pr(¬P))
                  = Pr(MR_k | ¬P) Pr(¬P) / (Pr(P) + Pr(MR_k | ¬P) Pr(¬P))

In the last equation, we simplified the expression using the fact that all prime numbers are correctly reported as strong probable primes, so that Pr(MR_k | P) = 1. By dropping the left part of the denominator, we derive a simple upper bound:

    Pr(¬P | MR_k) < Pr(MR_k | ¬P) · (Pr(¬P) / Pr(P))

Hence this conditional probability is related not only to the error measure discussed above, which is bounded by 4^(−k), but also to the probability distribution of the input number. In the general case, as said earlier, this distribution is controlled by a cryptographic adversary, thus unknown, so we cannot deduce much about Pr(¬P | MR_k). However, in the case when we use the Miller–Rabin test to generate primes, the distribution is chosen by the generator itself, so we can exploit this result.

Combining multiple tests

Caldwell points out that strong probable prime tests to different bases sometimes provide an additional primality test. Just as the strong test checks for the existence of more than two square roots of 1 modulo n, two such tests can sometimes check for the existence of more than two square roots of −1.
Suppose that, in the course of our probable prime tests, we come across two bases a and a′ for which a^(2^i · d) ≡ a′^(2^j · d) ≡ −1 (mod n) with i, j ≥ 1. This means that we have computed two square roots of −1, namely a^(2^(i−1) · d) and a′^(2^(j−1) · d), as part of the testing, and can check whether a^(2^(i−1) · d) ≡ ±a′^(2^(j−1) · d) (mod n). This must always hold if n is prime; if not, we have found more than two square roots of −1 and proved that n is composite.
This is only possible if n ≡ 1 (mod 4), and we pass probable prime tests with two or more bases a such that a^d ≢ ±1 (mod n), but it is an inexpensive addition to the basic Miller–Rabin test.

Deterministic variants

Miller test

The Miller–Rabin algorithm can be made deterministic by trying all possible values of a below a certain limit. Taking n as the limit would imply Θ(n) trials, hence the running time would be exponential with respect to the size log n of the input. To improve the running time, the challenge is then to lower the limit as much as possible while keeping the test reliable.
If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group (Z/nZ)*, which means that if we test all a from a set which generates (Z/nZ)*, one of them must lie outside the said subgroup, hence must be a witness for the compositeness of n. Assuming the truth of the generalized Riemann hypothesis (GRH), it is known that the group is generated by its elements smaller than O((ln n)^2), which was already noted by Miller. The constant involved in the big O notation was reduced to 2 by Eric Bach. This leads to the following primality testing algorithm, known as the Miller test, which is deterministic assuming the extended Riemann hypothesis:
Input: n > 2, an odd integer to be tested for primality
Output: “composite” if n is composite, “prime” otherwise
let s > 0 and d odd > 0 such that n − 1 = 2^s · d  # by factoring out powers of 2 from n − 1
for all a in the range [2, min(n − 2, ⌊2(ln n)^2⌋)]:
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then  # nontrivial square root of 1 modulo n
            return “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “prime”
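For illustration, here is a Python sketch of the Miller test, using Bach's bound 2(ln n)^2 for the range of bases; it reports "prime" correctly only under the assumption of the extended Riemann hypothesis:

```python
import math

def miller_test(n: int) -> str:
    """Deterministic Miller test for odd n > 2; correctness assumes ERH."""
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    limit = min(n - 2, math.floor(2 * math.log(n) ** 2))  # Bach's bound
    for a in range(2, limit + 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue                     # this base is not a witness
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return "composite"           # found a witness
    return "prime"
```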
The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters.
The running time of the algorithm is, in the soft-O notation, Õ((log n)^4).
The Miller test is not used in practice. For most purposes, proper use of the probabilistic Miller–Rabin test or the Baillie–PSW primality test gives sufficient confidence while running much faster. It is also slower in practice than commonly used proof methods such as APR-CL and ECPP which give results that do not rely on unproven assumptions. For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by the AKS primality test, which also does not rely on unproven assumptions.

Testing against small sets of bases

When the number n to be tested is small, trying all a < 2(ln n)^2 is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge, Wagstaff and Jaeschke have verified that
  • if n < 2,047, it is enough to test a = 2;
  • if n < 1,373,653, it is enough to test a = 2 and 3;
  • if n < 9,080,191, it is enough to test a = 31 and 73;
  • if n < 25,326,001, it is enough to test a = 2, 3, and 5;
  • if n < 3,215,031,751, it is enough to test a = 2, 3, 5, and 7;
  • if n < 4,759,123,141, it is enough to test a = 2, 7, and 61;
  • if n < 1,122,004,669,633, it is enough to test a = 2, 13, 23, and 1662803;
  • if n < 2,152,302,898,747, it is enough to test a = 2, 3, 5, 7, and 11;
  • if n < 3,474,749,660,383, it is enough to test a = 2, 3, 5, 7, 11, and 13;
  • if n < 341,550,071,728,321, it is enough to test a = 2, 3, 5, 7, 11, 13, and 17.
  • adding a test with a = 19 does not improve the preceding bound.
Using the 2010 work of Feitsma and Galway enumerating all base-2 pseudoprimes up to 2^64, this was extended (with the first result later shown using different methods in Jiang and Deng):
  • if n < 3,825,123,056,546,413,051, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, and 23.
  • adding tests with a = 29 and 31 does not improve the preceding bound.
  • if n < 2^64 = 18,446,744,073,709,551,616, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.
Sorenson and Webster verify the above and calculate precise results for numbers larger than 64 bits:
  • if n < 318,665,857,834,031,151,167,461, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.
  • if n < 3,317,044,064,679,887,385,961,981, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and 41.
Zhang provides further predictions, e.g. that 18 consecutive primes would be enough for n < 1,543,267,864,443,420,616,877,677,640,751,301. The numbers involved are too big to test by calculation.
Other criteria of this sort, often more efficient than those shown above, exist by removing the requirement for the bases to be consecutive. For example, the two bases a = 336,781,006,125 and 9,639,812,373,923,155 are sufficient for n < 1,050,535,501; seven bases are sufficient for n < 2^64. Further optimization by Bradley Berg uses a hash table mapping ranges to bases, allowing n < 2^64 to be tested with at most 3 bases. These criteria give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
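Using the twelve bases {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37} quoted above, a deterministic test for odd n below 2^64 can be sketched in Python as follows:

```python
# Deterministic for odd 2 < n < 2**64, per the 12-base result cited above.
BASES_64 = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime_64(n: int) -> bool:
    """Primality of an odd integer 2 < n < 2**64 via fixed-base Miller–Rabin."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in BASES_64:
        if a % n == 0:                  # only possible when n itself is <= 37
            continue
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # a is a witness: n is composite
    return True
```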
There is a small list of potential witnesses for every possible input size. However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbers n whose smallest compositeness witness is at least (ln n)^(1/(3 ln ln ln n)). They also argue heuristically that the smallest number w such that every composite number below n has a compositeness witness less than w should be of order Θ(log n log log n).

Variants for finding factors

By inserting greatest common divisor calculations into the above algorithm, we can sometimes obtain a factor of n instead of merely determining that n is composite. This occurs for example when n is a probable prime to base a but not a strong probable prime to base a.
If x is a nontrivial square root of 1 modulo n, that is, x^2 ≡ 1 (mod n) but x ≢ ±1 (mod n), then:
  • since x^2 ≡ 1 (mod n), we know that n divides x^2 − 1 = (x − 1)(x + 1);
  • since x ≢ ±1 (mod n), we know that n divides neither x − 1 nor x + 1.
From this we deduce that gcd(x − 1, n) and gcd(x + 1, n) are nontrivial factors of n. Hence, if factoring is a goal, these gcd calculations can be inserted into the algorithm at little additional computational cost. This leads to the following pseudocode, where the changed line is marked with a comment:
Input #1: n > 2, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output:
    (“multiple of”, m) if a nontrivial factor m of n is found,
    “composite” if n is otherwise found to be composite,
    “probably prime” otherwise
let s > 0 and d odd > 0 such that n − 1 = 2^s · d  # by factoring out powers of 2 from n − 1
repeat k times:
    a ← random(2, n − 2)  # n is always a probable prime to base 1 and n − 1
    x ← a^d mod n
    repeat s times:
        y ← x^2 mod n
        if y = 1 and x ≠ 1 and x ≠ n − 1 then  # nontrivial square root of 1 modulo n
            return (“multiple of”, gcd(x − 1, n))  # changed from “composite”
        x ← y
    if y ≠ 1 then
        return “composite”
return “probably prime”
This is not a probabilistic factorization algorithm because it is only able to find factors for numbers n which are pseudoprime to base a. For other numbers, the algorithm only returns "composite" with no further information.
For example, consider n = 341 and a = 2. We have n − 1 = 340 = 2^2 · 85. Then 2^85 mod 341 = 32 and 32^2 mod 341 = 1. This tells us that n is a pseudoprime to base 2, but not a strong pseudoprime to base 2. By computing a gcd at this stage, we find a factor of 341: gcd(32 − 1, 341) = 31. Indeed, 341 = 11 · 31.
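This factor extraction can be checked with a few lines of Python, using math.gcd to recover the factor:

```python
import math

n = 341                       # n - 1 = 340 = 2**2 * 85, so d = 85
x = pow(2, 85, n)             # 32, a value with x != 1 and x != n - 1
y = pow(x, 2, n)              # 1, so x is a nontrivial square root of 1 mod n
factor = math.gcd(x - 1, n)   # gcd(31, 341) = 31
print(x, y, factor, n // factor)   # prints: 32 1 31 11
```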
The same technique can be applied to the square roots of any other value, particularly the square roots of −1 mentioned in the section on combining multiple tests above. If two strong probable prime tests find square roots of −1, x ≡ a^(2^(i−1) · d) and y ≡ a′^(2^(j−1) · d) (mod n), but x ≢ ±y (mod n), then gcd(x − y, n) and gcd(x + y, n) are nontrivial factors of n. For example, a number may be a strong pseudoprime to both bases 2 and 7 while the two tests compute incongruent square roots of −1 along the way; the gcd of their difference (or sum) with n then gives a nontrivial factor.

Generation of probable primes

The Miller–Rabin test can be used to generate strong probable primes, simply by drawing integers at random until one passes the test. This algorithm terminates almost surely, since at each iteration there is a nonzero chance of drawing a prime number. The pseudocode for generating b-bit strong probable primes is as follows:
Input #1: b, the number of bits of the result
Input #2: k, the number of rounds of testing to perform
Output: a strong probable prime n
while True:
    pick a random odd integer n in the range [2^(b−1), 2^b − 1]
    if the Miller–Rabin test with inputs n and k returns “probably prime” then
        return n
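A Python sketch of this generator follows, bundling a helper that performs the k Miller–Rabin rounds (all names are illustrative):

```python
import random

def _passes_miller_rabin(n: int, k: int) -> bool:
    """k random rounds of Miller–Rabin on an odd integer n > 4."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def generate_probable_prime(b: int, k: int) -> int:
    """Draw random odd b-bit integers until one passes k rounds of testing."""
    while True:
        n = random.randrange(2**(b - 1) + 1, 2**b, 2)   # random odd b-bit integer
        if _passes_miller_rabin(n, k):
            return n
```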

Complexity

Of course the worst-case running time is infinite, since the outer loop may never terminate, but that happens with probability zero. As per the geometric distribution, the expected number of draws is 1/Pr(MR_k) (reusing the notation from earlier).
As any prime number passes the test, the probability of being prime gives a coarse lower bound to the probability of passing the test. If we draw odd integers uniformly in the range [2^(b−1), 2^b − 1], then we get:

    Pr(MR_k) ≥ Pr(P) = (π(2^b) − π(2^(b−1))) / 2^(b−2)

where π is the prime-counting function. Using an asymptotic expansion of π (an extension of the prime number theorem), we can approximate this probability when b grows towards infinity. We find:

    Pr(P) ∼ 2 / (b ln 2)

Hence we can expect the generator to run no more Miller–Rabin tests than a number proportional to b. Taking into account the worst-case complexity of each Miller–Rabin test (see earlier), the expected running time of the generator with inputs b and k is then bounded by O(k b^4) (or Õ(k b^3) using FFT-based multiplication).
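As a quick numeric check, the expected number of draws 1/Pr(P) ≈ b ln 2 / 2 can be tabulated; this is a rough asymptotic approximation, not an exact count:

```python
import math

def expected_draws(b: int) -> float:
    """Approximate expected number of random odd b-bit draws, 1/Pr(P) ~ b*ln(2)/2."""
    return b * math.log(2) / 2

for b in (256, 1024, 4096):
    print(b, round(expected_draws(b)))   # prints roughly 89, 355, 1420 draws
```

For 1024-bit candidates this predicts roughly 355 random draws on average before a probable prime is found.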

Accuracy

The error measure of this generator is the probability that it outputs a composite number.
Using the relation between conditional probabilities (shown in the Accuracy section above) and the asymptotic behavior of Pr(P) (shown just before), this error measure can be given a coarse upper bound:

    Pr(¬P | MR_k) < 4^(−k) · (Pr(¬P) / Pr(P)) ∼ 4^(−k) · (b ln 2 / 2 − 1)

Hence, for large enough b, this error measure is less than (ln 2 / 2) · b · 4^(−k). However, much better bounds exist.
Using the fact that the Miller–Rabin test itself often has an error bound much smaller than 4^(−k) (see earlier), Damgård, Landrock and Pomerance derived several error bounds for the generator, with various classes of parameters b and k. These error bounds allow an implementor to choose a reasonable k for a desired accuracy.
One of these error bounds is 4^(−k), which holds for all b ≥ 2. Again this simple bound can be improved for large values of b. For instance, another bound derived by the same authors is:

    (1/7) · b^(15/4) · 2^(−b/2 − 2k)

which holds for all b ≥ 21 and k ≥ b/4. This bound is smaller than 4^(−k) as soon as b ≥ 32.