
Last updated: March 18, 2024
In a previous tutorial, we studied the general laws and computational complexity of cryptographic algorithms. Many of these algorithms rely on the generation and manipulation of very large primes. Reasonable assurance of their primality is, therefore, an essential aspect.
The most immediate verification of primality is trial division. This consists of attempting to divide n successively by every integer 2, 3, ..., ⌊√n⌋. However, it requires time that is exponential, not polynomial, in the number of bits of n.
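As a baseline, trial division can be sketched in a few lines of Python (the function name trial_division is our own):

```python
import math

def trial_division(n: int) -> bool:
    """Return True if n is prime, by dividing by every integer up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False  # d divides n, so n is composite
    return True

print(trial_division(97))   # True
print(trial_division(221))  # False: 221 = 13 * 17
```

For a β-bit number, √n is about 2^(β/2), so the number of candidate divisors grows exponentially with β.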
In this tutorial, we’ll study one of the methods used to verify whether n is prime without resorting to factorization: the Miller-Rabin method. We’ll introduce this method as a way of reducing the error rate of a simpler procedure: the pseudoprimality test.
Congruence modulo n is a congruence relation, meaning that it’s an equivalence relation that is compatible with the operations of addition, subtraction, and multiplication. We write:
a ≡ b (mod n)
The parentheses mean that (mod n) applies to the entire equation, not just to the right-hand side (here b).
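For instance, 38 ≡ 3 (mod 7), since both sides leave the same remainder on division by 7:

```python
# 38 ≡ 3 (mod 7): both sides have the same remainder modulo 7
print(38 % 7, 3 % 7)      # 3 3
print(38 % 7 == 3 % 7)    # True
```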
It is said that n is a base-a pseudoprime if n is composite and:
a^(n-1) ≡ 1 (mod n)
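The smallest base-2 pseudoprime is 341 = 11 · 31, which we can verify with Python’s three-argument pow (fast modular exponentiation):

```python
# 341 = 11 * 31 is composite, yet 2^340 ≡ 1 (mod 341),
# so 341 is a base-2 pseudoprime.
print(pow(2, 340, 341))  # 1
```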
This definition allows us to arrive at some fundamental conclusions. By Fermat’s little theorem, if n is prime, then a^(n-1) ≡ 1 (mod n) holds for every a in {1, 2, ..., n-1}. Contrapositively, if a^(n-1) ≢ 1 (mod n) for some such a, then n is definitely composite. The following pseudocode implements the primality test based on this last observation, for a = 2:
algorithm Pseudoprime(n):
    // INPUT
    //   n = the number to test for pseudoprimality
    // OUTPUT
    //   COMPOSITE if n is definitely composite
    //   PRIME if n is probably prime

    if 2^(n-1) mod n != 1:
        return COMPOSITE
    else:
        return PRIME
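As a sketch, the pseudocode translates directly to Python, again using the built-in three-argument pow:

```python
def pseudoprime(n: int) -> str:
    """Base-2 pseudoprimality test: COMPOSITE is certain, PRIME only probable."""
    if pow(2, n - 1, n) != 1:
        return "COMPOSITE"
    return "PRIME"

print(pseudoprime(7))    # PRIME (7 really is prime)
print(pseudoprime(15))   # COMPOSITE
print(pseudoprime(341))  # PRIME -- a false positive, since 341 = 11 * 31
```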
The PSEUDOPRIME function produces errors when n is composite but is also a base-2 pseudoprime: in that case, it wrongly returns PRIME. Whenever it returns COMPOSITE, the result is always correct. The following section shows how we can reduce this error rate.
The Miller-Rabin test implements two modifications to the PSEUDOPRIME function. The first is to use several randomly selected base values a rather than a single one. The second relies on an important number theory theorem, which states the following:
Given two numbers p and e, where p is an odd prime number and e ≥ 1, the equation:
x^2 ≡ 1 (mod p^e)
has only the solutions x ≡ 1 (mod p^e) and x ≡ -1 (mod p^e).
A corollary of this theorem states that if there exists a non-trivial square root of 1 modulo n, that is, an x such that:
x^2 ≡ 1 (mod n), with x ≢ ±1 (mod n)
then n is composite.
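A brute-force check in Python illustrates both the theorem and the corollary: modulo the odd prime power 49 = 7^2, the only square roots of 1 are ±1, while the composite 21 admits non-trivial ones (the helper name is our own):

```python
def square_roots_of_one(n: int) -> list[int]:
    """All x in [0, n) with x^2 ≡ 1 (mod n), found by brute force."""
    return [x for x in range(n) if (x * x) % n == 1]

print(square_roots_of_one(49))  # [1, 48] -- only ±1 modulo 7^2
print(square_roots_of_one(21))  # [1, 8, 13, 20] -- 8 and 13 are non-trivial
```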
On the basis of this corollary, the Miller-Rabin test calculates each modular exponentiation and checks if a non-trivial square root of 1 modulo n appears along the way. In this case, the test ends with the COMPOSITE result.
The Miller-Rabin test is a probabilistic search for proof that n is composite.
For each base a, it would be possible to repeat the check we used in the PSEUDOPRIME procedure, with a in place of 2, testing whether:
a^(n-1) mod n ≠ 1
However, it is more efficient to introduce the concept of witness: a is said to be a witness that n is composite if it is possible to use a to prove that n is composite.
The concept of witness is not a simple implementation question. It is fundamental from a theoretical point of view, and it imposes a stronger condition than the one used in the pseudoprimality test.
We’ll leave out the technical details of building this procedure. Suppose we have a WITNESS(a, n) function. If this function returns TRUE, then a is a witness that n is composite with absolute certainty.
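Although we omit the details here, the standard textbook construction of WITNESS writes n - 1 = 2^t · u with u odd, computes a^u mod n, and then squares t times while watching for a non-trivial square root of 1. A Python sketch under that construction:

```python
def witness(a: int, n: int) -> bool:
    """Return True if a proves that the odd number n > 2 is composite."""
    # Write n - 1 = 2^t * u with u odd
    u, t = n - 1, 0
    while u % 2 == 0:
        u //= 2
        t += 1
    x = pow(a, u, n)
    for _ in range(t):
        prev, x = x, (x * x) % n
        # A non-trivial square root of 1 modulo n proves compositeness
        if x == 1 and prev not in (1, n - 1):
            return True
    # If a^(n-1) ≢ 1 (mod n), the Fermat check proves compositeness
    return x != 1

print(witness(2, 341))  # True: base 2 proves 341 composite
print(witness(2, 7))    # False: no proof, 7 is prime
```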
In pseudocode, we give the Miller-Rabin procedure, where s is the number of base values to test, randomly chosen from {1, 2, ..., n - 1}:
algorithm MillerRabinTest(n, s):
    // INPUT
    //   n = the number to test for primality
    //   s = the number of base values to test
    // OUTPUT
    //   COMPOSITE = certainty that n is composite
    //   PRIME = n is almost surely prime

    for j <- 1 to s:
        a <- RANDOM(1, n-1)
        if WITNESS(a, n):
            return COMPOSITE // certainty
    return PRIME // almost surely
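A self-contained Python version of the whole test, with a standard textbook implementation of WITNESS inlined (function names are our own):

```python
import random

def witness(a: int, n: int) -> bool:
    """Return True if a proves that the odd number n > 2 is composite."""
    u, t = n - 1, 0
    while u % 2 == 0:        # write n - 1 = 2^t * u with u odd
        u //= 2
        t += 1
    x = pow(a, u, n)
    for _ in range(t):
        prev, x = x, (x * x) % n
        if x == 1 and prev not in (1, n - 1):
            return True      # non-trivial square root of 1: n is composite
    return x != 1            # Fermat check: a^(n-1) ≢ 1 (mod n)

def miller_rabin(n: int, s: int) -> str:
    """COMPOSITE is certain; PRIME means prime with error at most 2^(-s)."""
    for _ in range(s):
        a = random.randint(1, n - 1)
        if witness(a, n):
            return "COMPOSITE"  # certainty
    return "PRIME"              # almost surely

print(miller_rabin(561, 30))     # COMPOSITE -- 561 is a Carmichael number
print(miller_rabin(104729, 10))  # PRIME -- the 10,000th prime
```

Note that 561 = 3 · 11 · 17 fools the base-2 pseudoprimality test, but the search for non-trivial square roots exposes it here.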
Whenever one of the a values leads to the conclusion that n is composite, the result is always correct.
Suppose that no witness to the fact that n is composite can be found in s trials. In this case, the MILLER-RABIN function assumes that this happens because there are no witnesses to find, and that therefore n is prime.
If s is large enough, the primality conclusion of the MILLER-RABIN function is very likely to be correct. However, there’s a small chance that the procedure will give a false positive if the choice of the s base values is unfortunate, that is, if some witness exists even though none has been found.
For a β-bit number n, the MILLER-RABIN function requires O(sβ) arithmetic operations and O(sβ³) bit operations.
There are two reasons why the Miller-Rabin test is preferable to the pseudoprimality test:
- Because it also checks for non-trivial square roots of 1, it is strictly stronger: it is not fooled by Carmichael numbers, the composite numbers that satisfy a^(n-1) ≡ 1 (mod n) for every a coprime to n.
- When n is composite, witnesses are abundant, so a few random trials find one with high probability.
Theory justifies the second point by the following theorem:
If n is an odd composite number, then the number of witnesses to the fact that n is composite is at least (n - 1)/2.
This result ensures the improbability of not finding witnesses if n is composite.
A precise estimate of the error rate of the Miller-Rabin test is provided by the following theorem:
For any odd integer n > 2 and any positive integer s, the probability that the Miller-Rabin test gives an error is at most 2^(-s).
The proof of this result is immediate if we analyze the pseudocode above. If n is composite, each iteration of the for loop discovers a witness with probability at least 1/2, by the previous theorem. The probability that s independent iterations all fail to find one is therefore at most 2^(-s).
A value of s = 50 is sufficient for virtually every conceivable application, but for some uses, it can be much lower. For example, if we want to find large primes by applying the Miller-Rabin test to randomly chosen large integers, then it can be shown that a false positive is unlikely even with only s = 3. For non-randomly chosen numbers, the probability of error increases.
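As an illustrative sketch of that use case, the following Python draws random odd 64-bit integers until one passes a Miller-Rabin test with a small s (all helper names are our own; WITNESS follows the standard textbook construction):

```python
import random

def witness(a: int, n: int) -> bool:
    """Return True if a proves that the odd number n > 2 is composite."""
    u, t = n - 1, 0
    while u % 2 == 0:        # write n - 1 = 2^t * u with u odd
        u //= 2
        t += 1
    x = pow(a, u, n)
    for _ in range(t):
        prev, x = x, (x * x) % n
        if x == 1 and prev not in (1, n - 1):
            return True
    return x != 1

def is_probably_prime(n: int, s: int = 3) -> bool:
    """Miller-Rabin with s random bases; s = 3 suffices for random candidates."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return not any(witness(random.randint(1, n - 1), n) for _ in range(s))

def random_prime(bits: int) -> int:
    """Draw random odd integers of the given bit length until one passes."""
    while True:
        # Force the top bit (full length) and the bottom bit (odd)
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probably_prime(candidate):
            return candidate

print(random_prime(64))
```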
In this tutorial, we took a brief tour of the principles of some primality tests.
Given the non-polynomiality of the test by trial division, alternative procedures are necessary, of which the simplest is the pseudoprimality test. While quite effective, it produces an error rate that may be unacceptable for many applications.
This leads to more elaborate methods, such as the Miller-Rabin test.