(notes by Roberto Bigoni)
As we know from our early arithmetic studies, prime numbers are the natural numbers greater than 1 that are not the product of two other natural numbers, both less than the number itself.
This definition implies that 1 is not a prime number, and that the only even prime number is 2.
Prime numbers are infinite, that is, the set of prime numbers has no maximum. A proof by contradiction of this theorem is due to Euclid.
If the prime numbers were not infinite, there would exist a maximum prime number M, and it would be possible to calculate the number P = 2·3·5·7·11·13·…·M, equal to the product of all the prime numbers.
Let us consider the natural number S immediately following P: S=P+1.
S cannot be a prime number, because it is greater than the maximum prime M.
But if we divide S by any prime number, we get a quotient given by the product of the remaining prime numbers and a remainder of 1; then S is not divisible by any of the prime numbers, so it must be prime.
We get two contradictory statements and we must therefore conclude that the primes are infinite.
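Euclid's argument can be checked numerically for a small list of primes; the following sketch (the function name is illustrative) verifies that the product of the listed primes plus 1 leaves remainder 1 on division by each of them:

```javascript
// Numerical check of Euclid's argument: P = product of the given
// primes, S = P + 1; then S leaves remainder 1 when divided by
// each prime in the list.
function euclidCheck(primes) {
  var P = primes.reduce(function (acc, p) { return acc * p; }, 1);
  var S = P + 1;
  return primes.every(function (p) { return S % p === 1; });
}
```

For the primes up to 13 we get S = 30031, not divisible by 2, 3, 5, 7, 11 or 13; note that S itself is not prime (30031 = 59·509), but its prime factors exceed 13, which is all the proof needs.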
Natural numbers greater than 1 that are not prime are called composite numbers.
The easiest method to get the set P_{n} of the primes ≤ n (n>13) is the Sieve of Eratosthenes (sieve: a tool used for separating coarse from fine parts of loose matter):
- build an array of n elements, all equal to 0 except the first one, which must be equal to the first prime number 2; 0 is the code for the numbers not yet processed;
- for each prime p found, all the elements in positions multiple of p are set equal to 1; 1 is the code for composite numbers;
- the candidate p' in the position following that of p, that is the position of the next prime number, is temporarily p+1;
- as long as in position p' there is 1, p' is composite and is increased by 1;
- when in position p' there is 0, p' is a prime number;
- the process is repeated until the scan of the n numbers is completed.
Here is a possible implementation in Javascript of the proposed procedure, easily translatable into other languages:
function sieve(n) { // array of the prime numbers less than n
    var s = new Array(n); // if n is very large, the available memory may be insufficient
    var i;
    for (i = 0; i < n; i++) s[i] = 0; // all the elements of s are initially 0
    s[0] = 2; // the first element of s is 2
    var ix = 2, p = 0;
    while (ix < n) {
        for (i = ix * ix; i < n; i += ix) s[i] = 1; // the positions of the multiples of s[ix] are set to 1: these elements are composite
        p++;
        s[p] = s[p - 1] + 1; // in s[p] there is temporarily the number following the last prime number found
        while (s[p] < n && s[s[p]]) s[p]++; // as long as the element in position s[p] is composite (marked 1), increase it
        ix = s[p]; // ix: the last prime number found
    }
    s.length = p; // truncate the array s to the actual number of primes found
    return s;
}
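As a cross-check of the implementation above, the same set of primes can be obtained with a more conventional sieve that keeps the markers in a separate boolean array (a sketch; the name sieveBool is illustrative):

```javascript
// Conventional Sieve of Eratosthenes with a boolean array of markers:
// composite[i] is true when i has been recognized as composite.
function sieveBool(n) {
  var composite = new Array(n).fill(false);
  var primes = [];
  for (var i = 2; i < n; i++) {
    if (!composite[i]) {
      primes.push(i); // i was not marked by any smaller prime
      for (var j = i * i; j < n; j += i) composite[j] = true;
    }
  }
  return primes;
}
```

Both versions give, for example, the ten primes less than 30: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.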
To check whether an odd number n is prime, that is, to check its primality, we can build the set P_{n} and check whether n ∈ P_{n}.
This procedure, however, is unnecessarily costly, since a possible divisor of n cannot exceed its square root. Therefore, it is enough to build the set of the primes not exceeding √n and check whether n is divisible by any of its elements.
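The square-root bound can also be applied directly, without building P_{n} at all, by trial division; a minimal sketch:

```javascript
// Primality check by trial division: a nontrivial divisor of n,
// if any exists, cannot exceed the square root of n.
function isPrimeTrial(n) {
  if (n < 2) return false;
  if (n % 2 === 0) return n === 2; // 2 is the only even prime
  for (var d = 3; d * d <= n; d += 2) {
    if (n % d === 0) return false; // d divides n: n is composite
  }
  return true;
}
```

This already works well for moderate n, but for a very big n the number of candidate divisors up to √n becomes prohibitive, which motivates the tests discussed below.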
But if n is very large, the sieve becomes impractical, because it may require too much memory and too much computation time. Primality tests that do not require the previous calculation of a huge number of primes have been sought, and are still sought today.
One such test may be derived from a factorization method due to P. de Fermat.
Given the odd number n, Fermat's factorization method is based on the search for two numbers h and k such that
n = h·k
If n is a perfect square, the solution is immediate, because in this case n is equal to the product of its square root by itself and, of course, n is not prime.
If n is not too big it is convenient to apply the Sieve of Eratosthenes.
Otherwise we can always express hk as a difference of two squares:
hk = ((h+k)/2)^{2} - ((h-k)/2)^{2}
(since n is odd, h and k are both odd, so (h+k)/2 and (h-k)/2 are natural numbers); then, with
a = (h+k)/2 ; b = (h-k)/2
we get
n = a^{2} - b^{2}
The problem is solved if we find a number a such that a^{2}-n is a perfect square.
For this purpose we proceed by trials: starting from a = ⌈√n⌉, we compute a^{2}-n; if the result is a perfect square b^{2}, we stop; otherwise we increase a by 1 and try again.
Given a and b, we have
n = a^{2} - b^{2} = (a+b)(a-b)
If a-b = 1, n is prime; otherwise the two factors a+b and a-b (that is, k and h) are both different from 1 and less than n, so n is composite.
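The trial procedure can be sketched as follows (floating-point square roots are adequate for moderate n; for very big numbers an integer square root would be needed):

```javascript
// Fermat's factorization of an odd n: look for a such that a^2 - n
// is a perfect square b^2; then n = (a + b)(a - b) = k * h.
function fermatFactor(n) {
  var a = Math.ceil(Math.sqrt(n));
  while (true) {
    var b2 = a * a - n;
    var b = Math.round(Math.sqrt(b2));
    if (b * b === b2) return [a - b, a + b]; // h = a - b, k = a + b
    a++; // a^2 - n is not a square: try the next a
  }
}
```

For example, fermatFactor(91) gives [7, 13], and fermatFactor(561) gives [17, 33].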
Fermat's factorization method requires that we repeatedly check whether a natural number is a perfect square. This property cannot be read directly from n. However, in many cases we can rule it out quickly, noting that, for a suitable modulus m, the possible remainders of perfect squares modulo m are quite few.
For example, the remainders modulo 8 of the squares of the first 100 natural numbers, computed with WolframAlpha, are only 0, 1 and 4.
Even without a formal proof, it can safely be conjectured that, calculating the remainder of the division of a natural number by 8, if the remainder is not equal to 0, 1 or 4, the number is not a perfect square.
A further example is provided by the set of the quadratic residues modulo 9, which contains only 0, 1, 4 and 7.
Even in this case we can safely conjecture that, calculating the remainder of the division of a number by 9, if the remainder is not equal to 0, 1, 4 or 7, the number is not a perfect square.
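The two remainder tests can be combined into a quick filter; a sketch (the function name is illustrative):

```javascript
// Quick non-square filter: a perfect square can only leave
// remainder 0, 1 or 4 modulo 8, and 0, 1, 4 or 7 modulo 9.
// Passing the filter is necessary, not sufficient, to be a square.
function maybeSquare(n) {
  return [0, 1, 4].indexOf(n % 8) >= 0 &&
         [0, 1, 4, 7].indexOf(n % 9) >= 0;
}
```

For example 40402 is rejected at once (40402 ≡ 2 mod 8), while 52 passes both filters without being a perfect square: the filter only narrows the search.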
But passing a test of this kind is a necessary, not a sufficient, condition for n to be a perfect square: it is a useful check to narrow down the search for possible perfect squares, but not a conclusive one. To positively identify a perfect square we inevitably need a more expensive procedure, like the following bisection algorithm.
We consider the sequence of the natural numbers q_{i} that, starting from 4, is formed by numbers such that each of them is the square of the previous one: 4, 16, 256, 65536, … We stop at the first q_{i} whose square exceeds n; then the integer square root of n lies between 0 and q_{i}, an interval that can be repeatedly halved, at each step squaring the midpoint and comparing the result with n, until the integer square root is isolated; n is a perfect square exactly when this root, squared, gives back n.
Example.
Let n = 40401
We have 40401≡1 (mod 8) and 40401≡0 (mod 9), so n passes the two proposed preliminary tests. Now we apply the bisection algorithm:
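The bisection can be sketched as follows; the initial upper bound comes from the squaring sequence 4, 16, 256, … described above:

```javascript
// Bisection test for perfect squares: isolate the integer square
// root of n by repeatedly halving an interval [low, high] with
// low^2 <= n < high^2, then check whether it squares back to n.
function isqrtBisect(n) {
  var high = 4;
  while (high * high <= n) high = high * high; // 4, 16, 256, ... until high^2 > n
  var low = 0;
  while (high - low > 1) {
    var mid = Math.floor((low + high) / 2);
    if (mid * mid <= n) low = mid; else high = mid; // keep low^2 <= n < high^2
  }
  return low; // the largest integer whose square does not exceed n
}
```

Here isqrtBisect(40401) returns 201 and 201·201 = 40401, so 40401 is a perfect square.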
A more direct method is also due to Fermat, and is based on a theorem stated by Fermat himself but subsequently proved by L. Euler, known as Fermat's little theorem.
If p is a prime number, then, for any integer n,
n^{p} ≡ n (mod p)
where n^{p} ≡ n(mod p) means that the divisions by p of n^{p} and n give equal remainders. We say that n^{p} and n are congruent modulo p.
For example: given n=10 and p=3, we have 10^{3} = 1000 and 1000 ≡ 1 (mod 3), just as 10 ≡ 1 (mod 3).
A proof of the theorem may be obtained by induction.
First of all we observe that, if p is prime,
(n+1)^{p} ≡ n^{p} + 1 (mod p)   (1)
In fact, if we expand the power of the binomial, we get
(n+1)^{p} = n^{p} + C(p,1)·n^{p-1} + C(p,2)·n^{p-2} + … + C(p,p-1)·n + 1
where each binomial coefficient C(p,k) = p!/(k!·(p-k)!), with 0 < k < p, is divisible by p, because the factor p in the numerator cannot be cancelled by the factors of the denominator, all less than p; so all the middle terms vanish modulo p.
The theorem holds for n=0 and n=1:
0^{p} ≡ 0 (mod p) ; 1^{p} ≡ 1 (mod p)
If the theorem holds for n, that is n^{p} ≡ n (mod p), from the equation (1) we get
(n+1)^{p} ≡ n^{p} + 1 ≡ n + 1 (mod p)
Then, by induction, the theorem holds for all n.
In particular, if n has no common divisors with p, that is, it is coprime with respect to p, we can divide both sides of the congruence by n and we have
n^{p-1} ≡ 1 (mod p)
The theorem states a necessary, but not sufficient, condition: every prime p satisfies the stated congruence, but the congruence may be satisfied also by some composite number c. Such numbers are called Fermat pseudoprimes with respect to the base n. In particular, the composite numbers c that are pseudoprimes with respect to every n coprime with c are called Carmichael numbers.
So, if a number q does not satisfy the theorem with respect to a coprime n, we can say that q is composite; but if it satisfies the congruence, we cannot say that it is prime, only that it is a probable prime.
If we try with many random values of n, always with positive results, we can operationally assume that q is a prime number.
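The test can be sketched in Javascript with fast (square-and-multiply) modular exponentiation; native BigInt arithmetic is used here instead of the BigInt library mentioned at the end of these notes:

```javascript
// Fermat primality test: q is a probable prime with respect to the
// base n if n^(q-1) ≡ 1 (mod q). modPow reduces modulo q at every
// step, so intermediate products never exceed q^2.
function modPow(base, exp, mod) { // BigInt arguments
  var result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp % 2n === 1n) result = (result * base) % mod;
    base = (base * base) % mod; // square
    exp /= 2n;                  // BigInt division truncates
  }
  return result;
}

function fermatTest(q, n) { // true: q is a probable prime to base n
  return modPow(n, q - 1n, q) === 1n;
}
```

For example fermatTest(41n, 2n) is true, fermatTest(91n, 2n) is false, and fermatTest(561n, 2n) is true even though 561 is composite.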
Examples.
1. Let q=41.
With n=2 we have 2^{40} ≡ 1 (mod 41): q passes the test.
The probability that q is not a prime number is quite small. Indeed 41 is prime, as we can check with the Sieve of Eratosthenes.
2. Let q=91.
With n=3 we have 3^{90} ≡ 1 (mod 91), so q passes the test; but with n=2 we have 2^{90} ≡ 64 (mod 91), so q is composite: indeed 91 = 7·13. The number 91 is a Fermat pseudoprime with respect to the base 3.
3. Let q=561.
If we apply Fermat's factorization method, we get 561 = 17·33. So 561 is composite.
Now we try the Fermat primality test: with n=2 we have 2^{560} ≡ 1 (mod 561), so 561 passes the test even though it is composite; indeed 561 = 3·11·17 is the smallest Carmichael number.
The Fermat primality test, even with several trials, can fail, that is, it can lead us to consider as prime a number that is instead composite. We can reduce this risk, and estimate the probability of getting the correct answer, by noting that a prime number q>2 must be odd, so q-1 must be even. Moreover, every even number can be decomposed into the product of an odd number d and a power of 2: q-1 = d·2^{s}.
By Fermat's theorem, if q is prime and n is coprime with respect to q, we have n^{q-1} = n^{d·2^{s}} ≡ 1 (mod q). Moreover, if q is prime, the only square roots of 1 modulo q are 1 and -1, so the square root n^{d·2^{s-1}} of n^{d·2^{s}} must be ≡ ±1 (mod q).
If this root is ≡ 1 (mod q), then the next one, n^{d·2^{s-2}}, will also be ≡ ±1 (mod q), and so on down to n^{d}.
If all the roots are ≡ 1 (mod q), so that even n^{d} ≡ 1 (mod q), q passes the test and is probably prime.
If the first root not ≡ 1 (mod q) is ≡ -1 (mod q), q passes the test and is probably prime.
Otherwise q is composite.
In conclusion, to determine whether q is probably prime:
- decompose q-1 into the product d·2^{s}, with d odd;
- compute n^{d} (mod q): if it is ≡ ±1 (mod q), q is probably prime;
- otherwise square the result repeatedly, at most s-1 times: if -1 (mod q) appears, q is probably prime;
- if neither case occurs, q is composite.
The probability that a composite number q passes the test is at most ¼. Therefore, by repeating the test with other values of n, the probability that q can pass them all decreases exponentially.
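The whole procedure (decompose q-1 = d·2^{s}, then follow the chain of square roots) can be sketched as a single Miller-Rabin round for a given base n, again with native BigInt:

```javascript
// One Miller-Rabin round for base n: write q - 1 = d * 2^s with d
// odd, compute n^d mod q, then square at most s - 1 times looking
// for the root -1. Returns true when q is a probable prime.
function modPow(base, exp, mod) { // BigInt square-and-multiply
  var result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp % 2n === 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp /= 2n;
  }
  return result;
}

function millerRabinRound(q, n) { // q odd, q > 2
  var d = q - 1n, s = 0n;
  while (d % 2n === 0n) { d /= 2n; s++; } // q - 1 = d * 2^s, d odd
  var x = modPow(n, d, q);
  if (x === 1n || x === q - 1n) return true; // probable prime
  for (var i = 1n; i < s; i++) {
    x = (x * x) % q;
    if (x === q - 1n) return true; // found the root -1: probable prime
  }
  return false; // q is certainly composite
}
```

Here millerRabinRound(561n, 2n) is false, so a single round exposes the Carmichael number that the plain Fermat test misses, while millerRabinRound(401n, 2n) and millerRabinRound(601n, 2n) are true.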
Examples.
1. As we have seen the number q=561, is a Fermat pseudoprime, therefore, while being composite, it passes the Fermat test.
We decompose 560 into the product between an odd number and a power of 2: 560=35·2^{4}
With n=2, we have 2^{35}=34359738368; 2^{35}≡ 263 (mod 561)
35·2^{3}=280;
2^{280}=1942668892225729070919461906823518906642406839052139521251812409738904285205208498176
2^{280}≡1 (mod 561)
35·2^{2}=140;
2^{140}=1393796574908163946345982392040522594123776
2^{140}≡67 (mod 561)
2^{140} ≡ 67 is a square root of 2^{280} ≡ 1, but it is neither 1 nor -1 (mod 561): so q is composite.
2. Let q=601
With n=2, we have
2^{600}=41495155688809929585124078636911611510124462322424368999956573296906528114129081463997070489471037942881978866113007891823951510754117753078868748341139636870611818034015095236853760
2^{600}≡1 (mod 601): the number passes the Fermat test.
600=75·2^{3}
With n=2 we have
2^{75}=37778931862957161709568; 2^{75}≡ 1 (mod 601): 601 is a probable prime.
3. Let q=401
With n=2, we have
2^{400}=2582249878086908589655919172003011874329705792829223512830659356540647622016841194629645353280137831435903171972747493376
2^{400}≡1 (mod 401): 401 passes the Fermat test.
400=25·2^{4}
With n=2 we have
2^{25}=33554432; 2^{25}≡ 356 (mod 401)
2^{200}=1606938044258990275541962092341162602522202993782792835301376; 2^{200}≡ 1 (mod 401);
2^{100}=1267650600228229401496703205376; 2^{100}≡ -1 (mod 401): 401 is a probable prime.
A natural number n greater than 1 is either prime or composite.
Using a reliable test like that of Miller-Rabin, one can directly decide whether it is prime and therefore whether its factors are only 1 and n.
If n is composite, we can look for its factors by dividing n by the primes obtained with the sieve of Eratosthenes.
However, if n is very big, even √n is big, and so is the number of the primes not exceeding √n, the calculation of which may require impractical time.
If we try the Fermat method and it succeeds, n can be expressed as the product of two natural numbers h and k, both less than n. In their turn, each of the numbers h and k is either prime or composite. If they are both prime, their product is the decomposition of n into prime factors (or factorization of n). Otherwise, we can apply Fermat's method to the composite factors and repeat the process until only prime factors remain. If at the end of the procedure a factor appears several times, the product of the identical factors is replaced by a power, and the product of these powers (with exponent ≥ 1) is the unique decomposition of n into a product of powers of prime factors.
Fermat's method works satisfactorily for numbers that are not too big or, even for big numbers, when the two factors h and k are both close to √n. Otherwise, it may take an impractical computing time even on the fastest systems.
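The recursive scheme just described can be sketched by combining a primality check (trial division here, for simplicity) with Fermat splitting; the helper names are illustrative:

```javascript
// Full factorization by repeated Fermat splitting: split n into
// h * k, recurse on composite factors, collect the prime ones.
function isPrime(n) { // trial division up to the square root
  if (n < 2) return false;
  if (n % 2 === 0) return n === 2;
  for (var d = 3; d * d <= n; d += 2) if (n % d === 0) return false;
  return true;
}

function fermatSplit(n) { // n odd and composite: returns [h, k]
  var a = Math.ceil(Math.sqrt(n));
  while (true) {
    var b2 = a * a - n, b = Math.round(Math.sqrt(b2));
    if (b * b === b2) return [a - b, a + b];
    a++;
  }
}

function factorize(n) { // sorted prime factors of n, with repetition
  if (n === 1) return [];
  if (n % 2 === 0) return [2].concat(factorize(n / 2)); // make n odd first
  if (isPrime(n)) return [n];
  var hk = fermatSplit(n);
  return factorize(hk[0]).concat(factorize(hk[1]))
                         .sort(function (x, y) { return x - y; });
}
```

For example factorize(561) gives [3, 11, 17]: the first split yields 17·33, and 33 is split in its turn into 3·11.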
To overcome these difficulties, other methods of factoring have been proposed, such as, for example, the following algorithm due to John M. Pollard.
Let h be a nontrivial divisor of n.
If, given s_{0} = a and the function f(s) = mod(s^{2}+2, n), we construct the sequence S where s_{i+1} = f( s_{i} ), then, after at most h+1 steps, we find two indices j and k (j < k) such that s_{j} ≡ s_{k} (mod h), and from then on the elements of the sequence, taken modulo h, repeat cyclically.
Representing the elements of S as nodes of a graph, it takes on a shape that recalls the Greek letter ρ (Latin transcription 'rho', corresponding to our lowercase r), hence the name of the algorithm.
For example, given n = 51, h = 3 is a divisor of n. If we let s_{0} = 2, we have
i | s_{i} | f( s_{i} ) |
0 | 2 | 6 |
1 | 6 | 38 |
2 | 38 | 18 |
3 | 18 | 20 |
4 | 20 | 45 |
5 | 45 | 38 |
6 | 38 | 18 |
7 | 18 | 20 |
8 | 20 | 45 |
9 | 45 | 38 |
s_{2} = s_{6}, then s_{3} = s_{7} and so on.
If i ≠ j, s_{j} > s_{i} and s_{j} ≡ s_{i} (mod h), we have:
s_{j} - s_{i} ≡ 0 (mod h)
therefore h divides both s_{j} - s_{i} and n, so
GCD ( s_{j} - s_{i} , n ) ≥ h
that is, the GCD is a nontrivial divisor of n.
If, continuing the proposed example, we construct a table of the absolute values of the differences d_{i,j} = |s_{i} - s_{j}| between all the calculated s_{k}, we observe that, for i ≠ j, only in some cases GCD(d_{i,j},n) is greater than 1, and that in these cases GCD = 3, that is h.
2 | 6 | 38 | 18 | 20 | 45 | 38 | 18 | 20 | 45 | |
2 | 0 | 4 | 36 | 16 | 18 | 43 | 36 | 16 | 18 | 43 |
6 | 4 | 0 | 32 | 12 | 14 | 39 | 32 | 12 | 14 | 39 |
38 | 36 | 32 | 0 | 20 | 18 | 7 | 0 | 20 | 18 | 7 |
18 | 16 | 12 | 20 | 0 | 2 | 27 | 20 | 0 | 2 | 27 |
20 | 18 | 14 | 18 | 2 | 0 | 25 | 18 | 2 | 0 | 25 |
45 | 43 | 39 | 7 | 27 | 25 | 0 | 7 | 27 | 25 | 0 |
38 | 36 | 32 | 0 | 20 | 18 | 7 | 0 | 20 | 18 | 7 |
18 | 16 | 12 | 20 | 0 | 2 | 27 | 20 | 0 | 2 | 27 |
20 | 18 | 14 | 18 | 2 | 0 | 25 | 18 | 2 | 0 | 25 |
45 | 43 | 39 | 7 | 27 | 25 | 0 | 7 | 27 | 25 | 0 |
So, if we choose at random s_{i} and s_{j} with i ≠ j and we find GCD(d_{i,j},n) > 1, then GCD(d_{i,j},n) is a nontrivial divisor of n, here h = 3.
We can avoid random attempts by matching the terms of the sequence S with those of the sequence T in which, given t_{0} = s_{0} = 2, t_{i+1} = f( f( t_{i} ) ).
In the proposed example:
i | s_{i} | t_{i} | | t_{i} - s_{i} | |
0 | 2 | 2 | 0 |
1 | 6 | 38 | 32 |
2 | 38 | 20 | 18 |
3 | 18 | 38 | 20 |
4 | 20 | 20 | 0 |
5 | 45 | 38 | 7 |
6 | 38 | 20 | 18 |
7 | 18 | 38 | 20 |
8 | 20 | 20 | 0 |
9 | 45 | 38 | 7 |
Obviously the set of the terms of the sequence T is a subset of the set of the terms of S, so the difference between a term of T and a term of S is also a difference between two terms of S. Matching the terms of the two sequences and calculating the GCD between n and the absolute value of the difference t_{i} - s_{i}, if the GCD is greater than 1 (and less than n), it is a divisor of n, that is h.
We can then calculate k = n / h.
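The matching of S and T can be sketched as follows (plain Number arithmetic, adequate only for small n; the iteration s^{2}+2 is the one used above):

```javascript
// Pollard's rho with the two sequences S and T: s advances one step
// per iteration, t two steps; a GCD of |t - s| and n that is greater
// than 1 and less than n is a nontrivial divisor of n.
function gcd(a, b) { // Euclid's algorithm
  while (b !== 0) { var r = a % b; a = b; b = r; }
  return a;
}

function pollardRho(n, a) { // a: the starting value s_0
  function f(s) { return (s * s + 2) % n; }
  var s = a, t = a;
  while (true) {
    s = f(s);    // s_{i+1} = f(s_i)
    t = f(f(t)); // t_{i+1} = f(f(t_i))
    var g = gcd(Math.abs(t - s), n);
    if (g > 1 && g < n) return g; // nontrivial divisor found
    if (g === n) return null;     // failure: try another s_0 or another f
  }
}
```

With the data of the example, pollardRho(51, 2) returns h = 3, and then k = 51 / 3 = 17.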
The applications of this page are implemented in Javascript and, even enhanced by the BigInt library of Leemon Baird, they are inevitably too slow to handle very big numbers. If we have to decompose very big numbers, we can try to use WolframAlpha.
It should however be noted that, even using the most advanced software and specially designed hardware, the factorization of a very big number may require unworkable computational time and this fact is the basis of the most efficient encryption methods, such as RSA.
last revision May 2018