This section describes two methods for checking the primality of an
integer $n$, one with order of growth
$\Theta(\sqrt{n})$, and a
probabilistic algorithm with order of growth
$\Theta(\log n)$. The exercises at the end of
this section suggest programming projects based on these algorithms.
Since ancient times, mathematicians have been fascinated by problems
concerning prime numbers, and many people have worked on the problem
of determining ways to test if numbers are prime. One way
to test if a number is prime is to find the number's divisors. The
following program finds the smallest integral divisor (greater than 1)
of a given number $n$. It does this in a
straightforward way, by testing $n$ for
divisibility by successive integers starting with 2.
function smallest_divisor(n) {
    return find_divisor(n, 2);
}
function find_divisor(n, test_divisor) {
    return square(test_divisor) > n
           ? n
           : divides(test_divisor, n)
           ? test_divisor
           : find_divisor(n, test_divisor + 1);
}
function divides(a, b) {
    return b % a === 0;
}
We can test whether a number is prime as follows:
$n$ is prime if and only if
$n$ is its own smallest divisor.
The end test in find_divisor is based on the fact that if
$n$ is not prime, it must have a divisor less than or equal to
$\sqrt{n}$: if $d$ divides $n$, then so does
$n/d$, and $d$ and $n/d$ cannot both be greater than
$\sqrt{n}$. This means that the algorithm need only test divisors between 1 and
$\sqrt{n}$. Consequently, the number of steps
required to identify $n$ as prime will have order
of growth $\Theta(\sqrt{n})$.
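The check just described can be written directly. A minimal sketch, restating the section's smallest_divisor program (and square, from section 1.1.4) so that it runs on its own:

```javascript
// Helpers from the section; square is restated here for self-containment.
function square(x) { return x * x; }
function divides(a, b) { return b % a === 0; }
function find_divisor(n, test_divisor) {
    return square(test_divisor) > n
           ? n
           : divides(test_divisor, n)
           ? test_divisor
           : find_divisor(n, test_divisor + 1);
}
function smallest_divisor(n) {
    return find_divisor(n, 2);
}

// n is prime if and only if n is its own smallest divisor.
function is_prime(n) {
    return n === smallest_divisor(n);
}
```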
Fermat's Little Theorem:
If $n$ is a prime number and
$a$ is any positive integer less than
$n$, then $a$ raised
to the $n$th power is congruent to
$a$ modulo $n$.
(Two numbers are said to be
congruent modulo
$n$ if they both have the same remainder when
divided by $n$. The remainder of a number
$a$ when divided by
$n$ is also referred to as the
remainder of $a$ modulo
$n$, or simply as $a$
modulo $n$.)
If $n$ is not prime, then, in general, most of
the numbers $a < n$ will not satisfy the above
relation. This leads to the following algorithm for testing primality:
Given a number $n$, pick a
random number $a < n$ and compute the
remainder of $a^n$ modulo
$n$. If the result is not equal to
$a$, then $n$ is
certainly not prime. If it is $a$, then chances
are good that $n$ is prime. Now pick another
random number $a$ and test it with the same
method. If it also satisfies the equation, then we can be even more
confident that $n$ is prime. By trying more and
more values of $a$, we can increase our
confidence in the result. This algorithm is known as the Fermat test.
To implement the Fermat test, we need a
function
that computes the
exponential of a number modulo another number:
function expmod(base, exp, m) {
    return exp === 0
           ? 1
           : is_even(exp)
           ? square(expmod(base, exp / 2, m)) % m
           : (base * expmod(base, exp - 1, m)) % m;
}
This is very similar to the
fast_expt
function
of section 1.2.4. It uses successive
squaring, so that the number of steps grows logarithmically with the
exponent.
The Fermat test is performed by choosing at random a number
$a$ between 1 and
$n-1$ inclusive and checking whether the remainder
modulo $n$ of the
$n$th power of $a$ is
equal to $a$. The random number
$a$ is chosen using the
primitive function
math_random,
which returns a nonnegative number less than 1. Hence, to obtain
a random number between 1 and $n-1$, we multiply
the return value of math_random by
$n-1$, round down the result with the
primitive function
math_floor,
and add 1:
function fermat_test(n) {
    function try_it(a) {
        return expmod(a, n, n) === a;
    }
    return try_it(1 + math_floor(math_random() * (n - 1)));
}
The following
function
runs the test a given number of times, as specified by a parameter. Its
value is true if the test succeeds every time, and false otherwise.
function fast_is_prime(n, times) {
    return times === 0
           ? true
           : fermat_test(n)
           ? fast_is_prime(n, times - 1)
           : false;
}
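Outside the Source language environment, math_random and math_floor are not predeclared. A sketch of the complete Fermat tester in plain JavaScript, with Math.random and Math.floor as stand-ins for those primitives:

```javascript
function square(x) { return x * x; }
function is_even(n) { return n % 2 === 0; }

function expmod(base, exp, m) {
    return exp === 0
           ? 1
           : is_even(exp)
           ? square(expmod(base, exp / 2, m)) % m
           : (base * expmod(base, exp - 1, m)) % m;
}

function fermat_test(n) {
    function try_it(a) {
        return expmod(a, n, n) === a;
    }
    // Math.random/Math.floor stand in for math_random/math_floor.
    return try_it(1 + Math.floor(Math.random() * (n - 1)));
}

function fast_is_prime(n, times) {
    return times === 0
           ? true
           : fermat_test(n)
           ? fast_is_prime(n, times - 1)
           : false;
}
```

For a prime $n$, every choice of $a$ satisfies the test, so fast_is_prime(97, 10) is always true; for most composites, a handful of trials suffices to expose them.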
Probabilistic methods
The Fermat test differs in character from most familiar algorithms, in which
one computes an answer that is guaranteed to be correct. Here, the answer
obtained is only probably correct. More precisely, if
$n$ ever fails the Fermat test, we can be certain
that $n$ is not prime. But the fact that
$n$ passes the test, while an extremely strong
indication, is still not a guarantee that $n$ is
prime. What we would like to say is that for any number
$n$, if we perform the test enough times and find
that $n$ always passes the test, then the
probability of error in our primality test can be made as small as we like.
Unfortunately, this assertion is not quite correct. There do exist numbers
that fool the Fermat test: numbers $n$ that are
not prime and yet have the property that $a^n$ is
congruent to $a$ modulo
$n$ for all integers
$a < n$. Such numbers are extremely rare, so
the Fermat test is quite reliable in practice.
There are variations of the Fermat test that cannot be fooled. In these
tests, as with the Fermat method, one tests the primality of an integer
$n$ by choosing a random integer
$a < n$ and checking some condition that
depends upon $n$ and
$a$. (See
exercise 1.28 for an example of such a test.)
On the other hand, in contrast to the Fermat test, one can prove that, for
any $n$, the condition does not hold for most of
the integers $a < n$ unless
$n$ is prime. Thus, if
$n$ passes the test for some random choice
of $a$, the chances are better than even
that $n$ is prime. If
$n$ passes the test for two random choices of
$a$, the chances are better than 3 out of 4 that
$n$ is prime. By running the test with more and
more randomly chosen values of $a$ we can make
the probability of error as small as we like.
The existence of tests for which one can prove that the chance of error
becomes arbitrarily small has sparked interest in algorithms of this type,
which have come to be known as probabilistic algorithms. There is
a great deal of research activity in this area, and probabilistic algorithms
have been fruitfully applied to many fields.
Assume a primitive function
get_time of no arguments
that returns the number of milliseconds that have passed since 00:00:00 UTC
on Thursday, 1 January, 1970.
The following
timed_prime_test
function,
when called with an integer $n$, prints
$n$ and checks to see if
$n$ is prime. If $n$
is prime, the
function
prints three asterisks followed by the amount of time used in
performing the test.
Using this
function,
write a
function
search_for_primes
that checks the primality of consecutive odd integers in a specified range.
Use your
function
to find the three smallest primes larger than 1000; larger than 10,000;
larger than 100,000; larger than 1,000,000. Note the time needed to test
each prime. Since the testing algorithm has order of growth of
$\Theta(\sqrt{n})$, you should expect that testing
for primes around 10,000 should take about
$\sqrt{10}$ times as long as testing for primes
around 1000. Do your timing data bear this out? How well do the data for
100,000 and 1,000,000 support the $\sqrt{n}$
prediction? Is your result compatible with the notion that programs on
your machine run in time proportional to the number of steps required for
the computation?
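One possible shape for search_for_primes, as a sketch in plain JavaScript rather than the book's style: it returns the primes it finds instead of timing them (the timed_prime_test wrapper is omitted so the sketch stays self-contained), and it restates is_prime from the start of the section.

```javascript
function square(x) { return x * x; }
function divides(a, b) { return b % a === 0; }
function find_divisor(n, test_divisor) {
    return square(test_divisor) > n
           ? n
           : divides(test_divisor, n)
           ? test_divisor
           : find_divisor(n, test_divisor + 1);
}
function is_prime(n) {
    return n === find_divisor(n, 2);
}

// Collect the first `count` primes larger than `start`,
// examining odd candidates only.
function search_for_primes(start, count) {
    const primes = [];
    let n = start % 2 === 0 ? start + 1 : start + 2;
    while (primes.length < count) {
        if (is_prime(n)) {
            primes.push(n);
        }
        n = n + 2;
    }
    return primes;
}
```

For instance, search_for_primes(1000, 3) yields the three smallest primes larger than 1000.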
The
smallest_divisor
function
shown at the start of this section does lots of needless testing: After it
checks to see if the number is divisible by 2 there is no point in checking
to see if it is divisible by any larger even numbers. This suggests that
the values used for
test_divisor
should not be 2, 3, 4, 5, 6, … but rather 2, 3, 5, 7, 9,
…. To implement this change,
declare a function
next that returns 3 if its input is equal to 2
and otherwise returns its input plus 2. Modify the
smallest_divisor
function
to use
next(test_divisor)
instead of
test_divisor + 1.
With
timed_prime_test
incorporating this modified version of
smallest_divisor,
run the test for each of the 12 primes found in
exercise 1.22.
Since this modification halves the number of test steps, you should expect
it to run about twice as fast. Is this expectation confirmed? If not, what
is the observed ratio of the speeds of the two algorithms, and how do you
explain the fact that it is different from 2?
function next(input) {
    return input === 2
           ? 3
           : input + 2;
}
function find_divisor(n, test_divisor) {
    return square(test_divisor) > n
           ? n
           : divides(test_divisor, n)
           ? test_divisor
           : find_divisor(n, next(test_divisor));
}
The observed ratio of the speeds of the two algorithms is not exactly 2: the modified version runs only about 1.5 times faster than the original. A plausible explanation is that the change does not halve all of the work: each step now calls next, which performs an extra comparison and a function call, whereas test_divisor + 1 is a single primitive operation, so the cost per step goes up even as the number of steps goes down.
Modify the
timed_prime_test
function
of exercise 1.22 to use
fast_is_prime
(the Fermat method), and test each of the 12 primes you found in that
exercise. Since the Fermat test has
$\Theta(\log n)$ growth, how would you expect
the time to test primes near 1,000,000 to compare with the time needed to
test primes near 1000? Do your data bear this out? Can you explain any
discrepancy you find?
function timed_prime_test(n) {
    display(n);
    return start_prime_test(n, get_time());
}
function start_prime_test(n, start_time) {
    return fast_is_prime(n, math_floor(math_log(n)))
           ? report_prime(get_time() - start_time)
           : true;
}
function report_prime(elapsed_time) {
    display(" *** ");
    display(elapsed_time);
}
The time to test primes near 1,000,000 using fast_is_prime
is about 4 ms, four times the time needed to test primes near 1,000.
This is faster than the roughly 8 ms we measured with is_prime.
Since $\log 1{,}000{,}000 / \log 1000 = 2$, the
$\Theta(\log n)$ growth predicts a factor of about 2 rather than 4.
This discrepancy alone should not lead us to conclude that the growth
is greater than $\Theta(\log n)$: for numbers this small, constant
overheads dominate, and the test should be run on much larger inputs
to gain an accurate understanding of the growth of the function.
Alyssa P. Hacker complains that we went to a lot of extra work in writing
expmod. After all, she says, since we already
know how to compute exponentials, we could have simply written
function expmod(base, exp, m) {
    return fast_expt(base, exp) % m;
}
Is she correct?
Would this
function
serve as well for our fast prime tester? Explain.
At first sight, Alyssa's suggestion is correct: her
expmod function computes
$\textit{base}^{\textit{exp}}$ and
then finds its remainder modulo $m$, as
required in the Fermat test.
However, for large bases, Alyssa's method will quickly bump into
limitations because JavaScript uses 64 bits to represent numbers,
following the double-precision floating point standard. When the
numbers become so large that they cannot be represented precisely
any longer in this standard, the results become unreliable. Even
worse, the method might exceed the largest number that can be
represented in this standard, and the computation leads to an
error.
For small bases, however, Alyssa's method may be even faster than
the original expmod function,
because it will carry out only one single remainder operation.
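The precision limitation can be made concrete. A sketch comparing the two approaches, with fast_expt as in section 1.2.4 and alyssa_expmod as an illustrative name for her version:

```javascript
function square(x) { return x * x; }
function is_even(n) { return n % 2 === 0; }

function fast_expt(b, n) {
    return n === 0
           ? 1
           : is_even(n)
           ? square(fast_expt(b, n / 2))
           : b * fast_expt(b, n - 1);
}

// Alyssa's version: exponentiate first, reduce afterwards.
function alyssa_expmod(base, exp, m) {
    return fast_expt(base, exp) % m;
}

// The section's version: reduce modulo m at every step.
function expmod(base, exp, m) {
    return exp === 0
           ? 1
           : is_even(exp)
           ? square(expmod(base, exp / 2, m)) % m
           : (base * expmod(base, exp - 1, m)) % m;
}
```

Here expmod(3, 1000, 5) is 1, since $3^4 \equiv 1 \pmod 5$ and 4 divides 1000; Alyssa's version first computes $3^{1000}$, which overflows the double-precision range to Infinity, so the final remainder is NaN. Even before outright overflow, results become unreliable once intermediate values exceed $2^{53}$.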
Louis Reasoner is having great difficulty doing
exercise 1.24.
His
fast_is_prime
test seems to run more slowly than his
is_prime
test. Louis calls his friend Eva Lu Ator over to help. When they examine
Louis's code, they find that he has rewritten the
expmod
function
to use an explicit multiplication, rather than calling
square:
function expmod(base, exp, m) {
    return exp === 0
           ? 1
           : is_even(exp)
           ? ( expmod(base, exp / 2, m)
             * expmod(base, exp / 2, m)) % m
           : (base * expmod(base, exp - 1, m)) % m;
}
"I don't see what difference that could make," says Louis.
"I do," says Eva. "By writing the
function
like that, you have transformed the
$\Theta(\log n)$ process into a
$\Theta(n)$ process." Explain.
Eva is correct: by evaluating the expression:
(expmod(base, exp / 2, m) * expmod(base, exp / 2, m)) % m
the subexpression
expmod(base, exp / 2, m)
is evaluated twice whenever the exponent is even. Halving the exponent
therefore no longer halves the work: each halving doubles the number of
recursive calls, so the total number of calls grows linearly with the
exponent, and the $\Theta(\log n)$ process becomes a
$\Theta(n)$ process.
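Eva's claim can be checked by counting calls. A sketch instrumenting both versions with call counters (louis_expmod is an illustrative name for the rewritten version):

```javascript
function square(x) { return x * x; }
function is_even(n) { return n % 2 === 0; }

let fast_calls = 0;
function expmod(base, exp, m) {
    fast_calls = fast_calls + 1;
    return exp === 0
           ? 1
           : is_even(exp)
           ? square(expmod(base, exp / 2, m)) % m
           : (base * expmod(base, exp - 1, m)) % m;
}

let slow_calls = 0;
function louis_expmod(base, exp, m) {
    slow_calls = slow_calls + 1;
    return exp === 0
           ? 1
           : is_even(exp)
           ? ( louis_expmod(base, exp / 2, m)
             * louis_expmod(base, exp / 2, m)) % m
           : (base * louis_expmod(base, exp - 1, m)) % m;
}
```

For exp = 1024 both versions return the same result, but the original makes 12 calls (one per halving) while Louis's makes $3 \cdot 1024 - 1 = 3071$: the recurrence $T(e) = 2T(e/2) + 1$ for even exponents makes the call count linear in the exponent.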
Demonstrate that the
Carmichael numbers listed in
footnote 4 really do fool the Fermat
test. That is, write a
function
that takes an integer $n$ and tests whether
$a^n$ is congruent to
$a$ modulo $n$ for
every $a < n$, and try your
function
on the given Carmichael numbers.
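A sketch of such a function (fools_fermat_test is an illustrative name), using the section's expmod:

```javascript
function square(x) { return x * x; }
function is_even(n) { return n % 2 === 0; }
function expmod(base, exp, m) {
    return exp === 0
           ? 1
           : is_even(exp)
           ? square(expmod(base, exp / 2, m)) % m
           : (base * expmod(base, exp - 1, m)) % m;
}

// True when a^n is congruent to a modulo n for every a < n.
function fools_fermat_test(n) {
    function try_all(a) {
        return a >= n
               ? true
               : expmod(a, n, n) === a && try_all(a + 1);
    }
    return try_all(1);
}
```

Applied to the Carmichael numbers listed in footnote 4 (561, 1105, 1729, 2465, 2821, and 6601), it returns true for each, even though all of them are composite.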
One variant of the Fermat test that cannot be fooled is called the
Miller–Rabin test (Miller 1976;
Rabin 1980). This starts from
an alternate form of
Fermat's Little Theorem, which states that if
$n$ is a prime number and
$a$ is any positive integer less than
$n$, then $a$ raised
to the $(n-1)$st power is congruent to 1
modulo $n$. To test the primality of a
number $n$ by the Miller–Rabin test, we pick a
random number $a < n$ and raise
$a$ to the $(n-1)$st
power modulo $n$ using the
expmod
function.
However, whenever we perform the squaring step in
expmod, we check to see if we have discovered a
nontrivial square root of 1
modulo $n$,
that is, a number not equal to 1 or $n-1$ whose
square is equal to 1 modulo $n$. It is
possible to prove that if such a nontrivial square root of 1 exists, then
$n$ is not prime. It is also possible to prove
that if $n$ is an odd number that is not prime,
then, for at least half the numbers $a < n$,
computing $a^{n-1}$ in this way will reveal a
nontrivial square root of 1 modulo $n$.
(This is why the Miller–Rabin test cannot be fooled.) Modify the
expmod
function
to signal if it discovers a nontrivial square root of 1, and use this to
implement the Miller–Rabin test with a
function
analogous to
fermat_test.
Check your
function
by testing various known primes and non-primes. Hint: One convenient way to
make expmod signal is to have it return 0.
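A sketch along these lines (signaling_square and miller_rabin_is_prime are illustrative names; Math.random and Math.floor stand in for the Source primitives):

```javascript
function is_even(n) { return n % 2 === 0; }

// Squaring step that signals (by returning 0, as the hint suggests)
// when it finds a nontrivial square root of 1 modulo m.
function signaling_square(x, m) {
    return x !== 1 && x !== m - 1 && (x * x) % m === 1
           ? 0
           : (x * x) % m;
}

function expmod(base, exp, m) {
    return exp === 0
           ? 1
           : is_even(exp)
           ? signaling_square(expmod(base, exp / 2, m), m)
           : (base * expmod(base, exp - 1, m)) % m;
}

function miller_rabin_test(n) {
    function try_it(a) {
        return expmod(a, n - 1, n) === 1;
    }
    return try_it(1 + Math.floor(Math.random() * (n - 1)));
}

function miller_rabin_is_prime(n, times) {
    return times === 0
           ? true
           : miller_rabin_test(n)
           ? miller_rabin_is_prime(n, times - 1)
           : false;
}
```

The returned 0 propagates through the remaining squaring and multiplication steps, so a discovered nontrivial square root makes the final result differ from 1. For example, with $a = 2$ and $n = 561$ the squaring chain reaches $2^{140} \equiv 67 \pmod{561}$ with $67^2 \equiv 1 \pmod{561}$, a nontrivial square root of 1, so even this Carmichael number is exposed.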
Pierre
de Fermat (1601–1665) is considered to be
the founder of modern
number theory. He obtained many important number-theoretic results,
but he usually announced just the results, without providing his proofs.
Fermat's Little Theorem was stated in a letter he wrote in 1640.
The first published proof was given by
Euler in 1736 (and an
earlier, identical proof was discovered in the unpublished manuscripts
of
Leibniz). The most famous of Fermat's results—known as
Fermat's Last Theorem—was jotted down in 1637 in his copy of
the book Arithmetic (by the third-century Greek mathematician
Diophantus) with the remark "I have discovered a truly remarkable
proof, but this margin is too small to contain it." Finding a proof
of Fermat's Last Theorem became one of the most famous challenges in
number theory. A complete
solution was finally given in 1995 by
Andrew Wiles of Princeton
University.
The reduction steps in the cases where the exponent
$e$ is greater than 1 are based on the fact that,
for any integers $x$,
$y$, and $m$, we can
find the remainder of $x$ times
$y$ modulo $m$ by
computing separately the remainders of $x$ modulo
$m$ and $y$ modulo
$m$, multiplying these, and then taking the
remainder of the result modulo $m$. For
instance, in the case where $e$ is even, we
compute the remainder of $b^{e/2}$ modulo
$m$, square this, and take the remainder modulo
$m$. This technique is useful because it means
we can perform our computation without ever having to deal with numbers much
larger than $m$. (Compare
exercise 1.25.)
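The fact this footnote relies on can be checked directly. A tiny sketch (times_mod is an illustrative name, not part of the text's program):

```javascript
// (x * y) mod m computed via the remainders of x and y modulo m,
// so the intermediate product never exceeds (m - 1)^2.
function times_mod(x, y, m) {
    return ((x % m) * (y % m)) % m;
}
```

This is exactly why expmod never deals with numbers much larger than $m$: as long as $(m-1)^2$ stays below $2^{53}$, every intermediate product is computed exactly.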
Numbers that fool the
Fermat test are called
Carmichael numbers, and little is known
about them other than that they are extremely rare. There are 255
Carmichael numbers below 100,000,000. The smallest few are 561, 1105,
1729, 2465, 2821, and 6601. In testing primality of very large
numbers chosen at random, the chance of stumbling upon a value that
fools the Fermat test is less than the chance that
cosmic radiation will cause the computer to make an error in carrying out a
correct algorithm. Considering an algorithm to be inadequate
for the first reason but not for the second illustrates the difference
between
mathematics and engineering.
One of the most
striking applications of
probabilistic prime testing has been to the field of
cryptography.
Although it is computationally infeasible to factor an arbitrary 300-digit
number as of this writing (2021), the primality of such a number can be checked
in a few seconds with the Fermat test.
This fact forms the basis of a technique for constructing
unbreakable codes suggested by
Rivest,
Shamir, and
Adleman (1977). The resulting
RSA algorithm has become a widely used technique for enhancing the
security of electronic communications. Because of this and related
developments, the study of
prime numbers, once considered the epitome of a topic in pure
mathematics to be studied only for its own sake, now turns out to have
important practical applications to cryptography, electronic funds transfer,
and information retrieval.
The primitive
function display returns its
argument, but also prints it. Here
"***"
is a
string, a sequence of characters that we pass as argument
to the display function.
Section 2.3.1 introduces strings more
thoroughly.