Consider the problem of computing the exponential of a given number.
We would like a
function
that takes as arguments a base $b$ and a positive
integer exponent $n$ and
computes $b^n$. One way to do this is via
the recursive definition
function expt(b, n) {
    return n === 0
           ? 1
           : b * expt(b, n - 1);
}
This is a linear recursive process, which requires
$\Theta(n)$ steps and
$\Theta(n)$ space. Just as with factorial, we
can readily formulate an equivalent linear iteration:
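Such an iterative version can be sketched along the same lines as the iterative factorial function (the helper name expt_iter is chosen for illustration):

```javascript
function expt(b, n) {
    return expt_iter(b, n, 1);
}
// accumulate the product, multiplying by b a total of n times
function expt_iter(b, counter, product) {
    return counter === 0
           ? product
           : expt_iter(b, counter - 1, b * product);
}
```

This version requires $\Theta(n)$ steps but only $\Theta(1)$ space.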
We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing $b^8$ as seven successive multiplications by $b$, we can compute it using only three multiplications: $b^2 = b\cdot b$, $b^4 = (b^2)^2$, and $b^8 = (b^4)^2$. This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the rule
$b^n = (b^{n/2})^2$ if $n$ is even,
$b^n = b\cdot b^{n-1}$ if $n$ is odd.
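In terms of the language used so far, this gives the following sketch of fast_expt (the helpers square and is_even are assumed, defined here in the obvious way):

```javascript
function square(x) {
    return x * x;
}
function is_even(n) {
    return n % 2 === 0;
}
function fast_expt(b, n) {
    return n === 0
           ? 1
           : is_even(n)
           ? square(fast_expt(b, n / 2)) // halve the exponent, then square
           : b * fast_expt(b, n - 1);    // peel off one factor of b
}
```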
The process evolved by
fast_expt
grows logarithmically with $n$ in both space and
number of steps. To see this, observe that computing
$b^{2n}$ using
fast_expt
requires only one more multiplication than computing
$b^n$. The size of the exponent we can compute
therefore doubles (approximately) with every new multiplication we are
allowed. Thus, the number of multiplications required for an exponent of
$n$ grows about as fast as the logarithm of
$n$ to the base 2. The process has
$\Theta(\log n)$ growth.
The difference between $\Theta(\log n)$ growth
and $\Theta(n)$ growth becomes striking as
$n$ becomes large. For example,
fast_expt
for $n=1000$ requires only 14
multiplications.
It is also possible to use the idea of successive squaring to devise an
iterative algorithm that computes exponentials with a logarithmic number of
steps (see exercise 1.16), although, as is
often the case with iterative algorithms, it is not written down as
straightforwardly as the recursive algorithm.
Design a
function
that evolves an iterative exponentiation process that uses successive
squaring and uses a logarithmic number of steps, as does
fast_expt.
(Hint: Using the observation that
$(b^{n/2})^2 =(b^2)^{n/2}$, keep, along with the
exponent $n$ and the base
$b$, an additional state variable
$a$, and define the state transformation in such
a way that the product $a b^n$ is unchanged from
state to state. At the beginning of the process
$a$ is taken to be 1, and the answer is given by
the value of $a$ at the end of the process. In
general, the technique of defining an
invariant quantity that remains unchanged from state to state is a
powerful way to think about the
design of
iterative algorithms.)
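Following the hint, one possible solution keeps the product $a b^n$ invariant from state to state (a sketch; the helper names are illustrative):

```javascript
function is_even(n) {
    return n % 2 === 0;
}
function square(x) {
    return x * x;
}
// invariant: a * b^n is equal to the original b^n throughout
function fast_expt_iter(a, b, n) {
    return n === 0
           ? a
           : is_even(n)
           ? fast_expt_iter(a, square(b), n / 2) // a * (b^2)^(n/2)
           : fast_expt_iter(a * b, b, n - 1);    // (a*b) * b^(n-1)
}
function fast_expt(b, n) {
    return fast_expt_iter(1, b, n);
}
```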
The exponentiation algorithms in this section are based on performing
exponentiation by means of repeated multiplication. In a similar way,
one can perform integer multiplication by means of repeated addition.
The following multiplication
function
(in which it is assumed that our language can only add, not multiply) is
analogous to the expt
function:
function times(a, b) {
    return b === 0
           ? 0
           : a + times(a, b - 1);
}
This algorithm takes a number of steps that is linear in
b. Now suppose we include, together with
addition,
the functions
double, which doubles an
integer, and halve, which divides an (even)
integer by 2. Using these, design a multiplication
function
analogous to
fast_expt
that uses a logarithmic number of steps.
function double(x) {
    return x + x;
}
function halve(x) {
    return x / 2;
}
function fast_times(a, b) {
    return b === 1
           ? a
           : a === 0 || b === 0
           ? 0
           : is_even(b)
           ? double(fast_times(a, halve(b)))
           : a + fast_times(a, b - 1);
}
Using the results of exercises 1.16
and 1.17, devise a
function
that generates an iterative process for multiplying two integers in terms
of adding, doubling, and halving and uses a logarithmic number of
steps.
function double(x) {
    return x + x;
}
function half(x) {
    return x / 2;
}
function fast_times_iter(total, a, b) {
    return b === 1
           ? total + a
           : a === 0 || b === 0
           ? 0
           : is_even(b)
           ? fast_times_iter(total, double(a), half(b))
           : fast_times_iter(total + a, a, b - 1);
}
function times(a, b) {
    return fast_times_iter(0, a, b);
}
There is a clever algorithm for computing the Fibonacci numbers in a
logarithmic number of steps. Recall the transformation of the state
variables $a$ and
$b$ in the
fib_iter
process of section 1.2.2:
$a\leftarrow a+b$ and
$b\leftarrow a$. Call this transformation
$T$, and observe that applying
$T$ over
and over again $n$ times, starting with 1 and 0,
produces the pair $\textrm{Fib}(n+1)$ and
$\textrm{Fib}(n)$. In other words, the
Fibonacci numbers are produced by applying
$T^n$, the $n$th
power of the transformation $T$, starting with
the pair $(1,0)$. Now consider
$T$ to be the special case of
$p=0$ and $q=1$ in
a family of transformations $T_{pq}$, where
$T_{pq}$ transforms the pair
$(a,b)$ according to
$a\leftarrow bq+aq+ap$ and
$b\leftarrow bp+aq$. Show that if we apply such
a transformation $T_{pq}$ twice, the effect is
the same as using a single transformation
$T_{p'q'}$ of the same form, and compute
$p'$ and $q'$ in
terms of $p$
and $q$. This gives us an explicit way
to square these transformations, and thus we can compute
$T^n$ using successive squaring, as in the
fast_expt
function.
Put this all together to complete the following
function,
which runs in a logarithmic number of steps:
function fib(n) {
    return fib_iter(1, 0, 0, 1, n);
}
function fib_iter(a, b, p, q, count) {
    return count === 0
           ? b
           : is_even(count)
           ? fib_iter(a,
                      b,
                      ⟨??⟩, // compute p'
                      ⟨??⟩, // compute q'
                      count / 2)
           : fib_iter(b * q + a * q + a * p,
                      b * p + a * q,
                      p,
                      q,
                      count - 1);
}
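To see where the entries for $p'$ and $q'$ come from, we can apply $T_{pq}$ twice and collect terms (a worked derivation, not part of the original exercise statement). One application gives $a_1 = bq + aq + ap$ and $b_1 = bp + aq$; a second application then gives

$b_2 = b_1 p + a_1 q = (bp + aq)p + (bq + aq + ap)q = b(p^2 + q^2) + a(q^2 + 2pq)$.

Matching this against the required form $b_2 = bp' + aq'$ yields $p' = p^2 + q^2$ and $q' = q^2 + 2pq$, and expanding $a_2$ confirms that the same values satisfy $a_2 = bq' + aq' + ap'$.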
function fib(n) {
    return fib_iter(1, 0, 0, 1, n);
}
function fib_iter(a, b, p, q, count) {
    return count === 0
           ? b
           : is_even(count)
           ? fib_iter(a,
                      b,
                      p * p + q * q,     // compute p'
                      2 * p * q + q * q, // compute q'
                      count / 2)
           : fib_iter(b * q + a * q + a * p,
                      b * p + a * q,
                      p,
                      q,
                      count - 1);
}
More precisely,
the number of multiplications required is equal to 1 less than the log
base 2 of $n$, plus the number of ones in the
binary representation of $n$. For example, for $n=1000$ we have
$\lfloor\log_2 1000\rfloor = 9$, and the binary representation
$1111101000$ contains six ones, giving $9 - 1 + 6 = 14$ multiplications.
This total is always less than twice the log base 2 of $n$.
The arbitrary constants $k_1$ and
$k_2$ in the definition of order notation imply
that, for a logarithmic process, the base to which logarithms are taken does
not matter, so all such processes are described as
$\Theta(\log n)$.
This iterative
algorithm is ancient. It appears in the
Chandah-sutra by
Áchárya Pingala, written before 200 BCE.
See
Knuth 1997b, section 4.6.3, for a full discussion
and analysis of this and other methods of exponentiation.
This
algorithm, which is sometimes known as the
Russian peasant
method of multiplication, is ancient. Examples of its use are
found in the
Rhind Papyrus, one of the two oldest mathematical documents in existence,
written about 1700 BCE (and copied from an even
older document) by an Egyptian scribe named
A'h-mose.