The above examples demonstrate how the ability to pass functions as arguments significantly enhances the expressive power of our programming language. We can achieve even more expressive power by creating functions whose returned values are themselves functions.
We can illustrate this idea by looking again at the fixed-point example
described at the end of
section 1.3.3. We formulated a new
version of the square-root
function
as a fixed-point search, starting with the observation that
$\sqrt{x}$ is a fixed-point of the function
$y\mapsto x/y$. Then we used average damping to
make the approximations converge. Average damping is a useful general
technique in itself. Namely, given a
function $f$, we consider the function
whose value at $x$ is equal to the average of
$x$ and $f(x)$.
We can express the idea of average damping by means of the following
function:
function average_damp(f) {
    return x => average(x, f(x));
}
The function average_damp takes as its argument a function f and returns as its value a function (produced by the lambda expression) that, when applied to a number x, produces the average of x and f(x). For example, applying average_damp to the square function produces a function whose value at some number $x$ is the average of $x$ and $x^2$. Applying this resulting function to 10 returns the average of 10 and 100, or 55:
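Assuming the average and square functions from earlier sections are in scope, the computation just described runs directly:

```javascript
function average(x, y) { return (x + y) / 2; }
function square(x) { return x * x; }

function average_damp(f) {
    return x => average(x, f(x));
}

// the average of 10 and square(10) = 100
average_damp(square)(10);   // 55
```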
Using average_damp, we can reformulate the square-root function as follows:
function sqrt(x) {
    return fixed_point(average_damp(y => x / y), 1);
}
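To make this definition self-contained, here is a sketch with a minimal fixed_point in the style of section 1.3.3 (the tolerance value is an assumption) together with the helpers it uses:

```javascript
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    // iterate f until successive guesses are within tolerance
    function try_with(guess) {
        const next = f(guess);
        return Math.abs(guess - next) < tolerance
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}

function average(x, y) { return (x + y) / 2; }

function average_damp(f) {
    return x => average(x, f(x));
}

function sqrt(x) {
    return fixed_point(average_damp(y => x / y), 1);
}

sqrt(9);   // approximately 3
```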
Notice how this formulation makes explicit the three ideas in the method:
fixed-point search, average damping, and the function
$y\mapsto x/y$. It is instructive to compare
this formulation of the square-root method with the original version given
in section 1.1.7. Bear in mind that these functions express the same process, and notice how much clearer the idea becomes when we express the process in terms of these abstractions. In general, there are many ways to formulate a process as a function. Experienced programmers know how to choose process formulations that are particularly perspicuous, and where useful elements of the process are exposed as separate entities that can be reused in other applications. As a simple example of reuse, notice that the cube root of $x$ is a fixed point of the function $y\mapsto x/y^2$, so we can immediately generalize our square-root function to one that extracts cube roots:
function cube_root(x) {
    return fixed_point(average_damp(y => x / square(y)), 1);
}
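As a check, the generalized function converges as expected; a self-contained sketch (the fixed_point tolerance is an assumption):

```javascript
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    function try_with(guess) {
        const next = f(guess);
        return Math.abs(guess - next) < tolerance ? next : try_with(next);
    }
    return try_with(first_guess);
}

function average_damp(f) { return x => (x + f(x)) / 2; }
function square(x) { return x * x; }

function cube_root(x) {
    return fixed_point(average_damp(y => x / square(y)), 1);
}

cube_root(27);   // approximately 3
```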
Newton's method
When we first introduced the square-root function, in section 1.1.7, we mentioned that this was a special case of Newton's method. If $x\mapsto g(x)$ is a differentiable function, then a solution of the equation $g(x)=0$ is a fixed point of the function $x\mapsto f(x)$, where
$f(x) = x - \frac{g(x)}{Dg(x)}$
and $Dg(x)$ is the derivative of $g$ evaluated at $x$.
Newton's method is the use of the fixed-point method we saw above to
approximate a solution of the equation by finding a fixed point of the
function $f$.
For many functions $g$ and for sufficiently good
initial guesses for $x$, Newton's method
converges very rapidly to a solution of
$g(x)=0$.
In order to implement Newton's method as a function, we must first express the idea of derivative. Note that derivative, like average damping, is something that transforms a function into another function. For instance, the derivative of the function $x\mapsto x^3$ is the function $x \mapsto 3x^2$. In general, if $g$ is a function and $dx$ is a small number, then the derivative $Dg$ of $g$ is the function whose value at any number $x$ is given (in the limit of small $dx$) by
$Dg(x) = \frac{g(x+dx) - g(x)}{dx}$
Thus, we can express the idea of derivative (taking things in the limit of small $dx$) as the function
function deriv(g) {
    return x => (g(x + dx) - g(x)) / dx;
}
along with the
declaration
const dx = 0.00001;
Like average_damp, deriv is a function that takes a function as argument and returns a function as value. For example, to approximate the derivative of $x \mapsto x^3$ at 5 (whose exact value is 75) we can evaluate
function cube(x) { return x * x * x; }
deriv(cube)(5);
75.00014999664018
With the aid of deriv, we can express
Newton's method as a fixed-point process:
function newton_transform(g) {
    return x => x - g(x) / deriv(g)(x);
}
function newtons_method(g, guess) {
    return fixed_point(newton_transform(g), guess);
}
The newton_transform function expresses the formula at the beginning of this section, and newtons_method is readily defined in terms of this. It takes as arguments a function that computes the function for which we want to find a zero, together with an initial guess. For instance, to find the square root of $x$, we can use Newton's method to find a zero of the function $y\mapsto y^2-x$ starting with an initial guess of 1. This provides yet another form of the square-root function:
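The declaration described here, bundled with the earlier deriv, newton_transform, and newtons_method (plus a minimal fixed_point whose tolerance is an assumption) so it can run on its own:

```javascript
const dx = 0.00001;
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    function try_with(guess) {
        const next = f(guess);
        return Math.abs(guess - next) < tolerance ? next : try_with(next);
    }
    return try_with(first_guess);
}

function deriv(g) {
    return x => (g(x + dx) - g(x)) / dx;
}
function newton_transform(g) {
    return x => x - g(x) / deriv(g)(x);
}
function newtons_method(g, guess) {
    return fixed_point(newton_transform(g), guess);
}

function square(x) { return x * x; }

function sqrt(x) {
    return newtons_method(y => square(y) - x, 1);
}

sqrt(4);   // approximately 2
```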
We've seen two ways to express the square-root computation as an
instance of a more general method, once as a fixed-point search and once
using Newton's method. Since Newton's method was itself
expressed as a fixed-point process, we actually saw two ways to compute
square roots as fixed points. Each method begins with a function and finds a
fixed point of some transformation of the function. We can express this
general idea itself as a
function:
function fixed_point_of_transform(g, transform, guess) {
    return fixed_point(transform(g), guess);
}
This very general function takes as its arguments a function g that computes some function, a function that transforms g, and an initial guess. The returned result is a fixed point of the transformed function.
Using this abstraction, we can recast the first square-root computation
from this section (where we look for a fixed point of the average-damped
version of $y \mapsto x/y$) as an instance of
this general method:
function sqrt(x) {
    return fixed_point_of_transform(
               y => x / y,
               average_damp,
               1);
}
Similarly, we can express the second square-root computation from this
section (an instance of
Newton's method that finds a fixed point of
the Newton transform of $y\mapsto y^2-x$) as
function sqrt(x) {
    return fixed_point_of_transform(
               y => square(y) - x,
               newton_transform,
               1);
}
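The two formulations can be exercised side by side. In this sketch the versions are given the hypothetical names sqrt_damped and sqrt_newton so they can coexist, and the helper definitions from earlier in the section are repeated so it runs standalone:

```javascript
const dx = 0.00001;
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    function try_with(guess) {
        const next = f(guess);
        return Math.abs(guess - next) < tolerance ? next : try_with(next);
    }
    return try_with(first_guess);
}

function average_damp(f) { return x => (x + f(x)) / 2; }
function deriv(g) { return x => (g(x + dx) - g(x)) / dx; }
function newton_transform(g) { return x => x - g(x) / deriv(g)(x); }
function square(x) { return x * x; }

function fixed_point_of_transform(g, transform, guess) {
    return fixed_point(transform(g), guess);
}

function sqrt_damped(x) {
    return fixed_point_of_transform(y => x / y, average_damp, 1);
}
function sqrt_newton(x) {
    return fixed_point_of_transform(y => square(y) - x, newton_transform, 1);
}

sqrt_damped(2);   // approximately 1.41421
sqrt_newton(2);   // approximately 1.41421
```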
We began section 1.3 with the observation that compound functions are a crucial abstraction mechanism, because they permit us to express general methods of computing as explicit elements in our programming language. Now we've seen how higher-order functions permit us to manipulate these general methods to create further abstractions. As programmers, we should be alert to opportunities to identify the underlying abstractions in our programs and to build upon them and generalize them to create more powerful abstractions. This is not to say that one should always write programs in the most abstract way possible; expert programmers know how to choose the level of abstraction appropriate to their task. But it is important to be able to think in terms of these abstractions, so that we can be ready to apply them in new contexts. The significance of higher-order functions is that they enable us to represent these abstractions explicitly as elements in our programming language, so that they can be handled just like other computational elements.
In general, programming languages impose restrictions on the ways in which computational elements can be manipulated. Elements with the fewest restrictions are said to have first-class status. Some of the rights and privileges of first-class elements are that they may be referred to using names, passed as arguments to functions, returned as the results of functions, and included in data structures.
JavaScript, like other high-level programming languages, awards functions full first-class status. This poses challenges for efficient implementation, but the resulting gain in expressive power is enormous.
Declare a function double that takes a function of one argument as argument and returns a function that applies the original function twice. For example, if inc is a function that adds 1 to its argument, then double(inc) should be a function that adds 2. What value is returned by
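A sketch of one possible double, with inc as an illustrative increment function; the classic expression to puzzle over here is double(double(double))(inc)(5), which ends up applying inc 16 times:

```javascript
function double(f) {
    return x => f(f(x));
}

function inc(x) { return x + 1; }

double(inc)(3);                    // 5

// double(double) applies its argument 4 times, so
// double(double(double)) applies its argument 16 times:
double(double(double))(inc)(5);    // 21
```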
Let $f$ and $g$ be two one-argument functions. The composition $f$ after $g$ is defined to be the function $x\mapsto f(g(x))$. Declare a function compose that implements composition. For example, if inc is a function that adds 1 to its argument,
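One possible compose, with square and inc as illustrative helpers; compose(square, inc)(6) computes square(inc(6)):

```javascript
function compose(f, g) {
    return x => f(g(x));
}

function square(x) { return x * x; }
function inc(x) { return x + 1; }

compose(square, inc)(6);   // 49
```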
If $f$ is a numerical function and $n$ is a positive integer, then we can form the $n$th repeated application of $f$, which is defined to be the function whose value at $x$ is $f(f(\ldots(f(x))\ldots))$. For example, if $f$ is the function $x \mapsto x+1$, then the $n$th repeated application of $f$ is the function $x \mapsto x+n$. If $f$ is the operation of squaring a number, then the $n$th repeated application of $f$ is the function that raises its argument to the $2^n$th power. Write a function that takes as inputs a function that computes $f$ and a positive integer $n$ and returns the function that computes the $n$th repeated application of $f$. Your function should be able to be used as follows:
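One way the requested function might look, built on a compose like that of exercise 1.42; repeated(square, 2) squares twice:

```javascript
function compose(f, g) {
    return x => f(g(x));
}

function repeated(f, n) {
    // n is assumed to be a positive integer
    return n === 1
           ? f
           : compose(f, repeated(f, n - 1));
}

function square(x) { return x * x; }

repeated(square, 2)(5);   // 625
```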
The idea of smoothing a function is an important concept in signal processing. If $f$ is a function and $dx$ is some small number, then the smoothed version of $f$ is the function whose value at a point $x$ is the average of $f(x-dx)$, $f(x)$, and $f(x+dx)$. Write a function smooth that takes as input a function that computes $f$ and returns a function that computes the smoothed $f$. It is sometimes valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to obtain the $n$-fold smoothed function. Show how to generate the $n$-fold smoothed function of any given function using smooth and repeated from exercise 1.43.
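A sketch of smooth and the $n$-fold smoothing it asks for, using a repeated like that of exercise 1.43 (the name n_fold_smooth and the dx value are assumptions):

```javascript
const dx = 0.00001;

function smooth(f) {
    // average of f at x - dx, x, and x + dx
    return x => (f(x - dx) + f(x) + f(x + dx)) / 3;
}

function compose(f, g) { return x => f(g(x)); }
function repeated(f, n) {
    return n === 1 ? f : compose(f, repeated(f, n - 1));
}

function n_fold_smooth(f, n) {
    return repeated(smooth, n)(f);
}

// smoothing the identity function leaves it essentially unchanged
n_fold_smooth(x => x, 3)(2);   // approximately 2
```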
We saw in section 1.3.3 that attempting to compute square roots by naively finding a fixed point of $y\mapsto x/y$ does not converge, and that this can be fixed by average damping. The same method works for finding cube roots as fixed points of the average-damped $y\mapsto x/y^2$. Unfortunately, the process does not work for fourth roots: a single average damp is not enough to make a fixed-point search for $y\mapsto x/y^3$ converge. On the other hand, if we average-damp twice (i.e., use the average damp of the average damp of $y\mapsto x/y^3$) the fixed-point search does converge. Do some experiments to determine how many average damps are required to compute $n$th roots as a fixed-point search based upon repeated average damping of $y\mapsto x/y^{n-1}$. Use this to implement a simple function for computing $n$th roots using fixed_point, average_damp, and the repeated function of exercise 1.43. Assume that any arithmetic operations you need are available as primitives.
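Experiments along these lines suggest that $\lfloor\log_2 n\rfloor$ average damps suffice (this count is an assumption to be verified by your own experiments); a sketch built from the pieces named in the exercise:

```javascript
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    function try_with(guess) {
        const next = f(guess);
        return Math.abs(guess - next) < tolerance ? next : try_with(next);
    }
    return try_with(first_guess);
}

function average_damp(f) { return x => (x + f(x)) / 2; }
function compose(f, g) { return x => f(g(x)); }
function repeated(f, n) {
    return n === 1 ? f : compose(f, repeated(f, n - 1));
}

function nth_root(x, n) {
    const damps = Math.floor(Math.log2(n));   // assumed sufficient; check experimentally
    return fixed_point(repeated(average_damp, damps)(y => x / Math.pow(y, n - 1)),
                       1);
}

nth_root(32, 5);   // approximately 2
```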
Several of the numerical methods described in this chapter are instances of an extremely general computational strategy known as iterative improvement. Iterative improvement says that, to compute something, we start with an initial guess for the answer, test if the guess is good enough, and otherwise improve the guess and continue the process using the improved guess as the new guess. Write a function iterative_improve that takes two functions as arguments: a method for telling whether a guess is good enough and a method for improving a guess. The function iterative_improve should return as its value a function that takes a guess as argument and keeps improving the guess until it is good enough. Rewrite the sqrt function of section 1.1.7 and the fixed_point function of section 1.3.3 in terms of iterative_improve.
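One way the requested function might look, with sqrt rewritten in terms of it (the good-enough tolerance is an assumption):

```javascript
function iterative_improve(good_enough, improve) {
    return first_guess => {
        function iterate(guess) {
            return good_enough(guess)
                   ? guess
                   : iterate(improve(guess));
        }
        return iterate(first_guess);
    };
}

function sqrt(x) {
    return iterative_improve(
        guess => Math.abs(guess * guess - x) < 0.00001,   // good enough?
        guess => (guess + x / guess) / 2)(1);             // average damping
}

sqrt(4);   // approximately 2
```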
Observe that this
is an application whose function expression is itself
an application. Exercise 1.4 already
demonstrated the ability to form such applications, but that was only a toy
example. Here we begin to see the real need for such
applications—when applying a function
that is obtained as the value returned by a higher-order function.
Elementary calculus books
usually describe Newton's method in terms of the sequence of
approximations $x_{n+1}=x_n-g(x_n)/Dg(x_n)$.
Having language for talking about processes and using the idea of fixed
points simplifies the description of the method.
Newton's method does not
always converge to an answer, but it can be shown that in favorable cases
each iteration doubles the number-of-digits accuracy of the approximation
to the solution. In such cases,
Newton's method will converge much more rapidly than the half-interval
method.
The major implementation cost of first-class functions is that allowing functions to be returned as values requires reserving storage for a function's free names even while the function is not executing. In the JavaScript implementation we will study in section 4.1, these names are stored in the function's environment.