This section explains how the code fills in data points to permit linear-linear interpolation of log-log data. The input to this subroutine consists of data representable as straight-line segments on a log-log plot, together with a relative error tolerance $\epsilon$. The output is a possibly denser data set, for which linear-linear interpolation agrees with log-log interpolation to within a relative error $\epsilon$.
For the implementation one starts by storing the data in a linked list. The function dd_list::LogLog_2_LinLin() expands such a list into one for which linear-linear interpolation is valid to within a relative error of at most log_log_tol, which we denote by $\epsilon$. We now explain how this is done.
Let us start with log-log interpolation between two data points $(x_0, y_0)$ and $(x_1, y_1)$ with $x_0 < x_1$. That is, for $x_0 \le x \le x_1$ we have the interpolation
\[
  y(x) = y_0 \left( \frac{x}{x_0} \right)^{\alpha}
  \quad \text{with} \quad
  \alpha = \frac{\log( y_1/y_0 )}{\log( x_1/x_0 )}.
\]
Clearly, we must have $x_0$, $x_1$, $y_0$, and $y_1$ all positive. Note that we may write these equations in the form
\[
  \frac{y(x)}{y_0} = \left( \frac{x}{x_0} \right)^{\alpha}
\]
for $x_0 \le x \le x_1$. Our problem is that for a prescribed error tolerance $\epsilon$, we are to construct a piecewise-linear approximation to $y(x)$ for which the relative error is never larger than $\epsilon$.
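As a concrete illustration of these formulas, here is a minimal C++ sketch; the function name log_log_interp and the sample numbers are chosen for this example and are not part of the dd_list interface.
\begin{verbatim}
#include <cmath>
#include <cstdio>

// Log-log interpolation between (x0, y0) and (x1, y1); all values positive.
double log_log_interp(double x, double x0, double y0, double x1, double y1)
{
    double alpha = std::log(y1 / y0) / std::log(x1 / x0);  // exponent alpha
    return y0 * std::pow(x / x0, alpha);                   // y0 * (x/x0)^alpha
}

int main()
{
    // Example: (1, 1) and (2, 4) give alpha = 2, so y(1.5) = 2.25.
    std::printf("%g\n", log_log_interp(1.5, 1.0, 1.0, 2.0, 4.0));
    return 0;
}
\end{verbatim}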
The analysis proceeds as follows. Observe that the second derivative of $y(x)$ is
\[
  y''(x) = \frac{\alpha(\alpha - 1)}{x^2}\, y_0 \left( \frac{x}{x_0} \right)^{\alpha},
\]
so that $y$ is convex if $\alpha < 0$ or $\alpha > 1$ and concave if $0 < \alpha < 1$. Consequently, the line from $(x_0, y_0)$ to $(x_1, y_1)$ lies above the graph of $y(x)$ if $\alpha < 0$ or $\alpha > 1$ and below it if $0 < \alpha < 1$. One such case is illustrated below:
[Figure: Linear interpolation.]
It turns out to be convenient to label the first insertion point as $x_\eta$, and we shall use linear interpolation for $x_0 \le x \le x_\eta$. We therefore want $x_\eta \le x_1$, or in other words, $x_\eta / x_0 \le x_1 / x_0$. (If we decide that $x_\eta \ge x_1$, then we don't insert any new points.) The value of $y$ corresponding to $x_\eta$ is
\[
  y_\eta = y_0 \left( \frac{x_\eta}{x_0} \right)^{\alpha}.
\]
The line interpolating between the two points $(x_0, y_0)$ and $(x_\eta, y_\eta)$ is
\[
  \ell(x) = y_0 + m\,(x - x_0)
  \quad \text{with} \quad
  m = \frac{y_\eta - y_0}{x_\eta - x_0}.
\]
See the figure.
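Continuing the sketch above (again with hypothetical names and illustrative numbers), the chord and the pointwise relative error can be evaluated as follows.
\begin{verbatim}
#include <cmath>
#include <cstdio>

// Chord through (x0, y0) and (x_eta, y_eta), evaluated at x.
double chord(double x, double x0, double y0, double x_eta, double y_eta)
{
    double m = (y_eta - y0) / (x_eta - x0);   // slope of the linear interpolant
    return y0 + m * (x - x0);
}

int main()
{
    // alpha = 2 example: (x0, y0) = (1, 1) and trial insertion point x_eta = 2.
    double x0 = 1.0, y0 = 1.0, x_eta = 2.0, alpha = 2.0;
    double y_eta = y0 * std::pow(x_eta / x0, alpha);       // y_eta = 4
    double x = 4.0 / 3.0;        // point of largest relative error (derived below)
    double y = y0 * std::pow(x / x0, alpha);               // exact log-log value
    double e = (chord(x, x0, y0, x_eta, y_eta) - y) / y;   // relative error = 1/8
    std::printf("relative error at x = %g: %g\n", x, e);
    return 0;
}
\end{verbatim}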
Our object is to select $x_\eta$ in such a way that the relative error
\[
  e(x) = \frac{\ell(x) - y(x)}{y(x)}
\]
satisfies $|e(x)| \le \epsilon$ for $x_0 \le x \le x_\eta$. It is important to observe that $y(x)/y_0$ and $\ell(x)/y_0$, and consequently the relative error $e$, depend on $x$, $x_0$, and $x_\eta$ only in the combinations
\[
  \xi = \frac{x}{x_0}, \qquad \eta = \frac{x_\eta}{x_0}.
\]
In these variables,
\[
  e(\xi) = \frac{1 + (\xi - 1)\,s}{\xi^{\alpha}} - 1
  \quad \text{with} \quad
  s = \frac{\eta^{\alpha} - 1}{\eta - 1},
  \qquad 1 \le \xi \le \eta.
\]
The largest absolute relative error occurs at the point where $de/d\xi = 0$. This is the point at which
\[
  \frac{s}{\xi^{\alpha}} - \frac{\alpha\,[\,1 + (\xi - 1)\,s\,]}{\xi^{\alpha + 1}} = 0.
\]
It follows that this occurs where
\[
  s\,\xi = \alpha\,[\,1 + (\xi - 1)\,s\,].
\]
This is equivalent to
\[
  (1 - \alpha)\,s\,\xi = \alpha\,(1 - s).
\]
Let us label the solution by $\xi_m$. Upon substituting the definition of $s$ and solving for $\xi$, we find that the largest absolute relative error occurs at
\[
  \xi_m = \frac{\alpha}{1 - \alpha}\cdot\frac{\eta - \eta^{\alpha}}{\eta^{\alpha} - 1}.
\]
Insertion of the definition of $\xi_m$ into the expression for $e(\xi)$ shows that
\[
  e_{\max}(\eta) = e(\xi_m)
  = \frac{\eta^{\alpha} - 1}{\alpha\,(\eta - 1)}\;\xi_m^{\,1 - \alpha} - 1.
\]
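The following short program (hypothetical names, not the dd_list code) evaluates $\xi_m$ and $e_{\max}(\eta)$ from these formulas and checks them against a direct scan of $e(\xi)$.
\begin{verbatim}
#include <cmath>
#include <cstdio>
#include <algorithm>

// Relative error of the chord at scaled position xi, 1 <= xi <= eta.
double rel_err(double xi, double eta, double alpha)
{
    double s = (std::pow(eta, alpha) - 1.0) / (eta - 1.0);
    return (1.0 + (xi - 1.0) * s) / std::pow(xi, alpha) - 1.0;
}

int main()
{
    double eta = 2.0, alpha = 2.0;

    // Analytic worst point and worst relative error (alpha not 0 or 1).
    double ea = std::pow(eta, alpha);
    double xi_m = (alpha / (1.0 - alpha)) * (eta - ea) / (ea - 1.0);
    std::printf("xi_m = %g, e_max = %g\n", xi_m, rel_err(xi_m, eta, alpha));

    // Brute-force check: scan e(xi) on a fine grid; the maximum is the same.
    double worst = 0.0;
    for (int i = 0; i <= 100000; ++i) {
        double xi = 1.0 + (eta - 1.0) * i / 100000.0;
        worst = std::max(worst, std::fabs(rel_err(xi, eta, alpha)));
    }
    std::printf("scanned max |e| = %g\n", worst);
    return 0;
}
\end{verbatim}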
Note that the right-hand side of the last equation takes the indeterminate form $0/0$ when $\alpha = 0$ and when $\alpha = 1$. That is no great surprise, since
\[
  y(x) = y_0 \left( \frac{x}{x_0} \right)^{\alpha}
\]
reduces to $y = y_0$ for $\alpha = 0$ and to $y = y_0\,x/x_0$ for $\alpha = 1$, both being equations for which linear interpolation is exact. Consequently, $e_{\max}(\eta) = 0$ identically if $\alpha = 0$ or $\alpha = 1$, and we don't need to insert any points in those cases. We do need to be concerned that there might be severe loss of accuracy through subtraction of nearly equal numbers. We show that these singularities are removable, and we find the first two terms of the Taylor series for $e_{\max}$ expanded about $\alpha = 0$ and about $\alpha = 1$.
For $\alpha$ near $0$ the singular behavior is due to the factor $(\eta^{\alpha} - 1)/\alpha$. In this case we therefore use the approximation
\[
  \frac{\eta^{\alpha} - 1}{\alpha} = \frac{e^{\alpha \log \eta} - 1}{\alpha}
  \approx \log \eta \left( 1 + \frac{\alpha \log \eta}{2} \right),
\]
thereby using
\[
  e_{\max}(\eta) \approx \alpha \left[ 1 - \frac{\log \eta}{\eta - 1}
     + \log\!\left( \frac{\log \eta}{\eta - 1} \right) \right]
\]
for $\alpha$ near $0$. Resolution of the singularity for $\alpha$ near $1$ follows a similar argument. We use the approximation
\[
  \frac{\eta - \eta^{\alpha}}{1 - \alpha}
  = \eta\,\frac{1 - e^{(\alpha - 1)\log \eta}}{1 - \alpha}
  \approx \eta \log \eta \left( 1 - \frac{(1 - \alpha)\log \eta}{2} \right)
\]
to show that for $\alpha$ near $1$ we have
\[
  e_{\max}(\eta) \approx (1 - \alpha) \left[ 1 - \frac{\eta \log \eta}{\eta - 1}
     + \log\!\left( \frac{\eta \log \eta}{\eta - 1} \right) \right].
\]
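Here is one way such an evaluation might be organized; the function name, the cutoff 1.0e-4, and the use of only the leading-order expansions are assumptions made for this sketch, not taken from the code.
\begin{verbatim}
#include <cmath>
#include <cstdio>

// Maximum relative error over one interval with ratio eta = x_eta/x0 > 1.
// The leading-order expansions are used near alpha = 0 and alpha = 1 to
// avoid loss of accuracy; the switch-over value 1.0e-4 is arbitrary here.
double e_max(double eta, double alpha)
{
    const double cut = 1.0e-4;
    double L = std::log(eta);
    if (std::fabs(alpha) < cut) {
        double t = L / (eta - 1.0);
        return alpha * (1.0 - t + std::log(t));           // about alpha = 0
    }
    if (std::fabs(alpha - 1.0) < cut) {
        double t = eta * L / (eta - 1.0);
        return (1.0 - alpha) * (1.0 - t + std::log(t));   // about alpha = 1
    }
    double ea = std::pow(eta, alpha);
    double xi_m = (alpha / (1.0 - alpha)) * (eta - ea) / (ea - 1.0);
    return (ea - 1.0) / (alpha * (eta - 1.0)) * std::pow(xi_m, 1.0 - alpha) - 1.0;
}

int main()
{
    std::printf("%g %g %g\n",
                e_max(2.0, 2.0), e_max(2.0, 1.0e-6), e_max(2.0, 1.0 - 1.0e-6));
    return 0;
}
\end{verbatim}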
Now that the theoretical background has been presented, we turn to the numerical algorithm. The first step is to test whether additional points need to be inserted into the interval $x_0 \le x \le x_1$. This is done in the following steps. We compute
\[
  \alpha = \frac{\log( y_1/y_0 )}{\log( x_1/x_0 )}
  \qquad \text{and} \qquad
  \eta = \frac{x_1}{x_0},
\]
and (with $\xi_m$ as defined above) the location of the largest relative error,
\[
  \xi_m = \frac{\alpha}{1 - \alpha}\cdot\frac{\eta - \eta^{\alpha}}{\eta^{\alpha} - 1}.
\]
The largest relative error is
\[
  e_{\max} = \frac{\eta^{\alpha} - 1}{\alpha\,(\eta - 1)}\;\xi_m^{\,1 - \alpha} - 1.
\]
If $|e_{\max}| \le \epsilon$, no points need to be inserted.
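As an illustrative example (with numbers chosen here, not taken from the code), consider $(x_0, y_0) = (1, 1)$, $(x_1, y_1) = (2, 4)$, and $\epsilon = 0.001$. Then
\[
  \alpha = \frac{\log 4}{\log 2} = 2, \qquad
  \eta = 2, \qquad
  \xi_m = \frac{2}{1 - 2}\cdot\frac{2 - 4}{4 - 1} = \frac{4}{3}, \qquad
  e_{\max} = \frac{4 - 1}{2 \cdot 1}\left( \frac{4}{3} \right)^{-1} - 1 = \frac{1}{8},
\]
and since $1/8 > \epsilon$, points must be inserted.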
We give here one way to specify the first insertion point. The method actually used in the code is a little bit different, and it is given in the next section.

If we need to insert additional points, the first such insertion point is determined by the value of $\eta$ with the following properties: (i) $1 < \eta < x_1/x_0$, and (ii) upon substituting this $\eta$ into the appropriate expression for $e_{\max}(\eta)$ (depending on the value of $\alpha$, either the general formula or one of the expansions about $\alpha = 0$ or $\alpha = 1$) and computing $e_{\max}(\eta)$, we get $|e_{\max}(\eta)| = \epsilon$. For finding $\eta$ we use Brent's zeroin root-finding code from http://netlib.ornl.gov. Note that we have two starting values which bracket the solution: (i) the guess $\eta = 1$ is too small because it gives $e_{\max} = 0$, and (ii) we have already computed $e_{\max}$ for $\eta = x_1/x_0$ and found this value of $\eta$ to be too large. In fact, there is an even better initial guess,
\[
  \eta = e^{\delta}
  \quad \text{with} \quad
  \delta = \sqrt{ \frac{8\,\epsilon}{|\alpha(\alpha - 1)|} },
\]
which may be obtained from a power series expansion of $e_{\max}(\eta)$. This relation is derived in the last section of this document.

Once we have found the value of $\eta$ which gives $|e_{\max}(\eta)| = \epsilon$, we take $x_\eta = \eta\,x_0$ as our first insertion point, and we set its $y$-value to $y_\eta = y_0\,\eta^{\alpha}$.
We could now repeat the preceding algorithm with $x_\eta$ as the new left-hand end point, but there is an easier way. Because $x$ and $x_\eta$ appear in our formulas only in ratios to the left-hand end point, we already know the result. It is that if $\eta^2 \ge x_1/x_0$, we don't need a second insertion point. Otherwise, the second insertion point is $x_0\,\eta^2$ and its $y$-value is $y_0\,\eta^{2\alpha}$.
By repeating this argument, we see that in order to determine the number of insertion points necessary, we find the smallest integer $n$ such that
\[
  \eta^{n} \ge \frac{x_1}{x_0}.
\]
Then we could use as the inserted points and their $y$-values
\[
  x_k = x_0\,\eta^{k}, \qquad y_k = y_0\,\eta^{k\alpha},
  \qquad k = 1, 2, \ldots, n - 1.
\]
The only reason we don't use this collection is that the final interval could be very short.

What we actually do is as follows. We compute the smallest valid integer $n$, and we set
\[
  \widetilde{\eta} = \left( \frac{x_1}{x_0} \right)^{1/n}.
\]
We then use insertion points and $y$-values given by
\[
  x_k = x_0\,\widetilde{\eta}^{\,k}, \qquad y_k = y_0\,\widetilde{\eta}^{\,k\alpha},
  \qquad k = 1, 2, \ldots, n - 1.
\]
This collection gives a uniform subdivision with respect to $\log x$.
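A sketch of this final step (hypothetical names; the actual routine works on the linked-list representation used by dd_list):
\begin{verbatim}
#include <cmath>
#include <cstdio>
#include <vector>
#include <utility>

// Given a ratio eta that meets the error tolerance on one subinterval,
// subdivide (x0, y0)--(x1, y1) uniformly in log x and return interior points.
std::vector< std::pair<double, double> >
insertion_points(double x0, double y0, double x1, double y1, double eta)
{
    double alpha = std::log(y1 / y0) / std::log(x1 / x0);
    // Smallest n with eta^n >= x1/x0, i.e. n >= log(x1/x0) / log(eta).
    int n = static_cast<int>(std::ceil(std::log(x1 / x0) / std::log(eta)));
    double eta_tilde = std::pow(x1 / x0, 1.0 / n);   // uniform ratio, <= eta
    std::vector< std::pair<double, double> > pts;
    for (int k = 1; k < n; ++k) {
        double xk = x0 * std::pow(eta_tilde, k);
        pts.push_back(std::make_pair(xk, y0 * std::pow(xk / x0, alpha)));
    }
    return pts;
}

int main()
{
    // alpha = 2 example with eta = 1.0653 from the root-finding step.
    std::vector< std::pair<double, double> > pts =
        insertion_points(1.0, 1.0, 2.0, 4.0, 1.0653);
    for (size_t i = 0; i < pts.size(); ++i)
        std::printf("(%g, %g)\n", pts[i].first, pts[i].second);
    return 0;
}
\end{verbatim}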
Let us now derive the approximation for the first insertion point $\eta = e^{\delta}$. This is obtained by substituting the first few terms of the power series expansion for $\eta^{\alpha}$ into the equation $|e_{\max}(\eta)| = \epsilon$. It turns out that we need to carry terms out to the cubic in $\log \eta$,
\[
  \eta^{\alpha} = e^{\alpha \log \eta}
  \approx 1 + \alpha \log \eta + \frac{(\alpha \log \eta)^2}{2}
  + \frac{(\alpha \log \eta)^3}{6}.
\]
Then it follows that the maximum relative error is located approximately at
\[
  \xi_m \approx 1 + \frac{\log \eta}{2} + \frac{(2 - \alpha)(\log \eta)^2}{12}.
\]
Let us now determine the values of $y$ and the linear approximation $\ell$ at this approximate worst point. Substituting yields the approximation to $y(\xi_m)/y_0 = \xi_m^{\,\alpha}$,
\[
  \xi_m^{\,\alpha} \approx 1 + \frac{\alpha \log \eta}{2}
  + \frac{\alpha(\alpha + 1)(\log \eta)^2}{24}.
\]
Similarly, we conclude that
\[
  \frac{\ell(\xi_m)}{y_0} = 1 + (\xi_m - 1)\,s
  \approx 1 + \frac{\alpha \log \eta}{2}
  + \frac{\alpha(2\alpha - 1)(\log \eta)^2}{12}.
\]
If we insert $\xi_m^{\,\alpha}$ and $\ell(\xi_m)/y_0$ into the equation for $e$, we find that
\[
  e_{\max} \approx \frac{\alpha(\alpha - 1)}{8}\,(\log \eta)^2.
\]
The approximate solution
\[
  \log \eta \approx \sqrt{ \frac{8\,\epsilon}{|\alpha(\alpha - 1)|} } = \delta
\]
of the equation $|e_{\max}| = \epsilon$ now follows directly from this.
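As a quick numerical check of this approximation (with numbers chosen here for illustration), take $\alpha = 2$ and $\eta = 2$. The exact value from the formula for $e_{\max}(\eta)$ is $1/8 = 0.125$, while the quadratic approximation gives
\[
  \frac{\alpha(\alpha - 1)}{8}\,(\log \eta)^2 = \frac{2}{8}\,(\log 2)^2 \approx 0.120,
\]
which is why $\eta = e^{\delta}$ serves only as an initial guess for the root finder rather than as the final answer.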
We have shown how to insert points into the data so that linear-linear interpolation of log-log data is accurate to within a relative error $\epsilon$. This is done by solving the equation $|e_{\max}(\eta)| = \epsilon$ for $\eta$, by taking the smallest integer $n$ with $\eta^{n} \ge x_1/x_0$ and setting $\widetilde{\eta} = (x_1/x_0)^{1/n}$, and by inserting the points $x_k = x_0\,\widetilde{\eta}^{\,k}$ with $y$-values $y_k = y_0\,\widetilde{\eta}^{\,k\alpha}$ for $k = 1, 2, \ldots, n - 1$. We have also seen that a good initial approximation to $\eta$ is given by $\eta = e^{\delta}$ with $\delta = \sqrt{8\epsilon/|\alpha(\alpha - 1)|}$.