
Log-Log Interpolation

This section explains how the code fills in data points to permit linear-linear interpolation of log-log data. The input to this subroutine consists of data representable as a line on a log-log plot, together with a relative error tolerance $\epsilon$. The output is a possibly denser data set, for which linear interpolation agrees with log-log interpolation to within a relative error $\epsilon$.

For the implementation one starts by storing the data in a linked list. The function dd_list::LogLog_2_LinLin() expands a list to one for which linear-linear interpolation is valid to within a relative error of at most log_log_tol. We now explain how this is done.

Let us start with log-log interpolation between two data points $(x_0, u_0)$ and $(x_1, u_1)$ with $x_0 < x_1$. That is, for $x_0 < x < x_1$ we have the interpolation

\[ \log u = \log u_0 + \alpha (\log x - \log x_0) \]

with

\[ \alpha ={\frac{\log(u_1/u_0)}{\log(x_1/x_0)}}. \]

Clearly, we must have $x_0$, $u_0$, $x_1$, and $u_1$ all positive. Note that we may write these equations in the form

\[ u = u_0\left({\frac{x}{x_0}}\right)^\alpha \]

for $x_0 \le x \le x_1$. Our problem is as follows: given a prescribed error tolerance $\epsilon$, construct a piecewise-linear approximation to $u$ whose relative error is never larger than $\epsilon$.
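For illustration, the interpolation rule above can be written as a small routine. The function name and signature here are illustrative only, not identifiers from the actual code:

```cpp
#include <cmath>

// Log-log interpolation between (x0, u0) and (x1, u1):
//   u = u0 * (x / x0)^alpha,  alpha = log(u1/u0) / log(x1/x0).
// All arguments are assumed positive, with x0 < x1.
// Illustrative helper, not part of the dd_list implementation.
double loglog_interp(double x, double x0, double u0, double x1, double u1) {
    double alpha = std::log(u1 / u0) / std::log(x1 / x0);
    return u0 * std::pow(x / x0, alpha);
}
```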

The analysis proceeds as follows. Observe that the second derivative of $u$ is

\[ {\frac{d^2 u}{dx^2}}= \alpha(\alpha - 1)\frac{u_0}{x_0^2} \left({\frac{x}{x_0}}\right)^{\alpha - 2}, \]

so that $u$ is convex if $\alpha > 1$ or $\alpha < 0$ and concave if $0 < \alpha < 1$. Consequently, the line from $(x_0, u_0)$ to $(x_1, u_1)$ lies above the graph of $u$ if $\alpha > 1$ or $\alpha < 0$ and below it if $0 < \alpha < 1$. A case with $d^2 u /dx^2 > 0$ is illustrated below:

[Figure lin_interp.gif: Linear interpolation.]

It turns out to be convenient to label the first insertion point as $x_0 \xi$, and we shall use linear interpolation for $x_0 < x < x_0 \xi$. We therefore want $x_0 < x_0 \xi \le x_1$, or in other words, $1 < \xi \le x_1/x_0$. (If we decide that $x_0 \xi = x_1$, then we don't insert any new points.) The value of $u$ corresponding to $x = x_0 \xi$ is $u = u_0\xi^\alpha$. The line interpolating between the two points $(x_0, u_0)$ and $(x_0 \xi, u_0 \xi^\alpha)$ is

\[ v = u_0 \left[ 1 + \beta \left( {\frac{x}{x_0}} - 1 \right) \right] \]

with

\[ \beta = { \frac{(\xi^\alpha - 1)}{(\xi - 1)} }. \]

See the figure.

Our object is to select $\xi$ in such a way that the relative error

\[ E =\frac{v - u}{u} \]

satisfies $|E| \le \epsilon$ for $x_0 < x < x_0 \xi$. It is important to observe that $u$ and $v$, and consequently the relative error $E$, depend on $x$ only through the combination $x/x_0$.
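Because of this, $E$ can be evaluated in the scaled variable $s = x/x_0$ with $u_0$ cancelling out of the ratio. A minimal sketch (the helper name is hypothetical, not taken from dd_list):

```cpp
#include <cmath>

// Relative error E(s) = (v - u)/u of the chord from (1, 1) to (xi, xi^alpha),
// written in the scaled variable s = x/x0 (u0 drops out of the ratio).
// Illustrative helper; requires 1 <= s <= xi and xi > 1.
double rel_error(double s, double alpha, double xi) {
    double beta = (std::pow(xi, alpha) - 1.0) / (xi - 1.0);
    double v_over_u0 = 1.0 + beta * (s - 1.0);   // linear interpolant
    double u_over_u0 = std::pow(s, alpha);       // log-log interpolant
    return (v_over_u0 - u_over_u0) / u_over_u0;
}
```

As the text notes, $E$ vanishes at both endpoints $s = 1$ and $s = \xi$, and for $\alpha > 1$ the chord lies above the curve, so $E > 0$ in between.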

The largest absolute relative error occurs at the point $\widetilde x$ where $dE/dx = 0$. This is the point at which

\[ u \frac{dv}{dx}= v\frac{du}{dx}. \]

It follows that this occurs where

\[ \frac{\beta u_0 u}{x_0} = \frac{\alpha uv}{x}. \]

This is equivalent to

\[ \beta u_0\frac{x}{x_0} = \alpha v. \]

Let us label the solution by $\widetilde x$. Upon substituting the definition of $v$ and solving for $\widetilde x$, we find that the largest absolute relative error occurs at

\[ \frac{\widetilde x}{x_0} = \frac{\alpha(\beta - 1)}{\beta (\alpha - 1)}. \]

Insertion of the definition of $\beta$ in this equation shows that

\[ \frac{\widetilde x}{x_0} = \frac{\alpha (\xi^\alpha - \xi)}{(\alpha - 1) (\xi^\alpha - 1)}. \]

Note that the right-hand side of the last equation takes the form $0/0$ when $\alpha = 0$ and $\alpha = 1$. That is no great surprise, since $u = u_0\left({\frac{x}{x_0}}\right)^\alpha$ reduces to $u = u_0$ for $\alpha = 0$ and to $u = u_0 x/ x_0$ for $\alpha = 1$, both being equations for which linear interpolation is exact. Consequently, $E = 0$ identically if $\alpha = 0$ or $\alpha = 1$, and we don't need to insert any points in those cases. We do need to be concerned that there might be severe loss of accuracy through subtraction of nearly equal numbers. We show that these singularities are removable, and we find the first two terms of the Taylor series for $\frac{\widetilde x}{x_0}$ expanded about $\alpha = 0$ and $\alpha = 1$.

For $\alpha \approx 0$ the singular $0/0$ behavior is due to the factor $\alpha/(\xi^\alpha - 1)$. In this case we therefore use the approximation

\[ \xi^{\alpha} = \exp\{\alpha \log \xi\} \approx 1 + \alpha \log \xi, \]

thereby obtaining

\[ \frac{\widetilde x}{x_0} \approx \frac{1 - \xi + \alpha \log \xi}{(\alpha - 1) \log \xi} \]

for $\alpha \approx 0$.

Resolution of the $0/0$ singularity for $\alpha \approx 1$ follows a similar argument. We use the approximation

\[ \xi^\alpha = \xi \xi^{\alpha - 1} \approx \xi[1 + (\alpha - 1) \log \xi] \]

to show that for $\alpha \approx 1$ we have

\[ \frac{\widetilde x}{x_0} \approx \frac{\alpha \xi \log \xi}{\xi - 1 + (\alpha - 1) \xi \log \xi}. \]
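The general formula and the two Taylor-series forms can be combined into one numerically safe routine. The branch threshold 1.0e-4 below is an illustrative choice, not taken from the actual code:

```cpp
#include <cmath>

// Location s~ = x~/x0 of the largest absolute relative error, using the
// series forms near the removable singularities at alpha = 0 and alpha = 1.
// The switch-over threshold is illustrative; requires xi > 1.
double worst_point(double alpha, double xi) {
    double log_xi = std::log(xi);
    if (std::fabs(alpha) < 1.0e-4) {
        // alpha near 0: (1 - xi + alpha*log(xi)) / ((alpha - 1)*log(xi))
        return (1.0 - xi + alpha * log_xi) / ((alpha - 1.0) * log_xi);
    }
    if (std::fabs(alpha - 1.0) < 1.0e-4) {
        // alpha near 1: alpha*xi*log(xi) / (xi - 1 + (alpha - 1)*xi*log(xi))
        return alpha * xi * log_xi / (xi - 1.0 + (alpha - 1.0) * xi * log_xi);
    }
    // General case: alpha*(xi^alpha - xi) / ((alpha - 1)*(xi^alpha - 1))
    double xi_a = std::pow(xi, alpha);
    return alpha * (xi_a - xi) / ((alpha - 1.0) * (xi_a - 1.0));
}
```

In every case the returned point lies strictly inside the interval, i.e. $1 < \widetilde x/x_0 < \xi \le x_1/x_0$.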

Now that the theoretical background has been presented, we turn to the numerical algorithm. The first step is to test whether additional points need to be inserted into the interval $x_0 < x < x_1$; if they are needed, we proceed as follows.

We give here one way to specify the first insertion point. The method actually used in the code is a little bit different, and it is given in the next section.

If we need to insert additional points, the first such insertion point is determined by the value of $\xi$ with the following properties: (i) $x_0 < x_0 \xi < x_1$ and (ii) upon substituting this $\xi$ (which depends on the value of $\alpha$) and computing $\max E$, we get $|\max E| = \epsilon$. To find $\xi$ we use Richard Brent's zeroin root-finding code from http://netlib.ornl.gov. Note that we have two starting values which bracket the solution: (i) the guess $\xi = 1$ is too small, because it gives $\max E = 0$, and (ii) we have already computed $\max E$ for $\xi = x_1/x_0$ and found this value of $\xi$ to be too large. In fact, there is an even better initial guess, $\xi = 1 + \delta$ with

\[ \delta = \sqrt{ \frac{8 \epsilon}{|\alpha(\alpha - 1)|} }, \]

which may be obtained from a power series expansion of $\max E$. This relation is derived in the last section of this document.
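The initial guess and the quadratic error estimate behind it (derived in the last section) can be sketched as follows; the helper names are hypothetical:

```cpp
#include <cmath>

// Leading-order estimate max E ~ alpha*(alpha - 1)/8 * delta^2
// for the chord over [1, 1 + delta]. Illustrative helper.
double approx_max_E(double alpha, double delta) {
    return alpha * (alpha - 1.0) / 8.0 * delta * delta;
}

// Initial guess xi = 1 + delta obtained by solving |approx_max_E| = eps.
// Assumes alpha is not 0 or 1 (those cases need no inserted points).
double initial_delta(double alpha, double eps) {
    return std::sqrt(8.0 * eps / std::fabs(alpha * (alpha - 1.0)));
}
```

By construction, substituting `initial_delta` back into `approx_max_E` recovers the tolerance (up to sign).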

Once we have found the value of $\xi$ which gives $|\max E| = \epsilon$, we take $x_0 \xi$ as our first insertion point, and we set its $u$-value to $u_0 \xi^\alpha$.
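The actual code calls Brent's zeroin for this solve; as a simplified sketch of the same bracketed search, plain bisection also works. The helper names are hypothetical, and this version assumes $\alpha$ is not close to 0 or 1 and that $\xi = x_1/x_0$ really is too large:

```cpp
#include <cmath>

// |max E| for a trial xi, evaluated at the worst point s~.
// Assumes alpha is not near 0 or 1, and xi > 1.
double max_abs_E(double alpha, double xi) {
    double xi_a = std::pow(xi, alpha);
    double s = alpha * (xi_a - xi) / ((alpha - 1.0) * (xi_a - 1.0)); // worst point
    double beta = (xi_a - 1.0) / (xi - 1.0);
    return std::fabs((1.0 + beta * (s - 1.0)) / std::pow(s, alpha) - 1.0);
}

// Bisection for the xi in (1, ratio] with |max E(xi)| = eps, where
// ratio = x1/x0.  Assumes max_abs_E(alpha, ratio) > eps, i.e. that
// insertion is actually needed.  Sketch only; the code uses zeroin.
double solve_xi(double alpha, double ratio, double eps) {
    double lo = 1.0 + 1.0e-12;   // max E ~ 0 here: too small
    double hi = ratio;           // assumed too large
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (max_abs_E(alpha, mid) < eps) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```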

We could now repeat the preceding algorithm with $x_0 \xi$ as the new left-hand end point, but there is an easier way. Because $x$ appears in our formulas only in the ratio $x/x_0$, we already know the result: if $x_0 \xi^2 \ge x_1$, we don't need a second insertion point; otherwise, the second insertion point is $x_0 \xi^2$, with $u$-value $u_0 \xi^{2\alpha}$.

By repeating this argument, we see that in order to determine the number of insertion points necessary, we find the smallest integer $N$ such that

\[ \xi^N \ge \frac{x_1}{x_0}. \]

Then we could use as the inserted points and their $u$-values

\[ (x_0 \xi, u_0 \xi^\alpha), (x_0 \xi^2, u_0 \xi^{2\alpha}), \cdots, (x_0 \xi^{N-1}, u_0 \xi^{(N-1)\alpha}). \]

The only reason we don't use this collection is that the final interval $x_0 \xi^{N-1} < x < x_1$ could be very short.

What we actually do is as follows. We compute the smallest integer $N$ satisfying $\xi^N \ge x_1/x_0$ as above, and we set

\[ \eta = \left(\frac{x_1}{x_0}\right)^{1/N}. \]

We then use insertion points and $u$-values given by

\[ (x_0 \eta, u_0 \eta^\alpha), (x_0 \eta^2, u_0 \eta^{2\alpha}), \cdots, (x_0 \eta^{N-1}, u_0 \eta^{(N-1)\alpha}). \]

This collection gives a uniform subdivision with respect to $\log x$.
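Given a $\xi$ with $|\max E(\xi)| = \epsilon$, the insertion scheme just described can be sketched as a free function. This is a sketch of the scheme, not the dd_list member function, and the names are illustrative:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Insert points between (x0, u0) and (x1, u1) so that linear interpolation
// of the log-log data stays within tolerance: N is the smallest integer
// with xi^N >= x1/x0, and the N - 1 inserted points are uniformly spaced
// in log x.  Sketch only; assumes positive inputs, x0 < x1, xi > 1.
std::vector<std::pair<double, double>>
insert_points(double x0, double u0, double x1, double u1, double xi) {
    double alpha = std::log(u1 / u0) / std::log(x1 / x0);
    double ratio = x1 / x0;
    int N = static_cast<int>(std::ceil(std::log(ratio) / std::log(xi)));
    std::vector<std::pair<double, double>> pts;
    if (N <= 1) return pts;                 // xi already reaches x1: nothing to insert
    double eta = std::pow(ratio, 1.0 / N);  // uniform subdivision in log x
    for (int k = 1; k < N; ++k) {
        double x = x0 * std::pow(eta, k);
        pts.emplace_back(x, u0 * std::pow(x / x0, alpha));
    }
    return pts;
}
```

Using $\eta \le \xi$ in place of $\xi$ can only decrease the error on each subinterval, so the tolerance is still met.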

Let us now derive the approximation for the first insertion point $\xi = 1 + \delta$. This is obtained by substituting the first few terms of the power series expansion for $\max E$ into the equation $|\max E(\xi)| = \epsilon$.

It turns out that we need to carry terms out to the cubic in $\delta$,

\[ \xi^\alpha \approx 1 + \alpha\delta + \frac{\alpha(\alpha - 1)}{2} \delta^2 + \frac{\alpha(\alpha - 1)(\alpha - 2)}{6} \delta^3. \]

Then it follows that the maximum relative error is located approximately at

\[ \frac{\widetilde x}{x_0} \approx 1 + \frac{\delta}{2} - \frac{\alpha + 1}{12} \delta^2. \]

Let us now determine the values of $u$ and the linear approximation $v$ at this approximate worst point. Substituting yields the approximation to $\widetilde u$

\[ \widetilde u \approx u_0 \left\{ 1 + \frac{\alpha}{2} \delta + \frac{\alpha(\alpha - 5)}{24} \delta^2 \right\}. \]

Similarly, we conclude that

\[ \widetilde v \approx u_0 \left\{ 1 + \frac{\alpha}{2} \delta + \frac{\alpha(\alpha - 2)}{6} \delta^2 \right\}. \]

If we insert $\widetilde u$ and $\widetilde v$ into the equation for $\max E$, we find that

\[ \max E \approx \frac{\alpha(\alpha - 1)}{8} \delta^2. \]

The approximate solution of the equation $|\max E| = \epsilon$ now follows directly from this.

We have shown how to insert points into the data so that linear-linear interpolation of log-log data is accurate to within a relative error $\epsilon$. This is done by solving the equation $|\max E| = \epsilon$ for $\xi$, finding the smallest integer $N$ with $\xi^N \ge x_1/x_0$, and inserting $N - 1$ points uniformly spaced in $\log x$. We have also seen that a good initial approximation to $\xi$ is given by

\[ \xi = 1 + \delta \]

with $\delta = \sqrt{ \frac{8 \epsilon}{|\alpha(\alpha - 1)|} }$.


Generated on Thu Sep 7 10:47:26 2006 for fete -- From ENDFB6 To ENDL by doxygen 1.3.4