Section 17.1 Why we need adaptive methods
We have developed several methods of numerical integration: a family of Newton-Cotes methods, including Simpson's rule, and Gaussian integration methods. But the theory built so far has some weak points.
One weak point is the error estimation. When an integration rule has order \(d\) of accuracy, its error can be estimated by some multiple of the derivative of order \(d\text{.}\) For example, Simpson's rule is of order \(4\) and has an error estimate involving the fourth derivative of \(f\text{:}\)
\[
|\text{error}| \le \frac{h^5}{90} \max\left|f^{(4)}\right|, \qquad h = \frac{b-a}{2}, \tag{17.1.1}
\]
where, as with all such estimates, the maximum of the absolute value of the derivative is taken over the interval of integration. But a high-order derivative may be difficult to find, and if found, difficult to estimate. And even if it is easy to find and estimate, the output of (17.1.1) may be too conservative or completely useless.
Example 17.1.1. Error of Simpson's method.
Apply Simpson's method with \(n=2\) to the integral \(\int_1^{9} x^{3/2}\,dx\text{.}\) Find the actual error of the method and compare it to the estimate (17.1.1).
Here \(f(x) = x^{3/2}\) and \(h=(9-1)/2 = 4\text{,}\) so according to the formula (14.1.5)
\[
\int_1^{9} x^{3/2}\,dx \approx \frac{h}{3}\left(f(1) + 4f(5) + f(9)\right) = \frac{4}{3}\left(1 + 4\cdot 5^{3/2} + 27\right) \approx 96.9618\text{.}
\]
The exact value of this integral is \(\frac{2}{5}\left(9^{5/2} - 1\right) = 484/5 = 96.8\text{.}\) So the error is \(0.1618\text{,}\) which is relatively small.
To use the estimate (17.1.1) we need the fourth derivative of \(f\text{,}\) which is \((9/16)x^{-5/2}\text{.}\) The absolute value of this function is maximal at \(x=1\text{,}\) so \(\max|f^{(4)}| = 9/16\text{.}\) Also, \(h = (b-a)/2 = 4\text{.}\) So (17.1.1) says
\[
|\text{error}| \le \frac{4^5}{90}\cdot \frac{9}{16} = 6.4\text{.}
\]
Indeed, \(0.1618 \le 6.4\text{,}\) but the error estimate gives no real idea of how large the error actually is.
It gets worse for the integral \(\int_0^{9} x^{3/2}\,dx\text{.}\) The exact value is \(97.2\text{,}\) and Simpson's rule gives \(97.7756\text{,}\) so the error is \(0.5756\text{.}\) But the fourth derivative \((9/16)x^{-5/2}\) is unbounded on the interval \([0, 9]\text{,}\) so the right-hand side of (17.1.1) is infinite. It is a true statement that \(0.5756 \le \infty\text{,}\) but it is not a useful one.
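To confirm this arithmetic, here is a short Python sketch (our own illustration, not part of the original example) that applies Simpson's rule with \(n=2\) to both integrals and evaluates the bound (17.1.1) where it is finite.

```python
# A sketch reproducing the computations above: Simpson's rule with
# n = 2 applied to x^(3/2) on [1, 9] and on [0, 9].

def simpson(f, a, b):
    """Simpson's rule with n = 2 on [a, b]."""
    h = (b - a) / 2
    return (h / 3) * (f(a) + 4 * f(a + h) + f(b))

def f(x):
    return x ** 1.5

def F(x):
    return 0.4 * x ** 2.5  # antiderivative of f

for a, b in [(1, 9), (0, 9)]:
    approx, exact = simpson(f, a, b), F(b) - F(a)
    print(f"[{a}, {b}]: approx = {approx:.4f}, exact = {exact:.4f}, "
          f"error = {abs(approx - exact):.4f}")

# The bound (17.1.1) on [1, 9], where h = 4 and max|f''''| = 9/16:
print("bound on [1, 9]:", 4 ** 5 * 9 / (90 * 16))  # 6.4, vs actual 0.1618
# On [0, 9] the fourth derivative (9/16) x^(-5/2) is unbounded,
# so (17.1.1) gives no finite bound at all.
```

Running it reproduces the numbers above: errors \(0.1618\) and \(0.5756\text{,}\) against a bound of \(6.4\) in the first case and no finite bound in the second.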
The above situation is not uncommon. When one derives an error bound for some numerical method, one has to consider “the worst-case scenario”, with all possible errors accumulating in the worst possible way. In practice this rarely happens.
Moreover, formulas like (17.1.1) are difficult to implement in an algorithm. We know how to differentiate numerically (Chapter 12), but finding the absolute maximum of a function on an interval is not easy at all, as we will see later in the course.
The second weak point is that the function being integrated may have discontinuities or other special points, such as the corner of \(f(x)=|x|\text{.}\) Numerical integration performs worse where the function is not smooth, and this calls for a smaller step size around such features (but not elsewhere), as the experiment below illustrates.
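The following sketch (our own; the corner location \(x=0.3\) is an arbitrary choice, made so that the corner never lands on a grid node) applies the composite Simpson's rule to a smooth integrand and to one with a corner, doubling the number of subintervals each time.

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# A smooth integrand versus one with a corner.  The corner sits at
# x = 0.3 (an arbitrary choice) so that it never lands on a grid node.
cases = [("smooth: sin(x) on [0, pi]", math.sin, 0.0, math.pi, 2.0),
         ("corner: |x - 0.3| on [0, 1]", lambda x: abs(x - 0.3), 0.0, 1.0, 0.29)]

for name, f, a, b, exact in cases:
    errors = [abs(composite_simpson(f, a, b, n) - exact) for n in (4, 8, 16, 32)]
    ratios = [e1 / e2 for e1, e2 in zip(errors, errors[1:])]
    print(name, [f"{e:.1e}" for e in errors], [f"{r:.1f}" for r in ratios])
```

For the smooth integrand the error shrinks by roughly a factor of \(16\) each time \(n\) doubles, as an order \(4\) method should; for the corner it shrinks only by about a factor of \(4\text{.}\) Uniform refinement pays this price on the whole interval, even though only the panel containing the corner is at fault.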
So we need an algorithm that estimates the error of numerical integration and adapts to any unusual features of the function.
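As a preview of where this is headed, here is a minimal sketch of such an algorithm (a simple recursive strategy of our own, not a polished implementation): compare Simpson's rule on an interval with the sum of Simpson's rule on its two halves; if the two disagree by more than the tolerance allows, split the interval and recurse.

```python
def simpson(f, a, b):
    """Simpson's rule with n = 2 on [a, b]."""
    h = (b - a) / 2
    return (h / 3) * (f(a) + 4 * f(a + h) + f(b))

def adaptive_simpson(f, a, b, tol):
    """Recursive adaptive Simpson integration (a sketch of the idea)."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    # Since Simpson's rule has order 4, |halves - whole| / 15 is the
    # standard estimate for the error of the two-half approximation.
    if abs(halves - whole) <= 15 * tol:
        return halves
    return (adaptive_simpson(f, a, m, tol / 2)
            + adaptive_simpson(f, m, b, tol / 2))

# The troublesome integral from Example 17.1.1: the recursion subdivides
# heavily near x = 0, where x^(3/2) fails to be smooth, and stays coarse
# on the rest of the interval.
print(adaptive_simpson(lambda x: x ** 1.5, 0, 9, 1e-6))  # exact value: 97.2
```

This is exactly the behavior asked for above: the algorithm estimates its own error and adapts the step size to the function.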