
Section 19.4 Improving Euler's method

By analogy with the trapezoidal rule for numerical integration, one can try to improve the accuracy by averaging the slopes at the two endpoints of each interval. That is, we would like to write

\begin{equation*} y_{k+1} = y_k + h\frac{f(t_k,y_k)+f(t_{k+1},y_{k+1})}{2} \end{equation*}

However, this means we need to know \(y_{k+1}\) in order to compute \(y_{k+1}\text{.}\) So we first make a prediction for \(y_{k+1}\) and then use it to compute \(y_{k+1}\) again, more accurately. Namely, we use the original Euler's method to compute the “predictor”

\begin{equation} \tilde y_{k+1}=y_k + h f (t_k,y_k)\label{eq-ode-predictor}\tag{19.4.1} \end{equation}

and then compute the “corrector”

\begin{equation} y_{k+1} = y_k + h\frac{f(t_k,y_k)+f(t_{k+1},\tilde y_{k+1})}{2} \label{eq-ode-corrector}\tag{19.4.2} \end{equation}

This is one of many predictor-corrector methods for solving ODEs. This particular method goes by various names (modified Euler method, improved Euler method); I prefer to call it the trapezoidal method, because this name describes the logic of (19.4.2).
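The predictor-corrector logic of (19.4.1)-(19.4.2) can be sketched as follows; this is a Python sketch (the text's examples use Matlab), and the function names are illustrative.

```python
# Sketch of the trapezoidal predictor-corrector method in Python.
# Names are illustrative; the text's own examples use Matlab.

def trapezoidal_step(f, t, y, h):
    """Advance y(t) to y(t+h): Euler predictor (19.4.1), then corrector (19.4.2)."""
    y_pred = y + h * f(t, y)                          # predictor: plain Euler step
    return y + h * (f(t, y) + f(t + h, y_pred)) / 2   # corrector: average the slopes

def trapezoidal_solve(f, t0, y0, h, n):
    """Take n trapezoidal steps starting from (t0, y0); return the lists of t and y."""
    ts, ys = [t0], [y0]
    for _ in range(n):
        ys.append(trapezoidal_step(f, ts[-1], ys[-1], h))
        ts.append(ts[-1] + h)
    return ts, ys
```

For example, `trapezoidal_solve(lambda t, y: -t*y, 0.0, 1.0, 0.01, 100)` approximates the solution of \(y'=-ty\), \(y(0)=1\) on \([0,1]\).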

Modify Example 19.2.1 to use the trapezoidal method.

Answer
f = @(t, y) -t*y;   % or without -
h = 0.3;   %  or 0.1, 0.01 
t = 0:h:3;
y0 = 1;
y = y0*ones(size(t));

for k = 1:numel(t)-1
    pred = y(k) + h*f(t(k), y(k));                          % predictor (19.4.1)
    y(k+1) = y(k) + h/2*(f(t(k), y(k)) + f(t(k+1), pred));  % corrector (19.4.2)
end

exact = exp(-t.^2 /2);   % or without -
plot(t, y, t, exact)
legend('trapezoid', 'exact')

Theoretical analysis of accuracy confirms the above observation that the trapezoidal method is much more accurate. For the model equation \(y'=y\) it gives \(y_1 = 1+ h(1+1+h)/2 = 1+h+h^2/2\text{,}\) which matches the exact solution \(e^h = 1+h+h^2/2+h^3/6+\cdots\) up to the \(h^3\) term. Therefore, the local error is of order \(h^3\) and the cumulative error is of order \(h^2\text{.}\)
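A cumulative error of order \(h^2\) means that halving the step size should reduce the error by a factor of about 4. The following Python sketch (the text's examples use Matlab) checks this numerically on the model equation \(y'=y\).

```python
import math

def trap_solve(f, t0, y0, h, n):
    # n trapezoidal steps: Euler predictor, then averaged-slope corrector
    t, y = t0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)
        y = y + h * (f(t, y) + f(t + h, y_pred)) / 2
        t += h
    return y

f = lambda t, y: y          # model equation y' = y, exact solution e^t
err_h  = abs(trap_solve(f, 0.0, 1.0, 0.1, 10)  - math.e)   # step h = 0.1
err_h2 = abs(trap_solve(f, 0.0, 1.0, 0.05, 20) - math.e)   # step h/2 = 0.05
ratio = err_h / err_h2
print(ratio)                # close to 4 for a 2nd-order method
```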

A close relative is the midpoint method, in which the trapezoidal corrector (19.4.2) is replaced by

\begin{equation} y_{k+1} = y_k + hf((t_k+t_{k+1})/2, (y_k + \tilde y_{k+1})/2)\label{eq-ode-corrector-midpoint}\tag{19.4.3} \end{equation}

where \(\tilde y_{k+1}\) is computed as in (19.4.1).
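In a Python sketch (illustrative names, as before), one midpoint step looks like this:

```python
def midpoint_step(f, t, y, h):
    # Euler predictor (19.4.1), then evaluate the slope at the middle of the
    # time step, using the average of y and the prediction (corrector 19.4.3)
    y_pred = y + h * f(t, y)
    return y + h * f(t + h / 2, (y + y_pred) / 2)
```

On the model equation \(y'=y\) with \(h=0.1\), one step gives \(1 + 0.1\cdot 1.05 = 1.105 = 1+h+h^2/2\), the same second-order accuracy as the trapezoidal method.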

It is not so easy to improve the accuracy further. There is a family of higher-order ODE solution methods (Runge-Kutta methods), of which the 4th-order method is the most popular; it involves several predictor-corrector steps. As with adaptive integration, one can compare the results of two methods of different orders in order to decide whether the step size \(h\) should be reduced. This is what Matlab's ODE solvers do; for example, ode45 is a solver that combines a 4th-order method with a 5th-order method. We will use it later.
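For reference, one step of the classical 4th-order Runge-Kutta method (the method most often meant by "RK4") can be sketched in Python as follows; its local error is of order \(h^5\), so the cumulative error is of order \(h^4\).

```python
def rk4_step(f, t, y, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = f(t, y)                      # slope at the left endpoint
    k2 = f(t + h / 2, y + h / 2 * k1) # slope at the midpoint, using k1
    k3 = f(t + h / 2, y + h / 2 * k2) # slope at the midpoint again, using k2
    k4 = f(t + h, y + h * k3)         # slope at the right endpoint
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

On the model equation \(y'=y\), a single step with \(h=0.1\) agrees with \(e^{0.1}\) to within about \(10^{-7}\).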