9.3 Trapezoidal rule method

The trapezoidal rule is a numerical method for approximating solutions to ODEs whose error decays faster in \(h\) than that of Euler’s method. Its heuristic derivation is based on the trapezoidal rule for approximating definite integrals like \(\int_{t_{0}}^{t_{\max}}f(t)dt\) by sums.


\(\text{ }\) Trapezoidal rule for Riemann sums. Recall from ([eq:ch9_def_int_riemann_err]) that the error bound in approximating an integral by the left Riemann sum \(h\sum_{n}f(t_{n})\) is linear in \(h.\) More precisely, \[\left|\int_{t_{0}}^{t_{\max}}f(t)dt-h\sum_{n=0}^{N-1}f(t_{n})\right|\le ch,\tag{9.24}\] for \(c=(t_{\max}-t_{0})\sup_{t\in[t_{0},t_{\max}]}\left|f'(t)\right|\) and \(t_{\max}=t_{0}+Nh.\) The left Riemann sum approximation arises from the approximation \(\int_{t_{n}}^{t_{n+1}}f(t)dt\approx hf(t_{n})\) for a small interval \([t_{n},t_{n+1}].\) The trapezoidal rule replaces this by the approximation \(\int_{t_{n}}^{t_{n+1}}f(t)dt\approx\int_{t_{n}}^{t_{n+1}}g(t)dt,\) where \(g(t)\) is the linear interpolation such that \(g(t_{n})=f(t_{n}),g(t_{n+1})=f(t_{n+1})\) (see Figure 9.2). It is straightforward to verify that \(\int_{t_{n}}^{t_{n+1}}g(t)dt=\frac{h}{2}\left(f(t_{n})+f(t_{n+1})\right)\) (exercise). Thus the trapezoidal rule approximates the integral over \([t_{n},t_{n+1}]\) as \[\int_{t_{n}}^{t_{n+1}}f(t)dt\approx h\frac{f(t_{n})+f(t_{n+1})}{2}.\tag{9.25}\] The approximation for the integral over the whole interval is then \[\int_{t_{0}}^{t_{\max}}f(t)dt\approx h\sum_{n=0}^{N-1}\frac{f(t_{n})+f(t_{n+1})}{2}.\tag{9.26}\]
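A minimal Python sketch of the composite rule (9.26); the test integrand \(e^{t}\) on \([0,1]\) is an arbitrary choice for illustration.

```python
import numpy as np

def trapezoid(f, t0, tmax, N):
    """Composite trapezoidal rule (9.26) with N subintervals of width h."""
    h = (tmax - t0) / N
    t = np.linspace(t0, tmax, N + 1)  # grid t_0, ..., t_N
    ft = f(t)
    # h * sum_n (f(t_n) + f(t_{n+1})) / 2; interior points are counted twice
    return h * (0.5 * ft[0] + ft[1:-1].sum() + 0.5 * ft[-1])

# Example: the integral of exp over [0, 1] is e - 1 = 1.71828...
print(trapezoid(np.exp, 0.0, 1.0, 100))  # error ~ h^2 ~ 1e-5
```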

Figure 9.2: The reason ([eq:ch9_trapezoidal_rule_integral]) is called the trapezoidal rule. The shape bounded by the red line, the dashed lines and the segment \([a,b]\) is a trapezoid. The width \(b-a\) times the average \(\frac{f(a)+f(b)}{2}\) is precisely the area of this trapezoid. Thus the trapezoidal rule is equivalent to replacing \(f\) by the piece-wise linear interpolating function that passes through \(f(t_{n})\) for each \(n,\) and then integrating. The left Riemann rule corresponds to replacing \(f(t)\) by the piece-wise constant function equal to \(f(t_{n})\) on the interval \([t_{n},t_{n+1}).\) The interpolating function of the trapezoidal rule is clearly a better approximation of \(f\) than the piece-wise constant function of the left Riemann sum (for a continuous \(f\)), explaining the lower error of the trapezoidal rule.

The error in this approximation decays faster than linearly in \(h,\) i.e. faster than the r.h.s. of ([eq:ch9_trap_recall_left_riemann_err]) (provided \(f\) is twice continuously differentiable), as we now argue. Let us first consider the error for a single interval: \[\left|\int_{t_{n}}^{t_{n+1}}f(t)dt-h\frac{f(t_{n})+f(t_{n+1})}{2}\right|.\] To this end let \(G(t)=\int_{t_{n}}^{t}f(s)ds,\) so that the first term above is \(G(t_{n+1}),\) and note that by Taylor’s theorem \[G(t_{n+1})=G(t_{n})+hG'(t_{n})+\frac{h^{2}}{2}G''(t_{n})+O\left(h^{3}\sup_{t\in[t_{0},t_{\max}]}\left|G'''(t)\right|\right),\tag{9.27}\] if \(G\) is three times continuously differentiable, where we have used big \(O\) notation to compactly express that there is a universal constant \(c>0\) such that \[\left|G(t_{n+1})-\left\{ G(t_{n})+hG'(t_{n})+\frac{h^{2}}{2}G''(t_{n})\right\} \right|\le ch^{3}\sup_{t\in[t_{0},t_{\max}]}\left|G'''(t)\right|\] for any such \(G.\) By the definition of \(G\) this implies \[G(t_{n+1})=hf(t_{n})+\frac{h^{2}}{2}f'(t_{n})+O\left(h^{3}\sup_{t\in[t_{0},t_{\max}]}\left|f''(t)\right|\right)\tag{9.28}\] for all \(h>0\) and twice continuously differentiable functions \(f.\) Similarly under the same assumption \[f(t_{n+1})=f(t_{n})+hf'(t_{n})+O\left(h^{2}\sup_{t\in[t_{0},t_{\max}]}\left|f''(t)\right|\right).\] Thus \[h\frac{f(t_{n})+f(t_{n+1})}{2}=hf(t_{n})+\frac{h^{2}}{2}f'(t_{n})+O\left(h^{3}\sup_{t\in[t_{0},t_{\max}]}\left|f''(t)\right|\right).\tag{9.29}\] Comparing the r.h.s. of ([eq:ch9_trapezoidal_integral_G-1]) and of ([eq:ch9_trapezoidal_integral_heuristic_avg]) we see that the first two terms match, so that \[\int_{t_{n}}^{t_{n+1}}f(t)dt-h\frac{f(t_{n})+f(t_{n+1})}{2}=O\left(h^{3}\sup_{t\in[t_{0},t_{\max}]}\left|f''(t)\right|\right).\] The error bound for one interval is thus cubic in \(h\) for the trapezoidal rule, as opposed to quadratic for the left Riemann sum (recall ([eq:ch9_def_int_riemann_err_one_step])). To bound the error for the whole interval \([t_{0},t_{\max}]\) one simply sums this estimate over \(n=0,\ldots,N-1\) and uses that \(Nh^{3}=\frac{t_{\max}-t_{0}}{h}h^{3}=\left(t_{\max}-t_{0}\right)h^{2}\) to obtain \[\int_{t_{0}}^{t_{\max}}f(t)dt-h\sum_{n=0}^{N-1}\frac{f(t_{n})+f(t_{n+1})}{2}=O\left(h^{2}(t_{\max}-t_{0})\sup_{t\in[t_{0},t_{\max}]}\left|f''(t)\right|\right).\tag{9.30}\] The r.h.s. is quadratic in \(h,\) as opposed to linear like in ([eq:ch9_def_int_riemann_err]). That is, the trapezoidal rule for approximating definite integrals has a global error bound which decays quadratically in \(h,\) faster than the linear in \(h\) bound for the left Riemann sum. Assuming for simplicity that the prefactor of the power of \(h\) is equal to one in both cases, this means that to achieve an error bounded by \(\frac{1}{100}\) the left Riemann sum needs \(h=\frac{1}{100},\) while the trapezoidal rule sum needs only \(h=\frac{1}{10}\); in other words, the left Riemann sum needs \(100\) summands to satisfy such a bound, while the trapezoidal rule sum needs only \(10.\)
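The difference in convergence order is easy to observe numerically. A minimal sketch (the integrand \(e^{t}\) on \([0,1]\) is again an arbitrary test case):

```python
import numpy as np

f, t0, tmax, exact = np.exp, 0.0, 1.0, np.e - 1.0

for N in (10, 20, 40, 80):
    h = (tmax - t0) / N
    t = np.linspace(t0, tmax, N + 1)
    left = h * f(t[:-1]).sum()                     # left Riemann sum
    trap = h * 0.5 * (f(t[:-1]) + f(t[1:])).sum()  # trapezoidal rule (9.26)
    print(f"h={h:.4f}  left error={abs(exact - left):.2e}  trap error={abs(exact - trap):.2e}")
# Halving h roughly halves the left-sum error but quarters the trapezoidal error.
```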

\(\text{ }\) Trapezoidal rule method for ODEs. Inspired by this, we can build a numerical method for first order ODEs by applying the approximation ([eq:ch9_trap_heuristic_derivation_approx_one_interval]) to \(y(t_{n+1})-y(t_{n})=\int_{t_{n}}^{t_{n+1}}F(t,y(t))dt,\) yielding \[\int_{t_{n}}^{t_{n+1}}F(t,y(t))dt\approx h\frac{F(t_{n},y(t_{n}))+F(t_{n+1},y(t_{n+1}))}{2},\] and thus, replacing the exact values \(y(t_{n}),y(t_{n+1})\) by the approximations \(y_{n},y_{n+1},\) \[y_{n+1}=y_{n}+h\frac{F(t_{n},y_{n})+F(t_{n+1},y_{n+1})}{2}.\tag{9.31}\] The last equation is the trapezoidal rule method for numerically approximating solutions to ODEs. Note that it differs from Euler’s method ([eq:ch9_euler_method_def_one_step]) in that it specifies \(y_{n+1}\) in terms of \(y_{n}\) implicitly. That is, \(y_{n+1}\) is defined as a solution of the equation \[y=y_{n}+h\frac{F(t_{n},y_{n})+F(t_{n+1},y)}{2}.\tag{9.32}\] The next plot compares the exact solution of an ODE to the approximation by the trapezoidal rule method and Euler’s method for the same step-size.

Figure 9.3: Comparison of approximations of trapezoidal rule method and Euler’s method for the ODE \(y'=y.\)

We see that in this case the error of the trapezoidal rule method is much smaller. Below we will prove error bounds for the trapezoidal rule method that decay faster in \(h\) than those for Euler’s method. One should keep in mind, however, that the comparison is not entirely fair: for the same step-size \(h\) the trapezoidal rule method uses more computational resources, since it needs to solve the implicit equation ([eq:ch9_trapezoidal_heu_implicit_eq]) at each step.
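The comparison in Figure 9.3 is easy to reproduce. Below is a minimal sketch (not the text’s own code): the trapezoidal step solves the implicit equation ([eq:ch9_trapezoidal_heu_implicit_eq]) approximately by the fixed-point iteration studied in Section 9.4, with an arbitrary fixed iteration count.

```python
import numpy as np

def euler_step(F, t, y, h):
    """One step of Euler's method."""
    return y + h * F(t, y)

def trapezoidal_step(F, t, y, h, iters=20):
    """One step of the trapezoidal rule method (9.31).

    The implicit equation (9.32) is solved approximately by the
    fixed-point iteration of Section 9.4, started at y_n.
    """
    w = y
    for _ in range(iters):
        w = y + h * (F(t, y) + F(t + h, w)) / 2.0
    return w

# Test problem of Figure 9.3: y' = y, y(0) = 1 on [0, 1], exact y(t) = e^t.
F = lambda t, y: y
h, N = 0.1, 10
y_e = y_t = 1.0
for n in range(N):
    y_e = euler_step(F, n * h, y_e, h)
    y_t = trapezoidal_step(F, n * h, y_t, h)
print(abs(np.e - y_e))  # ~1.2e-1: Euler, error linear in h
print(abs(np.e - y_t))  # ~2.3e-3: trapezoidal rule, error quadratic in h
```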

\(\text{ }\) Existence of solution. A technical point: for general \(F,y_{n},h\) there is no guarantee that a solution of ([eq:ch9_trapezoidal_heu_implicit_eq]) even exists. And even if a solution exists, there may be several solutions: this can be dealt with by specifying \(y_{n+1}\) as the solution closest to \(y_{n}\) (breaking any ties in some arbitrary way). Some assurance regarding the solutions of ([eq:ch9_trapezoidal_heu_implicit_eq]) is given by the implicit function theorem. The version relevant here deals with a parametric family of equations \(G(h,y)=0\) for a vector \(y\in\mathbb{R}^{d},\) parameterized by \(h\in\mathbb{R}.\) If for some \(h_{0}\) a solution \(y_{0}\) is known, i.e. if \(G(h_{0},y_{0})=0,\) the theorem guarantees the existence of solutions \(y\) of \(G(h,y)=0\) for \(h\) in a small neighborhood of \(h_{0},\) provided (1) \(G\) is continuously differentiable in \((h,y)\) and (2) the \(d\times d\) Jacobian matrix \(D_{y}G(h_{0},y_{0})\) is non-degenerate. More precisely, if these hypotheses hold, it states that there exists a neighborhood of \((h_{0},y_{0})\) and a function \(y(h),\) such that for \(h,y\) in the neighborhood it holds that \(G(h,y)=0\) iff \(y=y(h).\)

Assuming \(F\) is continuously differentiable and given \(t_{n},y_{n}\) we can define \[G(h,y)=y-y_{n}-h\frac{F(t_{n},y_{n})+F(t_{n+1},y)}{2}.\] This function satisfies \(G(0,y_{n})=0,\) and \(D_{y}G(h,y)=I-\frac{h}{2}D_{y}F(t_{n+1},y)\) implying \(D_{y}G(0,y_{n})=I,\) which is clearly non-degenerate. Therefore for each \(n\) there exists by the implicit function theorem some small enough \(c>0\) such that for any \(h\in(0,c)\) there exists a unique solution \(y\in\mathbb{R}^{d}\) of \(G(h,y)=0\) closest to \(y_{n},\) i.e. a closest solution of ([eq:ch9_trapezoidal_heu_implicit_eq]). Reassuringly, this means that for small enough \(h>0\) the implicit equation ([eq:ch9_trapezoidal_heu_implicit_eq]) defines a unique \(y_{n+1}.\)

Unfortunately, the standard version of the implicit function theorem does not provide a lower bound for the constant \(c>0\): this means that it cannot guarantee that for any single \(h>0\) the unique closest solution exists simultaneously for all steps \(n=0,1,2,\ldots\) of the method. The next section will deal with existence guarantees for all steps simultaneously.

\(\text{ }\) Formal definition. Let us now formally define the trapezoidal rule method.

Definition 9.9 • Trapezoidal rule method

Given \(d\ge1,\) \(V=\mathbb{R}^{d}\) or \(V=\mathbb{C}^{d},\) \(t_{0},t_{\max}\in\mathbb{R},t_{0}<t_{\max},\) a function \(F:[t_{0},t_{\max}]\times V\to V,\) a \(y_{0}\in V\) and \(h>0,\) the trapezoidal rule method defines the approximations \(y_{n},1\le n\le\frac{t_{\max}-t_{0}}{h}\) as follows:

For \(n\ge0\) it defines \(y_{n+1}\) as the solution of \[y=y_{n}+h\frac{F(t_{n},y_{n})+F(t_{n+1},y)}{2}\tag{9.34}\] closest to \(y_{n},\) provided such a solution exists (breaking ties in some arbitrary way if there are several closest solutions). If no solution exists it defines \(y_{n+1}=y_{n}.\)

Remark 9.10

a) The part about breaking ties and the case when no solution exists are unimportant in practice; they are only there to make the definition formally complete.

b) Defined like this, the trapezoidal rule is indeed a time-stepping method as defined in Definition 9.2, with \(Y_{n}(F,h,t_{0},y_{0},y_{1},\ldots,y_{n})\) equal to the solution \(y_{n+1}.\)

c) As defined here this method is, strictly speaking, impossible to implement on a computer, since it defines \(y_{n+1}\) as the exact solution of ([eq:ch9_trapezoidal_rule_implicit_eq]). In practice, the solution cannot be computed exactly (except in simple cases), only approximated. In the next section we consider the combination of the trapezoidal rule with an approximation algorithm for solving ([eq:ch9_trapezoidal_rule_implicit_eq]).

Recall the concept of order of a time-stepping method from Definition 9.3.
Theorem 9.11 The trapezoidal rule method of Definition 9.9 is of order \(p=2.\)

Proof. Recall that \(V=\mathbb{R}^{d}\) or \(V=\mathbb{C}^{d},\) and \(F\) is assumed to be analytic. For the present method the l.h.s. of ([eq:ch9_time_stepping_method_order_local_err]) equals \[\left|y(t_{n+1})-y_{n+1}\right|,\] where \(y_{n+1}\) is a solution to \[y=y(t_{n})+h\frac{F(t_{n},y(t_{n}))+F(t_{n+1},y)}{2}\tag{9.35}\] closest to \(y(t_{n})\) (if such a solution exists).

Let \[G:[0,1]\times V\to V\] be defined by \[G(h,y)=y-y(t_{n})-h\frac{F(t_{n},y(t_{n}))+F(t_{n+1},y)}{2}.\] Since \(F\) is analytic \(G(h,y)\) is infinitely differentiable. Furthermore \(G(0,y(t_{n}))=0,\) and \(D_{y}G(0,y(t_{n}))=I,\) so the implicit function theorem implies that there exist \(0<\overline{c}_{1}\le1,\) \(\overline{c}_{2}>0\) and a continuous function \(\tilde{y}:[0,\overline{c}_{1}]\to V\) such that \[\begin{array}{c} h\in[0,\overline{c}_{1}],\left|y-y(t_{n})\right|\le\overline{c}_{2}\quad\quad\text{and}\quad\quad G(h,y)=0\\ \iff\\ h\in[0,\overline{c}_{1}],\left|y-y(t_{n})\right|\le\overline{c}_{2}\quad\quad\text{and}\quad\quad y=\tilde{y}(h). \end{array}\] This implies that for \(h\in(0,\overline{c}_{1}]\) there is indeed a unique solution \(y_{n+1}\) of ([eq:ch9_thm_trap_order_2_implicit_eq]) closest to \(y(t_{n}).\)

Consider now the error \[\left|y(t_{n+1})-y_{n+1}\right|=\left|y(t_{n+1})-\left\{ y(t_{n})+h\frac{F(t_{n},y(t_{n}))+F(t_{n+1},y_{n+1})}{2}\right\} \right|.\tag{9.36}\] Let \(L\) denote the Lipschitz constant of \(y\to F(t_{n+1},y)\) on \(\left\{ y:\left|y-y(t_{n})\right|\le\overline{c}_{2}\right\} ,\) which must be finite since the latter set is compact and \(y\to F(t_{n+1},y)\) is continuously differentiable. Then ([eq:ch9_trap_order_proof_error_sol_sub]) is at most \[\left|y(t_{n+1})-\left\{ y(t_{n})+h\frac{F(t_{n},y(t_{n}))+F(t_{n+1},y(t_{n+1}))}{2}\right\} \right|+\frac{h}{2}L\left|y(t_{n+1})-y_{n+1}\right|.\tag{9.37}\] Since \(y(t)\) satisfies the ODE ([eq:ch9_numerical_time_stepping_def_IVP]), the first term of ([eq:ch9_trap_order_proof_error_after_lipschitz]) equals \[\left|y(t_{n+1})-y(t_{n})-h\frac{y'(t_{n})+y'(t_{n+1})}{2}\right|.\tag{9.38}\] Now since \(F\) is analytic so is \(y\) (recall Lemma 9.5), so we can estimate \[y(t_{n+1})=y(t_{n})+hy'(t_{n})+\frac{h^{2}}{2}y''(t_{n})+O(h^{3}\sup_{t\in I}\left|y'''(t)\right|),\tag{9.39}\] where the supremum is of a continuous function over a compact set and therefore finite. Similarly \[y'(t_{n+1})=y'(t_{n})+hy''(t_{n})+O(h^{2}\sup_{t\in I}\left|y'''(t)\right|).\tag{9.40}\] Plugging these into ([eq:ch9_trap_order_proof_error_after_lipschitz_F_to_dy]) shows that ([eq:ch9_trap_order_proof_error_after_lipschitz_F_to_dy]) is \(O(h^{3}\sup_{t\in I}\left|y'''(t)\right|),\) i.e. that there exists a constant \(c_{3}\) such that \[\left|y(t_{n+1})-y(t_{n})-h\frac{y'(t_{n})+y'(t_{n+1})}{2}\right|\le c_{3}h^{3}\tag{9.41}\] for all \(h>0\) s.t. \(t_{0}+(n+1)h\le t_{\max}\) (we have hidden the supremum over \(I\) in the constant \(c_{3};\) this constant thus depends on \(y\)). Recalling ([eq:ch9_trap_order_proof_error_sol_sub])-([eq:ch9_trap_order_proof_error_after_lipschitz]) this gives that \[\left|y(t_{n+1})-y_{n+1}\right|\le c_{3}h^{3}+\frac{h}{2}L\left|y(t_{n+1})-y_{n+1}\right|.\] Provided \(h\le\frac{1}{L}\) so that \(\frac{h}{2}L\le\frac{1}{2}\) this implies \[\left|y(t_{n+1})-y_{n+1}\right|\le2c_{3}h^{3}.\] Thus setting \(c_{1}=\min(\overline{c}_{1},\frac{1}{L})\) and \(c_{2}=2c_{3}\) we have that ([eq:ch9_time_stepping_method_order_local_err]) indeed holds with \(p=2\) for all \(h\in(0,c_{1}).\)
Thus the order of the trapezoidal method is \(p=2\) compared to \(p=1\) for Euler’s method (recall Lemma 9.6). The proof that the trapezoidal method is globally convergent has to wait until we have studied the properties of solutions to ([eq:ch9_trapezoidal_rule_implicit_eq]) in more detail in the next section. We will then see that the global error of the trapezoidal rule method decays quadratically as \(h^{2},\) compared to linearly as \(h\) for Euler’s method (see ([eq:ch9_eulers_method_convergent_global_error])).
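The cubic one-step error behind Theorem 9.11 can be observed numerically. A minimal sketch for \(y'=y\) (for this \(F\) the implicit equation ([eq:ch9_trapezoidal_rule_implicit_eq]) is linear and can be solved exactly; the step sizes are arbitrary):

```python
import numpy as np

# One trapezoidal step for y' = y starting from the exact value y(0) = 1.
for h in (0.1, 0.05, 0.025):
    y1 = (1 + h / 2) / (1 - h / 2)  # exact solution of (9.34) when F(t, y) = y
    err = abs(np.exp(h) - y1)       # local error |y(t_1) - y_1|
    print(f"h={h:.3f}  local error={err:.2e}  error/h^3={err / h**3:.4f}")
# error/h^3 stays bounded (tending to 1/12), consistent with order p = 2.
```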

\(\text{ }\) Generalization of Euler and Trapezoidal rule methods. Both Euler’s method and the trapezoidal rule method are of the form \[y_{n+1}=y_{n}+h\left(\theta F(t_{n},y_{n})+(1-\theta)F(t_{n+1},y_{n+1})\right)\tag{9.42}\] for \(\theta\in[0,1].\) Euler’s method corresponds to \(\theta=1\) and the trapezoidal rule method to \(\theta=\frac{1}{2}.\) A general method of this type is called a theta method. For \(\theta=1\) (i.e. Euler) the method is explicit, for all other \(\theta\) it is implicit.

Beyond the two we have already seen, the most important theta method corresponds to \(\theta=0,\) i.e. \[y_{n+1}=y_{n}+hF(t_{n+1},y_{n+1}),\] where \(F\) is evaluated only at the new point \((t_{n+1},y_{n+1}).\) This method is variously referred to as the implicit Euler method or the backward Euler method. By contrast Euler’s method, i.e. \(\theta=1\) (recall ([eq:ch9_euler_method_def_one_step])), is sometimes called the explicit Euler method or the forward Euler method.
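A sketch of a generic theta-method step (9.42); for the implicit cases the inner equation is solved here by the fixed-point iteration of Section 9.4, with an arbitrary iteration count.

```python
def theta_step(F, t_n, y_n, h, theta, iters=20):
    """One step of the theta method (9.42).

    theta = 1   : explicit (forward) Euler, no inner iteration needed
    theta = 1/2 : trapezoidal rule
    theta = 0   : implicit (backward) Euler
    """
    if theta == 1.0:
        return y_n + h * F(t_n, y_n)
    # Fixed-point iteration for the implicit equation, as in Section 9.4.
    y = y_n
    for _ in range(iters):
        y = y_n + h * (theta * F(t_n, y_n) + (1 - theta) * F(t_n + h, y))
    return y

# y' = y, one step of size h = 0.1 for the three classical choices of theta:
for theta in (1.0, 0.5, 0.0):
    print(theta, theta_step(lambda t, y: y, 0.0, 1.0, 0.1, theta))
```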

9.4 Fixed-point equations

In this section we study the fixed-point equations that must be solved in each step of an implicit method such as the trapezoidal rule method, prove a convergence guarantee for their iterative solution, and use it to establish the global convergence of the trapezoidal rule method.

\(\text{ }\)General class of fixed-point equations. To this end we investigate implicit equations of the form \[w=hg(w)+\beta\tag{9.44}\] where \(h>0,\) \(\beta\in V\) and \(g:V\to V\) for \(V=\mathbb{R}^{d}\) or \(V=\mathbb{C}^{d}.\) The implicit equation ([eq:ch9_trapezoidal_rule_implicit_eq]) of the trapezoidal rule method is of this form with \(\beta=y_{n}\) and \(g(w)=\frac{1}{2}\left(F(t_{n},y_{n})+F(t_{n+1},w)\right).\)

\(\text{ }\)Iterative solution. The obvious iterative algorithm to solve this equation is to start with some \(w_{0},\) and compute \[w_{n+1}=hg(w_{n})+\beta,n\ge0.\] For “good” \(g,\beta,h\) one may hope that this sequence converges as \(n\to\infty.\) If so, the l.h.s. converges to the limit \(w,\) and if \(g\) is continuous the r.h.s. converges to \(hg(w)+\beta,\) so the limit \(w\) satisfies \(w=hg(w)+\beta\) and is thus a solution of ([eq:ch9_fixed_point_eq_general_eq]).

Since a computer can only make a finite number of computations, a practical implementation must have a rule for how to stop the iteration. The last value of \(w_{n}\) is then returned as an approximation of the true solution. For instance one can stop after a fixed number of steps \(N,\) e.g. \(N=3,N=10,N=100,N=1000.\) Alternatively one can specify a tolerance level \(\varepsilon>0,\) e.g. \(\varepsilon=10^{-1},10^{-2},10^{-3},\ldots\) and stop once \[\left|w_{n}-hg(w_{n})-\beta\right|\le\varepsilon.\]
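A minimal sketch of the iteration above with both stopping rules; the function name, defaults and the scalar test equation are illustrative choices.

```python
import math

def fixed_point_solve(g, h, beta, w0, max_iter=100, tol=None):
    """Iterate w_{n+1} = h*g(w_n) + beta.

    Stops after max_iter steps, or earlier if tol is given and the
    residual |w - h*g(w) - beta| drops below tol.
    """
    w = w0
    for _ in range(max_iter):
        w = h * g(w) + beta
        if tol is not None and abs(w - h * g(w) - beta) <= tol:
            break
    return w

# Solve w = 0.1*cos(w) + 1 to tolerance 1e-10 (test equation chosen arbitrarily).
print(fixed_point_solve(math.cos, 0.1, 1.0, w0=1.0, tol=1e-10))
```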

\(\text{ }\)Convergence guarantee. The next theorem gives conditions under which the sequence \(w_{n}\) converges. It is essentially a version of the famous Banach fixed-point theorem.

Theorem 9.12

Let \(d\ge1\) and \(V=\mathbb{R}^{d}\) or \(V=\mathbb{C}^{d}.\) For \(w\in V\) and \(r\ge0\) define the closed ball \(B(w,r)=\left\{ v\in V:\left|v-w\right|\le r\right\} .\) Let \(g:V\to V.\) Let \(w_{0}\in V,\beta\in V,\rho>0,h>0,\lambda\in(0,1)\) and define the sequence \[w_{n+1}=hg(w_{n})+\beta,n\ge0.\tag{9.46}\] Assume that

  a) \(g\) is continuous on \(B(w_{0},\rho),\) and

  b) for all \(u,v\in B(w_{0},\rho)\) it holds that \(\left|g(u)-g(v)\right|\le\frac{\lambda}{h}\left|u-v\right|,\) and

  c) \(\left|w_{1}-w_{0}\right|\le(1-\lambda)\rho.\)

Then

  1. \(w_{n}\in B(w_{0},\rho)\) for all \(n\ge0,\) and

  2. the sequence \(w_{n}\) is convergent, and \(w_{n}\to w_{\infty}\in B(w_{0},\rho),\) and

  3. the limit \(w_{\infty}\) solves \[w=hg(w)+\beta,\tag{9.47}\] and is the unique solution to ([eq:ch9_banach_fixed_point_fixed_point_eq]) in \(B(w_{0},\rho).\)

Proof. We first prove (1). Assume for contradiction that this is not the case, and let \(n\) be s.t. \(w_{n+1}\) is the first iterate not in \(B(w_{0},\rho).\) That is, \[w_{m}\in B(w_{0},\rho)\text{ for all }m\le n\text{ and }w_{n+1}\notin B(w_{0},\rho).\tag{9.48}\] Using the definition ([eq:ch9_banach_fixed_point_thm_iteration]) and hypothesis (b) it holds that \[\left|w_{n+1}-w_{n}\right|=\left|hg(w_{n})-hg(w_{n-1})\right|=h\left|g(w_{n})-g(w_{n-1})\right|\le\lambda\left|w_{n}-w_{n-1}\right|.\tag{9.49}\] This inequality can be iterated since we have assumed that \(w_{m}\in B(w_{0},\rho)\) for all \(m\le n,\) so we obtain using also hypothesis (c) that \[\left|w_{m+1}-w_{m}\right|\le\lambda^{m}\left|w_{1}-w_{0}\right|\le\lambda^{m}(1-\lambda)\rho\quad\text{for all }\quad m\le n.\tag{9.50}\] Thus since \(0<\lambda<1\) \[\left|w_{n+1}-w_{0}\right|\le\sum_{m=0}^{n}\left|w_{m+1}-w_{m}\right|\le\sum_{m=0}^{n}\lambda^{m}(1-\lambda)\rho\le\sum_{m=0}^{\infty}\lambda^{m}(1-\lambda)\rho=\rho.\] But this contradicts ([eq:ch9_banach_fixed_point_thm_assume_for_contradiction]). Thus \(w_{n}\in B(w_{0},\rho)\) for all \(n\ge0,\) proving (1).

To prove (2), note that since \(w_{n}\in B(w_{0},\rho)\) for all \(n\ge0\) the inequality ([eq:ch9_banach_fixed_point_thm_increment_bound]) holds for all \(m,\) i.e. \[\left|w_{n+1}-w_{n}\right|\le\lambda^{n}(1-\lambda)\rho\quad\text{ for all }\quad n\ge0.\] Thus \[\sum_{n=0}^{\infty}\left|w_{n+1}-w_{n}\right|\le\sum_{n=0}^{\infty}\lambda^{n}(1-\lambda)\rho<\infty,\] so the sum \(\sum_{n=0}^{\infty}(w_{n+1}-w_{n})\) is absolutely convergent. Therefore there exists a \(w'\in V\) such that \[w'=\lim_{m\to\infty}\sum_{n=0}^{m}\left(w_{n+1}-w_{n}\right)=\sum_{n=0}^{\infty}(w_{n+1}-w_{n}).\] Let \(w_{\infty}=w'+w_{0}.\) Since \(\sum_{n=0}^{m}(w_{n+1}-w_{n})=w_{m+1}-w_{0}\) we have \[\lim_{n\to\infty}w_{n}=w_{\infty}.\] Since \(w_{n}\in B(w_{0},\rho)\) for all \(n\ge0\) and the ball \(B(w_{0},\rho)\) is closed, also \(w_{\infty}\in B(w_{0},\rho).\) This proves (2).

Turning to (3), since \(g\) is continuous it holds that \[\lim_{n\to\infty}\left(hg(w_{n})+\beta\right)=hg(w_{\infty})+\beta.\] Since also \(hg(w_{n})+\beta=w_{n+1}\to w_{\infty},\) this proves that \[w_{\infty}=hg(w_{\infty})+\beta.\tag{9.51}\] Lastly, assume that \(v\) is a different solution to ([eq:ch9_banach_fixed_point_fixed_point_eq_in_proof]) in \(B(w_{0},\rho).\) Then by hypothesis (b) \[\left|w_{\infty}-v\right|=h\left|g(w_{\infty})-g(v)\right|\le\lambda\left|w_{\infty}-v\right|<\left|w_{\infty}-v\right|,\] which is a contradiction, proving the uniqueness of the solution \(w_{\infty}.\)

\(\text{ }\)Convergence of trapezoidal rule method. We can now use the previous theorem to prove the global convergence of the trapezoidal rule method. Recall Definition 9.7 of a convergent method. Recall further from ([eq:ch9_time_stepping_method_def_error]) the definition of the error \(e_{n}\) for the IVP ([eq:ch9_numerical_time_stepping_def_IVP]) corresponding to a given \(F.\)
Theorem 9.13 The trapezoidal rule method from Definition 9.9 is convergent.

Furthermore for any analytic \(F\) and solution \(y\) to the IVP ([eq:ch9_numerical_time_stepping_def_IVP]) there exist constants \(c_{1},c_{2}>0\) such that if \(0<h\le c_{1}\) then \[\max_{0\le n\le\frac{t_{\max}-t_{0}}{h}}\left|e_{n}\right|\le c_{2}h^{2}.\tag{9.53}\]

Remark 9.14

Thus we have finally proved in ([eq:ch9_trap_rule_convergent_thm_error_bound]) that not only is the trapezoidal rule method of order \(p=2,\) but also that its global error decays quadratically, as \(h^{2}.\)

Proof. Since the r.h.s. of ([eq:ch9_trap_rule_convergent_thm_error_bound]) tends to zero as \(h\downarrow0,\) the bound implies that the method is convergent. We thus proceed to prove ([eq:ch9_trap_rule_convergent_thm_error_bound]).

Fix an analytic \(F\) and the corresponding solution \(y(t)\) of the IVP ([eq:ch9_numerical_time_stepping_def_IVP]), also analytic. We must show for \(h>0\) small enough the existence of solutions to the implicit equation ([eq:ch9_trapezoidal_rule_implicit_eq]) for all \(n\) s.t. \(t_{0}+nh\le t_{\max},\) and then the global error bound ([eq:ch9_trap_rule_convergent_thm_error_bound]).

Let \[M=\sup_{t\in I}\left|y(t)\right|,\] which is finite since \(y(t)\) is analytic and \(I\) compact. We first prove the existence of the solution to ([eq:ch9_trapezoidal_rule_implicit_eq]) and the error bound ([eq:ch9_trap_rule_convergent_thm_error_bound]) for \(n\) s.t. \(\left|y_{n}\right|\le2M.\) We then argue by contradiction that \(\left|y_{n}\right|\le2M\) in fact holds for all \(n.\) Let \[\rho=1\quad\quad\text{and}\quad\quad\lambda=\frac{1}{2}\] (to be used when applying Theorem 9.12) and \[L=\sup_{t\in I,|y|\le2M+\rho}\left|\partial_{y}F(t,y)\right|\quad\quad\text{and}\quad\quad M'=\sup_{t\in I,|y|\le2M+\rho}\left|F(t,y)\right|\tag{9.54}\] which are finite since \(F\) is analytic.

Let \[\overline{c}_{1}=\min\left(\frac{(1-\lambda)\rho}{M'},\frac{2\lambda}{L}\right)=\min\left(\frac{1}{2M'},\frac{1}{L}\right),\tag{9.55}\] and assume that \[0<h\le\overline{c}_{1}.\tag{9.56}\]

For each \(n,\) consider the equation ([eq:ch9_trapezoidal_rule_implicit_eq]), i.e. \[y=y_{n}+h\frac{F(t_{n},y_{n})+F(t_{n+1},y)}{2}.\tag{9.57}\] To apply Theorem 9.12, let \[g(y)=\frac{F(t_{n},y_{n})+F(t_{n+1},y)}{2}\quad\quad\text{ and }\quad\quad\beta=y_{n}.\] Then the equation ([eq:ch9_trap_rule_convergent_thm_proof_trap_rule_implicit_eq]) is equivalent to \[y=hg(y)+\beta.\] Let furthermore \[w_{0}=y_{n}.\] Since \(F\) is analytic \(g\) is continuous, so hypothesis (a) of Theorem 9.12 is satisfied. Consider now \[\left|g(v)-g(u)\right|=\frac{\left|F(t_{n+1},v)-F(t_{n+1},u)\right|}{2},\] for \(v,u\in B(w_{0},\rho).\) If \(\left|w_{0}\right|=\left|y_{n}\right|\le2M\) then \(B(w_{0},\rho)\subset B(0,2M+\rho),\) so using that \(y\to F(t_{n+1},y)\) has Lipschitz constant \(L\) in \(B(0,2M+\rho)\) by ([eq:ch9_trap_rule_convergent_thm_proof_L_Mprime]) and ([eq:ch9_trap_rule_convergent_thm_proof_c1_def])-([eq:ch9_trap_rule_convergent_thm_proof_h_bound]) it follows that \[\frac{\left|F(t_{n+1},v)-F(t_{n+1},u)\right|}{2}\le\frac{L}{2}\left|u-v\right|\le\frac{\lambda}{h}\left|u-v\right|.\] Thus if \(\left|w_{0}\right|=\left|y_{n}\right|\le2M\) then for all \(v,u\in B(w_{0},\rho)\) \[\left|g(v)-g(u)\right|\le\frac{\lambda}{h}\left|u-v\right|,\] verifying hypothesis (b) of Theorem 9.12. Lastly \[w_{1}=w_{0}+h\frac{F(t_{n},w_{0})+F(t_{n+1},w_{0})}{2},\] so using ([eq:ch9_trap_rule_convergent_thm_proof_c1_def])-([eq:ch9_trap_rule_convergent_thm_proof_h_bound]) \[\left|w_{1}-w_{0}\right|\le h\left|\frac{F(t_{n},w_{0})+F(t_{n+1},w_{0})}{2}\right|\le hM'\le\frac{1}{2}=(1-\lambda)\rho,\] and thus hypothesis (c) of Theorem 9.12 is satisfied. Therefore Theorem 9.12 can be applied for all \(h\) as in ([eq:ch9_trap_rule_convergent_thm_proof_h_bound]), and all \(n\) s.t. \(\left|y_{n}\right|\le2M,\) showing that there is then a unique \(y_{n+1}\) in \(\left\{ y:\left|y-y_{n}\right|\le\rho\right\}\) which solves ([eq:ch9_trap_rule_convergent_thm_proof_trap_rule_implicit_eq]).

Turning to the error bound, consider for \(n\) s.t. \(\left|y_{n}\right|\le2M\) \[e_{n+1}=y(t_{n+1})-y_{n+1}=y(t_{n+1})-\left\{ y_{n}+h\frac{F(t_{n},y_{n})+F(t_{n+1},y_{n+1})}{2}\right\} .\] Above we proved that \(\left|y_{n+1}\right|\le2M+\rho,\) and so \(\left|e_{n+1}\right|\) is at most \[\begin{array}{l} \left|y(t_{n+1})-\left\{ y(t_{n})+h\frac{F(t_{n},y(t_{n}))+F(t_{n+1},y(t_{n+1}))}{2}\right\} \right|\\ +\left(1+\frac{h}{2}L\right)\left|y(t_{n})-y_{n}\right|+\frac{h}{2}L\left|y(t_{n+1})-y_{n+1}\right|. \end{array}\] Since \(y(t)\) solves the ODE ([eq:ch9_numerical_time_stepping_def_IVP]) the first row equals \[\left|y(t_{n+1})-\left\{ y(t_{n})+h\frac{y'(t_{n})+y'(t_{n+1})}{2}\right\} \right|.\] Taylor expanding \(y(t)\) and \(y'(t)\) around \(t=t_{n}\) as in ([eq:ch9_trap_order_proof_error_y_taylor])-([eq:ch9_trap_order_proof_error_bound_from_taylor]) yields \[\left|y(t_{n+1})-\left\{ y(t_{n})+h\frac{y'(t_{n})+y'(t_{n+1})}{2}\right\} \right|\le ch^{3},\] for a constant \(c\) that does not depend on \(n\) (but does depend on \(y(t),t\in I\)). So far we have proved that \[\left|e_{n+1}\right|\le ch^{3}+\left(1+\frac{h}{2}L\right)\left|e_{n}\right|+\frac{h}{2}L\left|e_{n+1}\right|\tag{9.58}\] for \(h\) satisfying ([eq:ch9_trap_rule_convergent_thm_proof_h_bound]) and \(n\) s.t. \(\left|y_{n}\right|\le2M.\)

For \(h\) satisfying ([eq:ch9_trap_rule_convergent_thm_proof_h_bound]) we have \(\frac{h}{2}L\le\frac{1}{2}<1,\) so ([eq:ch9_trap_rule_convergent_thm_proof_errror_n+1_from_n_impl]) implies \[\left|e_{n+1}\right|\le\frac{1+\frac{h}{2}L}{1-\frac{h}{2}L}\left(\left|e_{n}\right|+ch^{3}\right).\tag{9.59}\] Also for \(h\) satisfying ([eq:ch9_trap_rule_convergent_thm_proof_h_bound]) \[\frac{1+\frac{h}{2}L}{1-\frac{h}{2}L}=1+\frac{hL}{1-\frac{h}{2}L}\le1+2hL\le3,\] so it follows from ([eq:ch9_trap_rule_convergent_thm_proof_errror_n+1_from_n]) that \[\left|e_{n+1}\right|\le(1+2hL)\left|e_{n}\right|+3ch^{3},\] for all \(n\) such that \(\left|y_{n}\right|\le2M\) and \(h\) satisfying ([eq:ch9_trap_rule_convergent_thm_proof_h_bound]). By induction it follows that \[\left|e_{n+1}\right|\le3ch^{3}\sum_{m=0}^{n}(1+2hL)^{m}\tag{9.60}\] for such \(n\) and \(h.\) It holds that \[\sum_{m=0}^{n}(1+2hL)^{m}=\frac{(1+2hL)^{n+1}-1}{2hL}\] and for \(n\) s.t. \(t_{0}+nh\le t_{\max}\) \[(1+2hL)^{n+1}\le e^{2hL(n+1)}\le e^{4hLn}\le e^{4L(t_{\max}-t_{0})}.\] Thus by ([eq:ch9_trap_rule_convergent_thm_proof_errror_as_sum]) it holds for all \(n\) s.t. \(\left|y_{n}\right|\le2M\) and \(h\) satisfying ([eq:ch9_trap_rule_convergent_thm_proof_h_bound]) that \[\left|e_{n+1}\right|\le c_{2}h^{2}\quad\quad\text{for}\quad\quad c_{2}=3c\frac{e^{4L(t_{\max}-t_{0})}-1}{2L}.\] Now assume that \(\left|y_{n}\right|\le2M\) for all \(n\) such that \(t_{0}+nh\le t_{\max}.\) Then the claim ([eq:ch9_trap_rule_convergent_thm_error_bound]) is proved, with \(c_{1}=\overline{c}_{1}.\)

If not, there is a first \(n\) such that \(\left|y_{n+1}\right|>2M.\) Then \(\left|y_{n}\right|\le2M,\) so \(\left|e_{n+1}\right|\le c_{2}h^{2}\) which implies that \[\left|y_{n+1}\right|\le\left|y_{n+1}-y(t_{n+1})\right|+\left|y(t_{n+1})\right|\le c_{2}h^{2}+M.\tag{9.61}\] Thus if one sets \[c_{1}=\min\left(\overline{c}_{1},\sqrt{\frac{M}{c_{2}}}\right)\] then provided \(0<h\le c_{1}\) the r.h.s. of ([eq:ch9_trap_rule_convergent_thm_proof_y_n+1_bound_for_contradiction]) is at most \(2M,\) which is a contradiction. Thus with this modified \(c_{1}\) and \(0<h\le c_{1}\) it is guaranteed that \(\left|y_{n}\right|\le2M\) for all \(n,\) so the claim ([eq:ch9_trap_rule_convergent_thm_error_bound]) follows.
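The quadratic decay of the global error in ([eq:ch9_trap_rule_convergent_thm_error_bound]) can be checked numerically. A minimal sketch; the test problem \(y'=-y^{2},\) \(y(0)=1,\) with exact solution \(y(t)=\frac{1}{1+t},\) and the inner iteration count are arbitrary choices.

```python
def trap_solve(F, t0, tmax, y0, N, iters=50):
    """Trapezoidal rule method; the implicit equation is solved at each
    step by the fixed-point iteration of Section 9.4."""
    h = (tmax - t0) / N
    y = y0
    for n in range(N):
        t = t0 + n * h
        w = y
        for _ in range(iters):
            w = y + h * (F(t, y) + F(t + h, w)) / 2.0
        y = w
    return y

F = lambda t, y: -y * y  # y' = -y^2, y(0) = 1, exact solution y(t) = 1/(1+t)
for N in (10, 20, 40):
    err = abs(trap_solve(F, 0.0, 1.0, 1.0, N) - 0.5)  # error at t = 1
    print(f"h={1.0 / N:.3f}  global error={err:.2e}")
# Halving h divides the error by about 4, i.e. the global error is O(h^2).
```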

Recall that, strictly speaking, the trapezoidal rule method as defined in Definition 9.9 cannot be implemented on a computer because it requires the exact solution of the implicit equation ([eq:ch9_trapezoidal_rule_implicit_eq]). Let us formally define a trapezoidal rule method where the solution of the implicit equation is computed by some “external” approximation function \(P.\) If \(P\) can be implemented on a computer, then this trapezoidal rule method can also be implemented.
Definition 9.15

(Trapezoidal rule method with approximation function \(P\)). If \(d\ge1\) and \(V=\mathbb{R}^{d}\) or \(V=\mathbb{C}^{d}\) and \(P(t_{0},t_{\max},F,h,t,y)\) is a \(V\)-valued function of \(t_{0},t_{\max},h,t\in[t_{0},t_{\max}]\) and \(F:[t_{0},t_{\max}]\times V\to V,\) then the trapezoidal rule method with approximation function \(P\) is the following time-stepping method.

Given \(d\ge1,\) \(V=\mathbb{R}^{d}\) or \(V=\mathbb{C}^{d},\) \(t_{0},t_{\max}\in\mathbb{R},t_{0}<t_{\max},\) a function \(F:[t_{0},t_{\max}]\times V\to V,\) a \(y_{0}\in V\) and \(h>0,\) the method with approximation function \(P\) defines the approximations \(y_{n},1\le n\le\frac{t_{\max}-t_{0}}{h}\) by setting for \(n\ge0\) \[y_{n+1}=P(t_{0},t_{\max},F,h,t_{n},y_{n}).\]

Remark 9.16

For any \(F,y,t\) let \[w_{m+1}=y+\frac{h}{2}\left(F(t,y)+F(t+h,w_{m})\right),\quad m\ge0,\quad\quad w_{0}=y,\tag{9.64}\] be the fixed-point iteration considered above.

  1. If we fix \(N\ge1\) and define \(P(t_{0},t_{\max},F,h,t,y)=w_{N}\) we obtain the trapezoidal method where the approximation is done by running the fixed-point iteration ([eq:ch9_examples_for_P_fixed_point_iter]) a total of \(N\) times.

  2. If we fix \(\varepsilon>0\) and define \(P(t_{0},t_{\max},F,h,t,y)=w_{m}\) for the first \(m\) s.t. \[\left|w_{m}-\left\{ y+\frac{h}{2}\left(F(t,y)+F(t+h,w_{m})\right)\right\} \right|\le\varepsilon\] we obtain the trapezoidal rule method where the approximation is done by running the fixed-point iteration ([eq:ch9_examples_for_P_fixed_point_iter]) with an error tolerance of \(\varepsilon>0\) (a code sketch of both choices follows below).
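A minimal sketch of these two choices of \(P\) in Python; the factory functions and their names are illustrative, and the signature follows Definition 9.15 with the unused arguments kept for conformity.

```python
def make_P_fixed_N(N):
    """P as in Remark 9.16 (1): run the iteration (9.64) exactly N times."""
    def P(t0, tmax, F, h, t, y):
        w = y
        for _ in range(N):
            w = y + h * (F(t, y) + F(t + h, w)) / 2.0
        return w
    return P

def make_P_tolerance(eps):
    """P as in Remark 9.16 (2): iterate until the residual is at most eps.

    Note: may loop forever if the iteration does not converge; Theorem 9.12
    gives conditions ruling this out for small enough h.
    """
    def P(t0, tmax, F, h, t, y):
        w = y
        while abs(w - (y + h * (F(t, y) + F(t + h, w)) / 2.0)) > eps:
            w = y + h * (F(t, y) + F(t + h, w)) / 2.0
        return w
    return P

# Remark 9.18 uses the h-dependent tolerance eps = h^3; that variant would
# compute eps from h inside P rather than fixing it in the factory.
```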

The next theorem proves that if the approximation function \(P\) approximates the true solution of the fixed point equation up to an error of order \(h^{3},\) then the trapezoidal rule method with this \(P\) is convergent and has global error of order \(h^{2}.\)

Theorem 9.17 Let \(\rho>0.\) Assume that \(P\) is an approximation function as in Definition 9.15 such that for any analytic \(F\) there are constants \(\hat{c}_{1},\hat{c}_{2}>0\) such that if \(0<h\le\hat{c}_{1},\) \(0\le n\le\frac{t_{\max}-t_{0}}{h},\) \(y_{n}\in V\) and \(\tilde{y}=P(t_{0},t_{\max},F,h,t_{n},y_{n})\) then \[\left|\tilde{y}-y_{n}\right|\le\rho\quad\quad\text{and}\quad\quad\left|\tilde{y}-y_{n}-h\frac{F(t_{n},y_{n})+F(t_{n+1},\tilde{y})}{2}\right|\le\hat{c}_{2}h^{3}.\] Then there are constants \(c_{1},c_{2}>0\) such that if \(0<h\le c_{1}\) then the global error of the trapezoidal rule method with approximation function \(P\) satisfies the bound \[\max_{0\le n\le\frac{t_{\max}-t_{0}}{h}}\left|e_{n}\right|\le c_{2}h^{2}.\] In particular the method is convergent.
Remark 9.18 The approximation function of Remark 9.16 (2) with an \(h\)-dependent tolerance of \(\varepsilon=h^{3}\) is such a \(P.\)
Proof. Exercise on Sheet 9. Adaptation of the proof of Theorem 9.13.
