  1. Like many of the examples and exercises in these lecture notes, this one is borrowed from H. S. Bear, Differential Equations: A Concise Course.

  2. This loanword from German is used in mathematics and means “assumption” or “guess”.

  3. v3: added “implicit”.

  4. Caveat: No claim is made regarding the real-world usefulness or realism of this model; it has been formulated simply to provide an instructive and somewhat plausible example.

  5. week 4: gave this equation a number, so the numbering of subsequent equations changed.

  6. week 2 v2: added formal definition of domain of \(H\).

  7. week 2 v2: added interval explicitly for clarity.

  8. week 2 v2: added \(=[0,t_{1})\) and changed \(t\in[0,\frac{1}{y_{0}})\) to \(t\in[0,t_{1})\).

  9. week 2 v2: added this sentence fixing \(I=[0,t_{1})\).

  10. week 2 v2: before it just said \(I\subset[0,\frac{1}{y_{0}})\).

  11. Meaning that the highest-degree term is \(r^{2},\) with coefficient \(+1.\)

  12. Example borrowed from H. S. Bear's book, p. 145.

  13. A previous version of this chapter contained an invalid argument for the following incorrect claim: for any \(A\in\mathbb{C}^{2\times2},\) if \(z\) is a solution of \(z'=Az\) and \(v\) is an eigenvector of \(A\) with eigenvalue \(\lambda,\) then the projection \(z(t)\cdot v\) of \(z(t)\) onto \(v\) satisfies \(z(t)\cdot v=ce^{t\lambda}\) for some constant \(c.\) This is correct only under the additional assumption that \(\lambda\) is not a repeated eigenvalue. If \(\lambda\) is a repeated eigenvalue, then \(z(t)\cdot v\) can take the more general form \((c_{1}+tc_{2})e^{t\lambda}.\)
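
  To illustrate the repeated-eigenvalue case (a sketch added here for illustration, not an example from the chapter): take \(\lambda\in\mathbb{C}\) and the Jordan block
  \[
  A=\begin{pmatrix}\lambda&1\\0&\lambda\end{pmatrix},
  \]
  whose only eigenvalue \(\lambda\) is repeated, with eigenvector \(v=(1,0)^{\top}.\) Every solution of \(z'=Az\) has the form \(z(t)=e^{t\lambda}(c_{1}+tc_{2},\,c_{2})^{\top},\) so the projection onto \(v\) is \(z(t)\cdot v=(c_{1}+tc_{2})e^{t\lambda},\) which genuinely contains the \(tc_{2}e^{t\lambda}\) term whenever \(c_{2}\neq0.\)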

  14. This chapter is heavily inspired by Chapter I.1 of A First Course in the Numerical Analysis of Differential Equations by Arieh Iserles.

  15. The Jacobian is the matrix with entries \(\left(D_{y}G(h_{0},y_{0})\right)_{ij}=\partial_{y_{j}}G_{i}(h_{0},y_{0})\) for \(i,j=1,\ldots,d.\)
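
  For instance (with a made-up \(G,\) purely for illustration), if \(d=2\) and \(G(h,y)=\begin{pmatrix}y_{1}y_{2}\\ y_{1}^{2}+hy_{2}\end{pmatrix},\) then
  \[
  D_{y}G(h,y)=\begin{pmatrix}y_{2}&y_{1}\\ 2y_{1}&h\end{pmatrix}.
  \]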

  16. As in the previous chapter, and probably the next, the exposition here borrows heavily from A First Course in the Numerical Analysis of Differential Equations by Arieh Iserles.

  17. That is, the unique lowest-degree polynomial that interpolates them.
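
  For example (an illustration added here, not taken from the notes): the three points \((0,1),\) \((1,3),\) \((2,7)\) are interpolated by \(p(x)=x^{2}+x+1,\) and no other polynomial of degree at most \(2\) passes through all three.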

  18. From p. 154 in Section II.2 of “Solving Ordinary Differential Equations I” by Hairer, Nørsett and Wanner.

  19. A matrix is sparse if most of its entries are zero.
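
  For example (an illustration added here), the \(n\times n\) tridiagonal matrix
  \[
  \begin{pmatrix}
  2&-1&&\\
  -1&2&-1&\\
  &-1&2&-1\\
  &&-1&2
  \end{pmatrix}
  \qquad(n=4;\text{ blank entries are }0)
  \]
  has at most \(3n-2\) nonzero entries out of \(n^{2},\) so for large \(n\) almost all of its entries are zero. Such matrices arise naturally when discretizing differential operators.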
