This course, Numerical Solutions of Differential Equations I (B6.1 for undergraduates, and B1 for students on the MSc in Mathematical Modelling and Scientific Computing), was delivered by Tyrone Rees at the University of Oxford. It is archived here for posterity.

Further reading

The following textbooks give well-written descriptions of the material we cover in this course:

  • A. Iserles, A First Course in the Numerical Analysis of Differential Equations (Cambridge University Press, second edition, 2009). ISBN 978-0-521-73490-5 [Chapters 1-6, 16].
  • R. LeVeque, Finite difference methods for ordinary and partial differential equations (SIAM, 2007). ISBN 978-0-898716-29-0 [Chapters 5-9].
  • E. Süli and D. Mayers, An Introduction to Numerical Analysis (Cambridge University Press, 2006). ISBN 0-521-00794-1 [Chapter 12].

Ian Sobey's lectures from last year (typed up by Kathryn Gillow) cover the same material in roughly the same order. They are available to download.

The last question on each problem sheet is a finals question (re-typed to use the notation we use in lectures). Try and tackle this question as you would in the exam, taking around 40 minutes to answer it. This question is optional: it will be marked if you answer it, but the grade is for your information only. If you find you don't have time on a given week, feel free to skip it (but you are encouraged to try it at some point, as it's excellent preparation for the exam).

I'll post information from the lectures here as the course progresses. See also Ian Sobey's notes here.

Lecture 1

Today we looked a little at places where differential equations are solved numerically in practice. We also discussed existence and uniqueness, and stated Picard's theorem.

The notes from the lecture are here. The presentation at the start can be found here.

Further reading

Apollo 11

You can read more about the print out of the source code for the lunar module here, and the GitHub repository of the code can be found here.

Finding Dory

The Khan Academy website containing the fluid animation from Pixar can be found here.

Self-driving cars

The paper about Stanley that I showed can be read here.

Picard's theorem and Lipschitz continuity

There's a nice description of Lipschitz continuity, and what it means for the numerical solution of ODEs, in LeVeque (Section 5.2). See Süli and Mayers (Section 12.1) for a discussion, including a proof of Picard's theorem.

Lecture 2

Today we looked at an example where Picard's theorem is not satisfied, and at what Picard's theorem actually tells you. We also introduced our first numerical scheme: Euler's method.
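
For concreteness, here is a minimal sketch of explicit Euler for a scalar IVP y' = f(t, y), y(t0) = y0; the right-hand side, step size, and variable names below are illustrative, not those used in lectures.

    % Explicit Euler for y' = f(t,y), y(t0) = y0, with a fixed step size h
    f  = @(t, y) -2*y;           % illustrative right-hand side
    t0 = 0;  T = 1;  y0 = 1;     % time interval and initial condition
    h  = 0.1;  N = round((T - t0)/h);
    t  = t0 + h*(0:N);
    y  = zeros(1, N+1);  y(1) = y0;
    for n = 1:N
        y(n+1) = y(n) + h*f(t(n), y(n));   % the Euler update
    end
    plot(t, y, 'o-', t, exp(-2*t), '--')   % compare with the exact solution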

The notes from the lecture are here.

Matlab demo comparing Euler's method, implicit Euler, and the trapezium rule method.

Further reading

Euler's method

There are many different ways to motivate Euler's method: see, e.g., Süli and Mayers, Section 12.2; LeVeque, Section 5.3; or Iserles, Section 1.2.

Lecture 3

Today we looked at theta-methods, of which Euler's method, implicit Euler, and the trapezium rule method are examples. We also introduced the concepts of global error and truncation error, and worked out what these are for Euler's method.
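
For reference, in one common convention (used, e.g., by Süli and Mayers; the convention in lectures may differ) the theta-method reads y_{n+1} = y_n + h[(1 - theta) f(t_n, y_n) + theta f(t_{n+1}, y_{n+1})], so that theta = 0 gives Euler's method, theta = 1 gives implicit Euler, and theta = 1/2 gives the trapezium rule method.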

The notes from the lecture are here.

Matlab demo comparing Euler's method with step size h and with step size h/2.

Further reading

Theta methods

Iserles describes the trapezium rule method in Section 1.3 and the theta method in Section 1.4 (including proofs of error bounds).

LeVeque gives a brief discussion in Section 5.3.

Süli and Mayers describe the trapezium rule method in Section 12.4. They have a nice description of how the Lipschitz constant for Phi can be related to that for f.

Error

These concepts are described in LeVeque (Sections 5.4 and 5.5) and Süli and Mayers (Section 12.2).

Lecture 4

We began the lecture by talking a little about the error analysis for a general theta method. We then introduced the general framework of one-step methods, and introduced the concept of consistency. We then introduced an important class of methods for solving ODE IVPs: Runge-Kutta methods.

The notes from the lecture are here.

Matlab demo comparing the performance of the improved Euler method to the trapezium rule method and explicit Euler.
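
For reference, one common form of the improved Euler method (a two-stage, second-order Runge-Kutta scheme) is sketched below; the example problem and step size are illustrative, and this is not the demo code itself.

    % Improved Euler (Heun's method) for y' = f(t,y)
    f = @(t, y) t - y;            % illustrative right-hand side
    h = 0.1;  t = 0:h:2;
    y = zeros(size(t));  y(1) = 1;
    for n = 1:numel(t)-1
        k1 = f(t(n),     y(n));             % slope at the start of the step
        k2 = f(t(n) + h, y(n) + h*k1);      % slope at the predicted endpoint
        y(n+1) = y(n) + (h/2)*(k1 + k2);    % average the two slopes
    end
    plot(t, y)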

Further reading

One step methods

You can read more about one-step methods in Süli and Mayers (Section 12.2), and about consistency (and its relation to convergence) in Section 12.3.

Runge-Kutta methods

These are described in detail in LeVeque (Section 5.7), Iserles (Chapter 3), and Süli and Mayers (Section 12.5). Iserles has a particularly nice description which links these methods to the corresponding theory for quadrature rules.

Lecture 5

We presented examples of Runge-Kutta methods of order 2, 3 and 4. We then described a basic adaptive algorithm based on running two Runge-Kutta methods of the same order simultaneously, one with step length h, and one with h/2.
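
Roughly, the idea is: advance with one step of size h and with two steps of size h/2, use the difference between the two answers as an estimate of the local error, and accept or shrink the step accordingly. A schematic sketch follows; the underlying method, tolerance, and safety factor 0.9 are illustrative choices rather than those used in lectures.

    % Step-doubling error control wrapped around a one-step method of order p
    f    = @(t, y) -y + sin(t);                             % illustrative ODE
    step = @(t, y, h) y + h*f(t + h/2, y + (h/2)*f(t, y));  % midpoint method, p = 2
    p    = 2;  tol = 1e-6;
    t = 0;  T = 5;  y = 1;  h = 0.1;
    while t < T
        h  = min(h, T - t);
        y1 = step(t, y, h);                          % one step of size h
        y2 = step(t + h/2, step(t, y, h/2), h/2);    % two steps of size h/2
        err = abs(y2 - y1);                          % local error estimate
        if err <= tol
            t = t + h;  y = y2;                      % accept the step
        end
        h = 0.9*h*(tol/max(err, eps))^(1/(p + 1));   % choose the next step size
    end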

The notes from the lecture are here.

Matlab demo comparing the performance of Runge-Kutta methods of order 1, 2 and 4.

Matlab demo comparing the performance of an adaptive Runge-Kutta method.

These demos require the additional solvers euler.m, improved_euler.m, rk4.m and rk4a.m.

Further reading

Adaptive methods

Adaptivity as presented here is mentioned in Iserles, Section 5.3, but only in passing. A nice description (which should be available in college libraries) is found in Numerical Recipes in C (Section 16.2).

Lecture 6

In this lecture we introduced the idea of embedded Runge-Kutta methods. We also discussed symplectic methods.

The notes from the lecture are here.

Matlab demo of Euler's method applied to a Hamiltonian system. Change V_euler(i) to V_euler(i+1) on line 37 to see a (symplectic) hybrid Euler method.
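
To illustrate the difference the demo is pointing at, here is a minimal sketch comparing explicit Euler with symplectic (semi-implicit) Euler on the harmonic oscillator q' = p, p' = -q; the variable names and step size are illustrative and do not match the demo file.

    % Explicit Euler vs. symplectic Euler for the harmonic oscillator q' = p, p' = -q
    h = 0.1;  N = 1000;
    q  = zeros(1, N+1);  p  = zeros(1, N+1);  q(1)  = 1;  p(1)  = 0;   % explicit Euler
    qs = zeros(1, N+1);  ps = zeros(1, N+1);  qs(1) = 1;  ps(1) = 0;   % symplectic Euler
    for n = 1:N
        q(n+1)  = q(n)  + h*p(n);       % explicit Euler: both updates use old values,
        p(n+1)  = p(n)  - h*q(n);       % so the energy (and the orbit) slowly grows
        qs(n+1) = qs(n) + h*ps(n);      % symplectic Euler: the p-update uses the new q,
        ps(n+1) = ps(n) - h*qs(n+1);    % so the energy stays bounded
    end
    plot(q, p, 'r', qs, ps, 'b')        % phase portraits: outward spiral vs. closed orbit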

Further reading

Embedded Runge-Kutta methods

See LeVeque, Section 5.7.1, or Iserles, Section 5.3.

See also the research papers:

Symplectic Methods

See Iserles, Section 5.4. The Guardian article about the prediction of a 9th planet is here, and the paper is here.

Lecture 7

We introduced multi-step methods, and defined the truncation error, consistency, and the order of the method. We defined zero-stability, and stated the equivalent Root Condition.
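
As a concrete example of the root condition: the 2-step Adams-Bashforth method y_{n+2} = y_{n+1} + h(3 f_{n+1} - f_n)/2 has first characteristic polynomial rho(z) = z^2 - z, whose roots 0 and 1 lie in the closed unit disc with the root on the unit circle simple, so the method is zero-stable. A quick numerical check (a sketch, not part of the course demos):

    % Root condition check for the 2-step Adams-Bashforth method
    rho = [1, -1, 0];                 % coefficients of rho(z) = z^2 - z
    r   = roots(rho);                 % the roots: 0 and 1
    ok  = all(abs(r) <= 1 + 1e-12);   % all roots inside the closed unit disc?
    % (one should also check that any root of modulus one is simple)
    disp(r), disp(ok)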

The notes from the lecture are here.

Further reading

Multi-step methods

See LeVeque, Section 5.9; Iserles, Section 2.2; or Süli and Mayers, Section 12.6.

Lecture 8

We explored the idea of zero-stability further. We proved necessity of the root condition. We gave examples of zero-stable methods, and one method that has high order but is not zero-stable. We defined convergence for a multi-step method.

Matlab demo of an unstable 5th order method, and a Matlab demo of an unstable 1st order method.

The notes from the lecture are here.

Further reading

Root condition

The proof presented in lectures is based on that in Süli and Mayers (Section 12.7). See there for (a sketch of) the proof of the lemma about solutions of recurrence relations. For a proof of sufficiency see, e.g., Theorem 3.1 in Gautschi's Numerical Analysis: An Introduction (1997).

A slightly different approach is taken in Ian Sobey's notes (Lecture 7), and you might want to read over that too. See also LeVeque, Section 6.4.

Lecture 9

We talked about convergence. We showed that both zero-stability and consistency are necessary for convergence. We stated Dahlquist's theorem.

The notes from the lecture are here.

Further reading

Dahlquist's Theorem

A full proof of Dahlquist's theorem can be found in Gautschi's Numerical Analysis: An Introduction (Theorem 6.3.4), or Henrici's Discrete Variable Methods in Ordinary Differential Equations (Theorem 5.10).

Lecture 10

We talked about absolute stability, defined what it means for a method to be A-stable, and introduced the concept of stiffness.
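
For a quick illustration (a sketch using the linear test equation, not a course demo): with y' = lambda*y and lambda real and negative, explicit Euler is absolutely stable only when |1 + h*lambda| <= 1, whereas implicit Euler produces a decaying solution for every h > 0.

    % Explicit vs. implicit Euler on the test equation y' = lambda*y
    lambda = -50;  h = 0.05;  N = 40;        % h*lambda = -2.5 lies outside [-2, 0]
    ye = zeros(1, N+1);  yi = zeros(1, N+1);  ye(1) = 1;  yi(1) = 1;
    for n = 1:N
        ye(n+1) = (1 + h*lambda)*ye(n);      % |1 + h*lambda| = 1.5 > 1: blows up
        yi(n+1) = yi(n)/(1 - h*lambda);      % implicit Euler: decays for any h (A-stable)
    end
    semilogy(0:N, abs(ye), 'r-o', 0:N, abs(yi), 'b-o')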

The notes from the lecture are here (updated 3/01/17).

Matlab demo of the Van der Pol oscillator (a stiff ODE).

Further reading

Absolute Stability/Stiffness

There's a nice discussion in Iserles, Chapter 4, that goes deeper than in lectures.

Stability is discussed in detail in LeVeque, Chapter 7. Chapter 8 discusses stiffness, and explains some methods that can be used to solve such problems.

See also Süli and Mayers, Section 12.11.

Lecture 11

We introduced parabolic PDEs. We derived an analytic solution via Fourier analysis. We developed our first numerical scheme: Explicit Euler in time, with central differences in space.
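
A minimal sketch of that first scheme, for u_t = u_xx on [0, 1] with homogeneous Dirichlet boundary conditions; the grid sizes and initial data below are illustrative, and the course demo will differ.

    % Explicit Euler in time, central differences in space, for u_t = u_xx
    J  = 50;   h = 1/J;          % spatial mesh width
    mu = 0.4;  k = mu*h^2;       % time step; the size of mu = k/h^2 matters for stability (Lecture 12)
    x  = (0:J)'*h;
    u  = sin(pi*x);              % illustrative initial condition
    for n = 1:200
        u(2:J) = u(2:J) + mu*(u(3:J+1) - 2*u(2:J) + u(1:J-1));   % interior update
        % u(1) and u(J+1) remain zero: homogeneous Dirichlet boundary conditions
    end
    plot(x, u)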

The notes from the lecture are here.

Matlab demo of the heat equation on a rod, and a demo of the same system solved using central differences in space and Explicit Euler in time.

Further reading

Fourier Transform

For further reading on this, see, e.g., Kreyszig's Advanced Engineering Mathematics, Chapters 11 and 12.

Finite Difference methods

Chapter 9 of LeVeque and Chapter 13 of Iserles cover this material. See especially Section 9.2 of LeVeque for a slightly different motivation.

A highly recommended book for material in this part of the course is Morton and Mayers: Numerical Solution of Partial Differential Equations (Chapters 2 and 3).

Lecture 12

We defined practical stability, and used Fourier analysis to derive a condition on the time step so that the Explicit Euler scheme is stable. We showed that the Implicit Euler scheme is unconditionally stable.
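
As a reference point: for u_t = u_xx discretised as in Lecture 11, this condition comes out as the familiar mesh-ratio restriction mu = k/h^2 <= 1/2 (writing k for the time step and h for the mesh width; the notation in lectures may differ), whereas implicit Euler imposes no such restriction.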

The notes from the lecture are here.

Matlab demo of the heat equation on the rod solved using central differences in space and implicit Euler in time.

Further reading

The semidiscrete Fourier transform

For an accessible introduction to the semidiscrete Fourier transform, see Chapter 2 of Spectral Methods in Matlab by Trefethen.

Stability

See also Section 13.5 of Iserles.

Lecture 13

We saw how to analyse stability by inserting the Fourier mode, and derived conditions for the theta scheme to be practically stable. We showed how implicit methods are applied in practice, and how to implement Dirichlet boundary conditions.
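
To make "applied in practice" concrete: for u_t = u_xx with homogeneous Dirichlet conditions, implicit Euler requires one tridiagonal solve per time step. A rough sketch (it assembles the full matrix directly and is not the course demo):

    % Implicit Euler in time, central differences in space, for u_t = u_xx
    J  = 50;  h = 1/J;  k = 0.01;  mu = k/h^2;    % no stability restriction on mu
    x  = (0:J)'*h;
    u  = sin(pi*x);                               % illustrative initial condition
    e  = ones(J-1, 1);
    A  = spdiags([-mu*e, (1 + 2*mu)*e, -mu*e], -1:1, J-1, J-1);   % tridiagonal system
    for n = 1:100
        u(2:J) = A \ u(2:J);      % one tridiagonal solve per time step
        % u(1) = u(J+1) = 0 are the (homogeneous) Dirichlet boundary values
    end
    plot(x, u)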

The notes from the lecture are here.

Matlab demo that shows the phenomenon of aliasing.

Further reading

Fourier mode

See also Section 9.6 of LeVeque or Section 2.7 of Morton and Mayers.

Lecture 14

We showed how to represent Neumann boundary conditions in the theta-scheme.
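
One standard device for this (the details in lectures may differ) is a ghost point: for a boundary condition u_x(0, t) = g(t), introduce a fictitious value u_{-1}^n satisfying the central-difference relation (u_1^n - u_{-1}^n)/(2h) = g(t_n), and use this to eliminate u_{-1}^n from the scheme applied at the boundary node j = 0.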

We defined the truncation error and global error, and bounded these quantities for the explicit Euler scheme.

The notes from the lecture are here.

Matlab demo that shows the implementation of mixed boundary conditions.

Further reading

Boundary conditions

See also Section 2.13 of Morton and Mayers.

Error analysis

See also Sections 2.5 and 2.6 of Morton and Mayers, Section 13.1 of Iserles, or Section 9.1 of LeVeque. For the extension to the theta-method, see, e.g., Section 2.10 of Morton and Mayers.

Lecture 15

We defined the concept of von Neumann stability, and proved a condition on the amplification function for a scheme to be von Neumann stable. We stated the Lax Equivalence Theorem.
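
As an illustration of what the amplification factor looks like in a concrete case (the explicit scheme of Lecture 11; the values of mu below are illustrative, and this is not the course demo): inserting the Fourier mode gives lambda(xi) = 1 - 4*mu*sin^2(xi/2), and the scheme is stable precisely when |lambda(xi)| <= 1 for every mode, i.e. when mu <= 1/2.

    % Amplification factor lambda(xi) = 1 - 4*mu*sin(xi/2)^2 of the explicit scheme
    xi = linspace(0, pi, 200);                   % xi = (wavenumber) * (mesh width)
    plot(xi, 1 - 4*0.4*sin(xi/2).^2, 'b', ...    % mu = 0.4: |lambda| <= 1 for all modes
         xi, 1 - 4*0.6*sin(xi/2).^2, 'r', ...    % mu = 0.6: |lambda| > 1 for high modes
         xi, -ones(size(xi)), 'k--')             % the line lambda = -1
    legend('mu = 0.4', 'mu = 0.6', 'lambda = -1')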

We proved a condition that the theta-method applied to the unsteady heat equation with Dirichlet boundary conditions must satisfy in order for the scheme to satisfy a discrete maximum principle.

The notes from the lecture are here.

Matlab demo that shows Crank-Nicolson violating the discrete maximum principle.

Further reading

von Neumann Stability

See LeVeque (Section 9.6), or Morton and Mayers, Section 2.7.

The Lax Equivalence Theorem

For a proof, see the original paper (from 1956) here, or the book of Richtmyer and Morton (1967), pages 34-36.

See also LeVeque (Theorem 9.2), Iserles (Theorem 13.3), or Morton and Mayers (Theorem 5.1 -- which includes a proof of sufficiency).

Discrete Maximum Principle

See Morton and Mayers, Section 2.11.

Lecture 16

We showed how the theory we've developed in 1D can be extended to 2D.
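
As a rough illustration of what that extension looks like in code (a sketch with illustrative sizes, independent of the course demo): the 2D five-point Laplacian can be assembled from the 1D second-difference matrix with Kronecker products, and the time-stepping then has the same form as in 1D.

    % 2D heat equation u_t = u_xx + u_yy on the unit square, explicit Euler in time
    J  = 30;  h = 1/J;  k = 0.2*h^2;                     % in 2D one needs k/h^2 <= 1/4
    e  = ones(J-1, 1);
    D2 = spdiags([e, -2*e, e], -1:1, J-1, J-1)/h^2;      % 1D second differences
    L  = kron(speye(J-1), D2) + kron(D2, speye(J-1));    % 2D five-point Laplacian
    [x, y] = meshgrid((1:J-1)*h);
    U  = sin(pi*x).*sin(pi*y);                           % illustrative initial data
    u  = U(:);
    for n = 1:100
        u = u + k*(L*u);                                 % explicit Euler step
    end
    surf(x, y, reshape(u, J-1, J-1))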

The notes from the lecture are here.

Matlab demo that shows the unsteady heat equation in two spatial dimensions. To run this, you'll also need the files get_initial_data.m and snowman_small.png.

Further reading

2D problems

See Morton and Mayers, Section 3.1 (and the rest of the chapter for some additional information about the challenges of 2D/3D that we didn't have time to address in this course), or LeVeque, Section 9.7.