Finite Difference Methods for Differential Equations

Unlike algebraic equations, whose solutions are numbers, the solutions of differential equations are functions.

Definition: Ordinary Differential Equations
An ordinary differential equation (ODE) is an equation that involves one or more derivatives of a function of a single variable.
Definition: Initial Value Problems

An initial value problem (IVP) is given by an ordinary differential equation of the form ${y^{\prime} \left(t\right) = f\left( y\left(t\right), t\right)}$ and initial value ${y\left(a\right)=y_a}$ for the unknown function $y\left(t\right)$.

Often ${a=0}$, so the initial condition reads ${y(0)=y_0}$.

Definition: One-Step Methods
A numerical method for approximating the solution to a differential equation is called a one-step method if the computed solution at time step $t_{n+1}$, denoted by $u_{n+1}$, depends only on the solution at the previous step, $u_n$, where $t_{n+1} = t_n + h$, for some small increment ${h=\Delta t}$.

Forward Euler Method

Definition:

Forward Euler approximates the derivative in an initial value problem using the forward difference approximation of the first-order derivative, giving

$$ u_{n+1} = u_n + h f_n $$

where $u_n = u \left(t_n \right)$ and ${f_{n} = f\left( u_{n}, t_{n} \right)}$. The local error per step is $\mathcal{O}\left( h^2 \right)$, which leads to a global error of $\mathcal{O}\left( h \right)$.
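The following is a minimal Python sketch of a Forward Euler solver; the function name and test problem are illustrative, not taken from the notebook linked below.

```python
import numpy as np

def forward_euler(f, y0, t0, T, h):
    """Approximate y'(t) = f(y, t), y(t0) = y0, on [t0, T] with Forward Euler."""
    n_steps = int(round((T - t0) / h))
    t = t0 + h * np.arange(n_steps + 1)
    u = np.empty(n_steps + 1)
    u[0] = y0
    for n in range(n_steps):
        u[n + 1] = u[n] + h * f(u[n], t[n])  # u_{n+1} = u_n + h f_n
    return t, u

# Example: y' = -2y, y(0) = 1, with exact solution y(t) = exp(-2t).
t, u = forward_euler(lambda y, t: -2.0 * y, y0=1.0, t0=0.0, T=1.0, h=0.1)
print(u[-1], np.exp(-2.0))  # approximation vs exact value at t = 1
```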

Backward Euler Method

Definition:

Backward Euler uses the backward finite difference approximation of the first-order derivative

$$ u_{n+1} = u_n + h f_{n+1} $$

where ${f_{n+1} = f\left( u_{n+1}, t_{n+1} \right) = f\left( u\left(t_{n+1} \right), t_{n+1} \right) }$. The local error per step is again $\mathcal{O}\left( h^2 \right)$, and the global error is $\mathcal{O}\left( h \right)$.
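Since $u_{n+1}$ appears on both sides, each step requires solving an equation that is, in general, nonlinear. Below is a minimal Python sketch using fixed-point iteration for this inner solve; a Newton iteration is more robust in practice, and all names are illustrative.

```python
import numpy as np

def backward_euler(f, y0, t0, T, h, n_iter=50):
    """Backward Euler for y' = f(y, t); the implicit equation
    u_{n+1} = u_n + h f(u_{n+1}, t_{n+1}) is solved by fixed-point iteration."""
    n_steps = int(round((T - t0) / h))
    t = t0 + h * np.arange(n_steps + 1)
    u = np.empty(n_steps + 1)
    u[0] = y0
    for n in range(n_steps):
        v = u[n]  # initial guess: the previous value
        for _ in range(n_iter):  # converges if h times the Lipschitz constant < 1
            v = u[n] + h * f(v, t[n + 1])
        u[n + 1] = v
    return t, u
```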

Crank-Nicolson Method

Definition:

The Crank-Nicolson method is given by

$$ u_{n+1} = u_n + \dfrac{h}{2}\left( f_n + f_{n+1} \right). $$

Heun’s Method

Definition:

Heun’s method is given by

$$ u_{n+1} = u_n + \dfrac{h}{2}\left( f_n + f\left( u_n + h f_n, t_{n+1} \right) \right). $$
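Because the stage value is predicted with a Forward Euler step, Heun's method is fully explicit. A minimal Python sketch, with illustrative names:

```python
import numpy as np

def heun(f, y0, t0, T, h):
    """Heun's method for y' = f(y, t), y(t0) = y0."""
    n_steps = int(round((T - t0) / h))
    t = t0 + h * np.arange(n_steps + 1)
    u = np.empty(n_steps + 1)
    u[0] = y0
    for n in range(n_steps):
        k1 = f(u[n], t[n])
        predictor = u[n] + h * k1        # Forward Euler predictor
        k2 = f(predictor, t[n + 1])      # slope at the predicted point
        u[n + 1] = u[n] + 0.5 * h * (k1 + k2)
    return t, u
```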
Definition:
A numerical method is said to be explicit if an approximation $u_{k+1}$ can be calculated directly from already computed values $u_i$ for ${i < k}$. Otherwise, the method is said to be implicit.

Often, implicit methods require, at each step $k+1$, the solution of a nonlinear equation for computing $u_{k+1}$.

Both the Forward Euler and Heun’s method are explicit. The Backward Euler and Crank-Nicolson methods are implicit.

Heun’s method can be interpreted as the Crank-Nicolson method with the Forward Euler approximation ${u_{n+1} \approx u_n + h f_n}$ substituted into the implicit $f_{n+1}$ term, which depends on $u_{n+1}$.

Alternatively, the solution to the differential equation can be written in integral form:

$$ y\left( t + h \right) = y\left( t \right) + \int_{t}^{t+h} f\left( y\left(\tau\right), \tau \right) \, \mathrm{d}\tau. $$

Thus, the Forward Euler method corresponds to the left Riemann sum, Backward Euler to the right Riemann sum, and Crank-Nicolson to the trapezoidal rule.
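For instance, the left Riemann sum approximation of the integral over $\left[t_n, t_n + h\right]$ recovers Forward Euler:

$$ \int_{t_n}^{t_n + h} f\left( y\left(\tau\right), \tau \right) \, \mathrm{d}\tau \approx h f\left( y\left(t_n\right), t_n \right) \quad \Longrightarrow \quad u_{n+1} = u_n + h f_n. $$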

Analysis of One-Step Methods

Definition:

A function $f$ is Hölder continuous if there exist real constants ${C \ge 0}$ and ${0 \lt \alpha \le 1}$ such that

$$ \left\| f\left(x\right) - f\left(y\right) \right\| \le C \left\| x - y \right\|^{\alpha} $$

for all $x$ and $y$. If ${\alpha=1}$, the function is said to be Lipschitz continuous.

Definition: Consistency Error

Any explicit one-step method has the form

$$ u_{n+1} = u_n + h \Phi\left( t_n, u_n, f_n, h \right) $$

with $\Phi$ the increment function. Substituting the exact solution, ${y_n = y\left(t_n\right)}$, into the scheme leaves a residual $\varepsilon_n$,

$$ y_{n+1} = y_n + h \Phi\left( t_n, y_n, f\left(y_n, t_n\right), h \right) + \varepsilon_n $$

which can be rearranged as

$$ \tau_n = \dfrac{y_{n+1} - y_n}{h} - \Phi\left( t_n, y_n, f\left(y_n, t_n\right), h \right) $$

where ${\varepsilon_n = h \tau_n}$ and ${\tau_n = \tau_n\left(h\right)}$ is defined as the local truncation error at step $n$.

The consistency error is given by ${\tau = \max_n | \tau_n | }$.

A method is said to be consistent if ${\lim\limits_{h \to 0} \Phi\left(t, y, f, h\right) = f\left(y, t\right)}$. This means the increment function is a good approximation of the right-hand side of the differential equation as the step size tends to zero.
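For example, for Forward Euler ${\Phi = f\left(u_n, t_n\right)}$, and a Taylor expansion of the exact solution, ${y\left(t_{n+1}\right) = y\left(t_n\right) + h y^{\prime}\left(t_n\right) + \frac{h^2}{2} y^{\prime\prime}\left(\xi_n\right)}$ for some ${\xi_n \in \left(t_n, t_{n+1}\right)}$, gives

$$ \tau_n = \dfrac{y_{n+1} - y_n}{h} - f\left(y_n, t_n\right) = y^{\prime}\left(t_n\right) + \dfrac{h}{2} y^{\prime\prime}\left(\xi_n\right) - y^{\prime}\left(t_n\right) = \dfrac{h}{2} y^{\prime\prime}\left(\xi_n\right) = \mathcal{O}\left(h\right), $$

so Forward Euler is consistent of order one.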

Definition: Convergence

The global truncation error is the difference between the exact solution, $y_n = y(t_n)$ and the approximate solution, $u_n$, at the time $t_n$,

$$ \tau_{g,n} = \left| y_n - u_n \right| $$

which is given by

$$ \tau_{g,n} = \left| y_n - \left( u_0 + h\Phi\left(t_0, u_0, f_0, h\right) +h\Phi\left(t_1, u_1, f_1, h\right) + \ldots + h\Phi\left(t_{n-1}, u_{n-1}, f_{n-1}, h\right) \right) \right|. $$

If ${\lim\limits_{h\rightarrow 0} u_{n+1} = y\left(t_{n+1} \right)}$, the method is said to be locally convergent.

The scheme is convergent if the global truncation error goes to zero as the step size goes to zero, that is ${\lim\limits_{h \to 0} \max\limits_{n} | \tau_{g,n} | = 0}$.

Definition: Order
A one-step method is of order $p \in \mathbb{N}$, if for all ${t \in \left[ 0, T \right]}$, the solution satisfies the condition that ${\tau\left(h\right) = \mathcal{O}\left( h^p \right) }$ as ${h \rightarrow 0}$.
Definition: Zero Stable Methods

A method of the form

$$ u_{n+1} = u_n + h \Phi\left( t_n, u_n, f_n, h \right) $$

with $\Phi$ the increment function, is called zero-stable if there exist both a maximal step size, ${h_{\mathrm{max}} \gt 0}$, and a constant, $C$, such that for all ${h \in \left(0, h_{\mathrm{max}} \right]}$ and every ${\varepsilon \gt 0}$ the following holds:

If the perturbations satisfy ${\left|\delta_n\right| \le \varepsilon}$ for all time-steps ${0 \le n \le N}$ and the perturbed scheme is given by

$$ z_{n+1} = z_n + h \Phi\left( t_n, z_n, f\left(z_n, t_n\right), h \right) + \delta_{n+1} $$

with ${z_0 = y_0 + \delta_0}$, then

$$ \left\| z_n - u_n \right\| \le C \varepsilon \quad \text{for} \quad 0 \le n \le N. $$

If a method is zero-stable, then this means that small perturbations in the computations lead to small perturbations in the approximations.

Theorem:
If the increment function $\Phi$ of a one-step numerical scheme is Lipschitz continuous in $u_n$ for any $h$ and $t_n$, then the one-step method is zero-stable.
Theorem:

If the increment function $\Phi$ is

  • Lipschitz continuous in $y_n$ for any $h$ and $t_{n+1}$,

  • the method is consistent, i.e. $\lim\limits_{h \rightarrow 0} \tau \left(h \right) = 0$

and

  • if $\left| y_0 - u_0 \right| \rightarrow 0$ as $h \rightarrow 0$,

then the global error has

$$ \lim \limits_{h \rightarrow 0} \left| y_n - u_n \right| = 0. $$

Also, if the method is of order $p$ and if ${\left| y_0 - u_0 \right| = \mathcal{O}\left(h^p\right)}$ as ${h\rightarrow 0}$, then the convergence is of order $p$.

Definition: Absolute Stability

A numerical scheme for approximating the solution to the linear differential equation ${y^{\prime}\left(t\right) = \lambda y\left(t\right)}$ with ${\lambda \in \mathbb{C}}$, ${\operatorname{Re}\left(\lambda\right) \lt 0}$, and initial condition $y_0$ is said to be absolutely stable for a fixed value of $h$ if ${\left|u_n\right| \rightarrow 0}$ as ${n \rightarrow \infty}$.

The region of absolute stability of a numerical method for ${y^{\prime}\left(t\right) = \lambda y\left(t\right)}$ is the set

$$ \lbrace h \lambda \in \mathbb{C} \quad \text{such that} \quad | u_n | \rightarrow 0 \quad \text{as} \quad t_n \rightarrow \infty \rbrace. $$

Zero stability considers when ${h \rightarrow 0}$, whereas absolute stability concerns behaviour as ${t_n \rightarrow \infty}$.
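For example, applying Forward Euler to ${y^{\prime} = \lambda y}$ gives ${u_{n+1} = \left(1 + h\lambda\right) u_n}$, so ${u_n = \left(1 + h\lambda\right)^n u_0}$ and the region of absolute stability is ${\left| 1 + h\lambda \right| \lt 1}$, a disc of radius one centred at $-1$. Backward Euler gives ${u_{n+1} = u_n / \left(1 - h\lambda\right)}$, with region ${\left| 1 - h\lambda \right| \gt 1}$, which contains the entire left half-plane.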

Lax Equivalence

Definition:

A problem is said to be well-posed if

  • a unique solution exists for any initial conditions and
  • the solution’s behaviour changes continuously with the initial conditions.

A problem which does not have these properties is said to be ill-posed.

Theorem:
The Lax Equivalence theorem (or Lax–Richtmyer theorem) is the analogue of the fundamental theorem of numerical analysis for differential equations. It states that, for a well-posed linear initial value problem, a consistent finite difference method is convergent if and only if it is stable.

Runge-Kutta And Multi-Step Schemes

If an ordinary differential equation is given by ${\dot{y}= f\left( y, t \right)}$, then a Runge-Kutta scheme takes the form

$$ u_{n+1} = u_n + h F\left( t_n, u_n, h; f \right) $$

where $F$ is an increment function given by

$$ F\left( t_n, u_n, h; f \right) = \sum\limits_{i=1}^s b_i k_i $$

with $k_i$ defined as

$$ k_i = f\left( u_n + h\sum_{j=1}^s a_{i, j} k_j, t_n + c_i h \right) \quad i=1, \ldots, s $$

where $s$ is referred to as the number of stages of the method.

Thus, an $s$-stage scheme is characterised by coefficients $b_i$, $c_i$ and $a_{i,j}$ for ${i, j = 1, \ldots, s}$.

If the matrix defined by the elements $a_{i,j}$ is strictly lower triangular, i.e. ${a_{i,j}=0}$ for all ${i \le j}$, then each $k_i$ can be computed explicitly in terms of the previously computed coefficients ${k_1, \ldots, k_{i-1}}$. Such schemes are called explicit; otherwise they are said to be implicit.

The components of a Runge-Kutta scheme are expressed in a Butcher array

$$ \begin{array}{c|cccc} c_1 & a_{1,1} & \ldots & a_{1,s} \\ \vdots & \vdots & \ddots & \vdots \\ c_s & a_{s,1} & \ldots & a_{s,s} \\ \hline & b_1 & \ldots & b_s \end{array} $$
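For example, Forward Euler is the one-stage scheme with ${c_1 = 0}$, ${a_{1,1} = 0}$ and ${b_1 = 1}$, while Heun's method has the two-stage array

$$ \begin{array}{c|cc} 0 & 0 & 0 \\ 1 & 1 & 0 \\ \hline & \frac{1}{2} & \frac{1}{2} \end{array} $$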

The local truncation error, which characterises the error induced at each step of the method, is defined as

$$ h \tau_{n+1}\left(h\right) = y\left(t_{n+1} \right) - y\left(t_{n} \right) - h F\left( t_n, y\left(t_n\right), h; f \right). $$

It can be shown that ${\tau\left(h\right) = \max\left| \tau_{n+1}\left(h\right) \right| \rightarrow 0}$ as ${h \rightarrow 0}$ if and only if ${\sum_{i=1}^s b_i = 1}$.

A Runge-Kutta method is of order ${p \ge 1}$ if ${\tau\left( h \right) = \mathcal{O}\left( h^p \right)}$ as ${h \rightarrow 0}$.

The order of an $s$-stage explicit Runge-Kutta method cannot be greater than $s$. Additionally, there does not exist an $s$-stage explicit Runge-Kutta method of order $s$ if ${s \ge 5}$.

The order of an $s$-stage implicit Runge-Kutta method cannot be greater than $2s$.

Runge-Kutta 4

The most common form of the Runge-Kutta method is the fourth order Runge-Kutta method (RK4). It takes the form:

$$ u_{n+1} = u_n + \dfrac{h}{6} \left( k_1 + 2k_2 + 2 k_3 + k_4 \right) $$

where

$$ \begin{align*} k_1 & = f\left( u_n, t_n \right), \\ k_2 & = f\left( u_n + \dfrac{h}{2} k_1, t_n + \dfrac{h}{2} \right), \\ k_3 & = f\left( u_n + \dfrac{h}{2} k_2, t_n + \dfrac{h}{2} \right), \\ k_4 & = f\left( u_n + h k_3, t_n + h \right). \end{align*} $$
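A minimal Python sketch of RK4, with illustrative names; note that the weights ${b = \left( \frac{1}{6}, \frac{1}{3}, \frac{1}{3}, \frac{1}{6} \right)}$ sum to one, as consistency requires.

```python
import numpy as np

def rk4(f, y0, t0, T, h):
    """Classical fourth-order Runge-Kutta for y' = f(y, t), y(t0) = y0."""
    n_steps = int(round((T - t0) / h))
    t = t0 + h * np.arange(n_steps + 1)
    u = np.empty(n_steps + 1)
    u[0] = y0
    for n in range(n_steps):
        k1 = f(u[n], t[n])
        k2 = f(u[n] + 0.5 * h * k1, t[n] + 0.5 * h)
        k3 = f(u[n] + 0.5 * h * k2, t[n] + 0.5 * h)
        k4 = f(u[n] + h * k3, t[n] + h)
        u[n + 1] = u[n] + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return t, u
```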
Example:

An .ipynb notebook with an example of solvers for ODEs can be accessed online [here].

It can be downloaded from [here] as a Python file or downloaded as a notebook from [here].

Partial Differential Equations

Definition:
A partial differential equation (PDE) is a relation involving an unknown function of several variables and its partial derivatives with respect to these variables.

A partial differential equation is said to be linear if it only contains linear terms in the unknown and its derivatives. For example, a second-order linear partial differential equation for a function ${u=u(x,t)}$ takes the form

$$ a_1(x,t) u_{xx} + a_2(x,t) u_{xt} + a_3(x,t) u_{tt} + a_4(x,t) u_x + a_5(x,t) u_t + a_6(x,t) u = f(x,t), $$

where $u_{xx} = \dfrac{\partial^2 u}{\partial x^2}$, etc.

For finite-difference schemes, all partial derivatives must be approximated by discretized differential operators.
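For example, on a uniform grid ${x_i = i \Delta x}$, ${t_j = j \Delta t}$, writing ${u_{i,j} \approx u\left(x_i, t_j\right)}$, the central difference approximations

$$ u_x \approx \dfrac{u_{i+1,j} - u_{i-1,j}}{2 \Delta x}, \qquad u_{xx} \approx \dfrac{u_{i+1,j} - 2 u_{i,j} + u_{i-1,j}}{\Delta x^2} $$

replace the corresponding partial derivatives in the equation.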