Finite Difference Methods for Differential Equations
Solutions are functions.
An initial value problem (IVP) is given by an ordinary differential equation of the form ${y^{\prime} \left(t\right) = f\left( y\left(t\right), t\right)}$ and initial value ${y\left(a\right)=y_a}$ for the unknown function $y\left(t\right)$.
Often ${a=0}$, so the initial condition reads ${y(0)=y_0}$.
Forward Euler Method
$$ u_{n+1} = u_n + h f_n, $$where ${u_n = u \left(t_n \right)}$ and ${f_{n} = f\left( u_{n}, t_{n} \right)}$. The local error per step is $\mathcal{O}\left( h^2 \right)$, so the global error is $\mathcal{O}\left( h \right)$.
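As an illustrative sketch (the function name and example problem are not from the notes), the Forward Euler update can be implemented as follows:

```python
def forward_euler(f, y0, t0, t1, n):
    """Approximate y(t1) for y'(t) = f(y, t), y(t0) = y0, using n equal steps."""
    h = (t1 - t0) / n
    t, u = t0, y0
    for _ in range(n):
        u = u + h * f(u, t)   # u_{n+1} = u_n + h f(u_n, t_n)
        t += h
    return u

# Example: y' = -y, y(0) = 1, whose exact solution is exp(-t).
approx = forward_euler(lambda u, t: -u, 1.0, 0.0, 1.0, 1000)
```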
Backward Euler Method
$$ u_{n+1} = u_n + h f_{n+1}, $$where ${f_{n+1} = f\left( u_{n+1}, t_{n+1} \right)}$. The local error per step is $\mathcal{O}\left( h^2 \right)$, so the global error is $\mathcal{O}\left( h \right)$.
Crank-Nicolson Method
$$ u_{n+1} = u_n + \dfrac{h}{2} \left( f_n + f_{n+1} \right), $$with $f_n$ and $f_{n+1}$ as above.
Heun’s Method
$$ u_{n+1} = u_n + \dfrac{h}{2} \left( f_n + f\left( u_n + h f_n, t_{n+1} \right) \right). $$
Often, implicit methods require, at each step $n+1$, the solution of a nonlinear equation to compute $u_{n+1}$.
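As an illustration (the fixed-point approach and all names are assumptions, not from the notes), a scalar Backward Euler step can resolve the implicit equation by fixed-point iteration, which converges for sufficiently small $h$ when $f$ is Lipschitz in $u$:

```python
def backward_euler(f, y0, t0, t1, n, iters=50):
    """Backward Euler: each step solves u_{n+1} = u_n + h f(u_{n+1}, t_{n+1})
    by fixed-point iteration (assumes h small enough for convergence)."""
    h = (t1 - t0) / n
    t, u = t0, y0
    for _ in range(n):
        t_next = t + h
        v = u                        # initial guess: previous value
        for _ in range(iters):
            v = u + h * f(v, t_next)
        t, u = t_next, v
    return u

# Same test problem: y' = -y, y(0) = 1.
approx = backward_euler(lambda u, t: -u, 1.0, 0.0, 1.0, 1000)
```

In practice a Newton iteration is often preferred over fixed-point iteration for stiff problems, since it converges for larger step sizes.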
Both the Forward Euler and Heun’s method are explicit. The Backward Euler and Crank-Nicolson methods are implicit.
Heun’s method can be interpreted as the Crank-Nicolson method with the approximation ${u_{n+1} \approx u_n + h f_n}$ substituted into the $f_{n+1}$ term, which would otherwise depend implicitly on $u_{n+1}$.
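A minimal sketch of Heun's method under this interpretation (illustrative code, not from the notes): a Forward Euler predictor followed by a trapezoidal corrector, keeping the scheme explicit.

```python
def heun(f, y0, t0, t1, n):
    """Heun's method: Forward Euler predictor, trapezoidal corrector."""
    h = (t1 - t0) / n
    t, u = t0, y0
    for _ in range(n):
        k1 = f(u, t)
        predictor = u + h * k1            # u_{n+1} ~ u_n + h f_n
        k2 = f(predictor, t + h)
        u = u + h / 2 * (k1 + k2)         # trapezoidal average of the slopes
        t += h
    return u

# y' = -y, y(0) = 1: second-order accuracy, so much smaller error than Euler.
approx = heun(lambda u, t: -u, 1.0, 0.0, 1.0, 100)
```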
These methods follow from integrating the differential equation over one step:
$$ y\left( t + h \right) = y\left( t \right) + \int_{t}^{t+h} f\left( y\left(\tau\right), \tau \right) \, \mathrm{d}\tau. $$Thus, the Forward Euler method corresponds to the left Riemann sum, Backward Euler to the right Riemann sum and Crank-Nicolson to the trapezoidal rule.
Analysis of One-Step Methods
A function $f$ is said to be Hölder continuous with exponent ${\alpha \in \left(0, 1\right]}$ if there exists a constant ${L \gt 0}$ such that
$$ \left\| f\left(x\right) - f\left(y\right) \right\| \le L \left\| x - y \right\|^{\alpha} $$for all $x$ and $y$. If ${\alpha=1}$ the function is said to be Lipschitz continuous.
A general one-step method takes the form
$$ u_{n+1} = u_n + h \Phi\left( t_n, u_n, f_n, h \right), $$with $\Phi$ the increment function. Inserting the exact solution of the differential equation, ${y_n = y\left(t_n \right)}$, into the scheme gives
$$ y_{n+1} = y_n + h \Phi\left( t_n, y(t_n), f_n, h \right) + \varepsilon_n $$which can be written as
$$ \tau_n = \dfrac{y_{n+1} - y_n}{h} - \Phi\left( t_n, y(t_n), f_n, h \right) $$where ${\varepsilon_n = h \tau_n}$ and ${\tau_n = \tau_n\left(h\right)}$ is called the local truncation error at step $n$.
The consistency error is given by ${\tau = \max_n | \tau_n | }$.
A method is said to be consistent if $\lim\limits_{h \to 0} \Phi = f$. This means the increment function is a good approximation to the differential equation as the step size tends to zero.
If $\lim\limits_{h\rightarrow 0} u_{n+1} = y\left(t_{n+1} \right)$ the method is said to be locally convergent.
The scheme is convergent if the global truncation error goes to zero as the step size goes to zero, that is ${\lim\limits_{h \to 0} \max\limits_{n} | \tau_{g,n} | = 0}$.
A one-step method
$$ u_{n+1} = u_n + h \Phi\left( t_n, u_n, f_n, h \right), $$with $\Phi$ the increment function, is called zero-stable if there exist both a maximal step size $h_{\mathrm{max}}$ and a constant $C$ such that the following holds for all ${h \in \left(0, h_{\mathrm{max}} \right]}$: if, for some ${\varepsilon \gt 0}$, perturbations $\delta_n$ with ${\max_n \left\| \delta_n \right\| \le \varepsilon}$ produce the sequence
$$ z_{n+1} = z_n + h \Phi\left( t_n, z_n, f_n\left(z_n, t_n\right), h \right) + \delta_{n+1} $$with ${z_0 = y_0 +\delta_0}$, then
$$ \left\| z_n - u_n \right\| \le C \varepsilon \quad \text{for} \quad 0 \le n \le N. $$If a method is zero-stable, small perturbations in the computations therefore lead to only small perturbations in the approximations.
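An illustrative experiment (the perturbation size and test problem are assumptions, not from the notes): injecting a perturbation $\delta$ into the initial value and into every Forward Euler step should change the result by no more than a constant multiple of $\delta$.

```python
def euler_with_noise(f, y0, t0, t1, n, delta):
    """Forward Euler where the initial value and every step are
    perturbed by delta, mimicking the perturbed sequence z_n."""
    h = (t1 - t0) / n
    t, z = t0, y0 + delta
    for _ in range(n):
        z = z + h * f(z, t) + delta
        t += h
    return z

f = lambda u, t: -u
unperturbed = euler_with_noise(f, 1.0, 0.0, 1.0, 100, 0.0)
perturbed = euler_with_noise(f, 1.0, 0.0, 1.0, 100, 1e-8)
gap = abs(perturbed - unperturbed)   # stays O(delta) for a zero-stable method
```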
If

- the increment function $\Phi$ is Lipschitz continuous in $y_n$ for any $h$ and $t_{n+1}$,
- the method is consistent, i.e. $\lim\limits_{h \rightarrow 0} \tau \left(h \right) = 0$, and
- ${\left| y_0 - u_0 \right| \rightarrow 0}$ as ${h \rightarrow 0}$,

then the method is convergent.
Also, if the method is of order $p$ and if ${\left| y_0 - u_0 \right| = \mathcal{O}\left(h^p\right)}$ as ${h\rightarrow 0}$, then the convergence is of order $p$.
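This can be checked numerically. The sketch below (illustrative code, not from the notes) estimates the observed order of the Forward Euler method on ${y^{\prime} = -y}$ by halving the step size; for a first-order method, halving $h$ should roughly halve the error.

```python
import math

def forward_euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, u = t0, y0
    for _ in range(n):
        u, t = u + h * f(u, t), t + h
    return u

# Errors against the exact solution exp(-1) for successive halvings of h.
exact = math.exp(-1.0)
errors = [abs(forward_euler(lambda u, t: -u, 1.0, 0.0, 1.0, n) - exact)
          for n in (100, 200, 400)]
# Observed order p from consecutive error ratios: p = log2(e_h / e_{h/2}).
observed_orders = [math.log2(errors[i] / errors[i + 1]) for i in (0, 1)]
```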
A numerical scheme for approximating the solution to the linear differential equation ${y^{\prime}\left(t\right) = \lambda y\left(t\right)}$ with ${\lambda \in \mathbb{C}}$ and initial condition ${y_0}$ is said to be absolutely stable if ${\left|u_n\right| \rightarrow 0}$ as ${n \rightarrow \infty}$, when ${\operatorname{Re}\left(\lambda\right) \lt 0}$ for a fixed value of $h$.
The region of absolute stability of a numerical method for ${y^{\prime}\left(t\right) = \lambda y\left(t\right)}$ is the set
$$ \lbrace h \lambda \in \mathbb{C} \quad \text{such that} \quad | u_n | \rightarrow 0 \quad \text{as} \quad t_n \rightarrow \infty \rbrace. $$Zero stability considers when ${h \rightarrow 0}$, whereas absolute stability concerns behaviour as ${t_n \rightarrow \infty}$.
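For example, Forward Euler applied to ${y^{\prime} = \lambda y}$ gives ${u_{n+1} = \left(1 + h\lambda\right) u_n}$, so its region of absolute stability is the disc ${\left|1 + h\lambda\right| \lt 1}$ centred at $-1$. A small illustrative check (not from the notes):

```python
# Iterates u_{n+1} = (1 + h*lam) * u_n decay to zero iff |1 + h*lam| < 1.
def in_euler_stability_region(hlam):
    return abs(1 + hlam) < 1

inside = in_euler_stability_region(-1.0)          # lam = -2, h = 0.5
outside = in_euler_stability_region(-3.0)         # lam = -2, h = 1.5
also_complex = in_euler_stability_region(-1 + 0.5j)  # complex lam works too
```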
Lax Equivalence
A problem is said to be well-posed if
- a unique solution exists for any initial conditions and
- the solution’s behaviour changes continuously with the initial conditions.
A problem which does not have these properties is said to be ill-posed. The Lax equivalence theorem states that, for a well-posed linear initial value problem, a consistent finite-difference scheme is convergent if and only if it is stable.
Runge-Kutta And Multi-Step Schemes
If an ordinary differential equation is given by ${\dot{y}= f\left( y, t \right)}$, then a Runge-Kutta scheme takes the form
$$ u_{n+1} = u_n + h F\left( t_n, u_n, h; f \right) $$where $F$ is an increment function given by
$$ F\left( t_n, u_n, h; f \right) = \sum\limits_{i=1}^s b_i k_i $$with $k_i$ defined as
$$ k_i = f\left( u_n + h\sum_{j=1}^s a_{i, j} k_j, t_n + c_i h \right) \quad i=1, \ldots, s $$where $s$ is referred to as the number of stages of the method.
Thus, an $s$-stage scheme is characterised by the coefficients $b_i$, $c_i$ and $a_{i,j}$ for ${i, j = 1,\ldots, s}$.
If the matrix defined by the elements $a_{i,j}$ is strictly lower triangular, i.e. ${a_{i,j}=0}$ for all ${i \le j}$, then each $k_i$ can be computed explicitly in terms of the previously computed stages ${k_1, \ldots, k_{i-1}}$. Such schemes are called explicit; otherwise they are said to be implicit.
The coefficients are commonly collected in a Butcher tableau:
$$ \begin{array}{c|cccc} c_1 & a_{1,1} & \ldots & a_{1,s} \\ \vdots & \vdots & \ddots & \vdots \\ c_s & a_{s,1} & \ldots & a_{s,s} \\ \hline & b_1 & \ldots & b_s \end{array} $$The local truncation error is defined by
$$ h \tau_{n+1}\left(h\right) = y\left(t_{n+1} \right) - y\left(t_{n} \right) - h F\left( t_n, y\left(t_n\right), h; f \right). $$It can be shown that ${\tau\left(h\right) = \max\left| \tau_{n+1}\left(h\right) \right| \rightarrow 0}$ as ${h \rightarrow 0}$ if and only if ${\sum_{i=1}^s b_i = 1}$.
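A sketch of a generic explicit Runge-Kutta step driven by such a coefficient tableau (illustrative code; the helper name is an assumption), shown here with Heun's method written as a 2-stage example:

```python
def explicit_rk_step(f, u, t, h, A, b, c):
    """One step of an explicit Runge-Kutta method; A is strictly lower
    triangular, so each k_i uses only the earlier stages k_1, ..., k_{i-1}."""
    s = len(b)
    k = []
    for i in range(s):
        y = u + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(y, t + c[i] * h))
    return u + h * sum(b[i] * k[i] for i in range(s))

# Heun's method as a 2-stage tableau: c = (0, 1), b = (1/2, 1/2), a_{2,1} = 1.
A = [[0.0, 0.0],
     [1.0, 0.0]]
b = [0.5, 0.5]
c = [0.0, 1.0]
u1 = explicit_rk_step(lambda y, t: -y, 1.0, 0.0, 0.1, A, b, c)
```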
A Runge-Kutta method is of order ${p \ge 1}$ if ${\tau\left( h \right) = \mathcal{O}\left( h^p \right)}$ as ${h \rightarrow 0}$.
The order of an $s$-stage explicit Runge-Kutta method cannot be greater than $s$. Additionally, there does not exist an $s$-stage explicit Runge-Kutta method of order $s$ if ${s \ge 5}$.
The order of an $s$-stage implicit Runge-Kutta method cannot be greater than $2s$.
Runge-Kutta 4
The most common form of the Runge-Kutta method is the fourth order Runge-Kutta method (RK4). It takes the form:
$$ u_{n+1} = u_n + \dfrac{h}{6} \left( k_1 + 2k_2 + 2 k_3 + k_4 \right) $$with
$$ \begin{align*} k_1 & = f\left( u_n, t_n \right), \\ k_2 & = f\left( u_n + \dfrac{h}{2} k_1, t_n + \dfrac{h}{2} \right), \\ k_3 & = f\left( u_n + \dfrac{h}{2} k_2, t_n + \dfrac{h}{2} \right), \\ k_4 & = f\left( u_n + h k_3, t_n + h \right). \end{align*} $$

Partial Differential Equations
A partial differential equation is said to be linear if it only contains linear terms in the unknown and its derivatives. For example, a second-order linear partial differential equation for a function ${u=u(x,t)}$ takes the form
$$ a_1(x,t) u_{xx} + a_2(x,t) u_{xt} + a_3(x,t) u_{tt} + a_4(x,t) u_x + a_5(x,t) u_t + a_6(x,t) u = f(x,t), $$where $u_{xx} = \dfrac{\partial^2 u}{\partial x^2}$, etc.
For finite-difference schemes, all partial derivatives must be approximated by discretized differential operators.
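As an illustrative example (the heat equation and all names here are assumptions, not from the notes), an explicit scheme for ${u_t = u_{xx}}$ replaces $u_{xx}$ by the central difference ${\left( u_{j-1} - 2u_j + u_{j+1} \right) / \Delta x^2}$ and $u_t$ by a forward difference in time:

```python
def heat_step(u, dx, dt):
    """One explicit time step for u_t = u_xx on a uniform grid;
    boundary values are held fixed. Stability requires dt <= dx**2 / 2."""
    new = u[:]
    for j in range(1, len(u) - 1):
        new[j] = u[j] + dt / dx**2 * (u[j - 1] - 2 * u[j] + u[j + 1])
    return new

# Diffuse an initial spike; dt/dx**2 = 0.4 satisfies the stability bound.
dx, dt = 0.1, 0.004
u = [0.0] * 11
u[5] = 1.0
for _ in range(100):
    u = heat_step(u, dx, dt)
```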