Principles of Numerical Mathematics

Find $x$ such that $F(x,d)=0$, where $d$ is a set of data and $F$ is a functional relationship between $x$ and $d$.

Well-Posed Problems

Definition: Well-Posed Problems

A problem is said to be well-posed if

  • a solution exists,
  • the solution is unique,
  • the solution depends continuously on the data.

A problem which does not have these properties is said to be ill-posed.

Condition Number

Definition: Relative and Absolute Condition Numbers
$$ \begin{equation*} K ( d ) = \sup\limits_{\delta d \in \mathcal{D}} \dfrac{ \left\Vert \delta x \right\Vert / \left\Vert x \right\Vert}{\left\Vert \delta d \right\Vert / \left\Vert d \right\Vert} \end{equation*} $$$$ \begin{equation*} K_{\mathrm{abs}} ( d ) = \sup\limits_{\delta d \in \mathcal{D}}\dfrac{ \left\Vert \delta x \right\Vert }{\left\Vert \delta d \right\Vert} \end{equation*} $$
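Not from the source, a minimal sketch: the relative condition number can be estimated numerically by applying a small relative perturbation to the data. For the assumed example problem $x(d) = \sqrt{d}$, the exact value is $K(d) = 1/2$ for every $d > 0$:

```python
import math

def estimate_relative_condition(f, d, delta=1e-8):
    # Finite-perturbation estimate of K(d): ratio of the relative change
    # in the solution to the relative change in the data.
    x = f(d)
    rel_change_x = abs(f(d + delta) - x) / abs(x)
    rel_change_d = abs(delta) / abs(d)
    return rel_change_x / rel_change_d

# For x(d) = sqrt(d), the exact relative condition number is
# |x'(d)| * |d| / |x(d)| = 1/2.
K = estimate_relative_condition(math.sqrt, 4.0)
print(K)  # ≈ 0.5
```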

Stability

Consider a well-posed problem, and construct a sequence of approximate problems, with approximate solutions and data, i.e. $F_n (x_n, d_n)=0$.

Definition: Consistency
$$ \begin{equation*} \lim\limits_{n\rightarrow\infty} F_n (x,d) = F(x,d). \end{equation*} $$

The method is strongly consistent if $F_n(x,d)=0$ for all $n\ge0$.

Definition: Relative, Absolute, and Asymptotic Condition Numbers
$$ \begin{equation*} K_n \left( d_n \right) = \sup\limits_{\delta d_n \in \mathcal{D}_n } \dfrac{ \left\Vert \delta x_n \right\Vert / \left\Vert x_n \right\Vert }{\left\Vert \delta d_n \right\Vert / \left\Vert d_n \right\Vert} \end{equation*} $$

and

$$ K_{n,\mathrm{abs}} \left( d_n \right) = \sup \limits_{\delta d_n \in \mathcal{D}_n} \dfrac{ \left\Vert \delta x_n \right\Vert }{ \left\Vert \delta d_n \right\Vert } $$

then the asymptotic condition numbers are

$$K^{\mathrm{num}} (d) = \lim \limits_{k\rightarrow \infty} \sup\limits_{n \le k} K_n \left( d_n \right).$$$$ \begin{equation*} K_{\mathrm{abs}}^{\mathrm{num}} \left( d \right) = \lim \limits_{k\rightarrow \infty} \sup \limits_{n \le k} K_{n, \mathrm{abs}} \left( d_n \right). \end{equation*} $$
Definition: Convergence
$$ \begin{equation*} \forall \varepsilon > 0, \quad \exists\, n_0(\varepsilon),\ \delta(n_0,\varepsilon) > 0 \quad \text{such that} \quad \forall n > n_0,\ \forall \left\Vert \delta d_n \right\Vert \le \delta : \quad \left\Vert x(d) - x_n\left( d + \delta d_n \right) \right\Vert \le \varepsilon \end{equation*} $$
Theorem: Lax–Richtmyer (Equivalence Theorem)
A consistent numerical method is convergent if and only if it is stable.
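A classic illustration of the role of stability (not from the source): the integrals $I_n = \int_0^1 x^n/(x+10)\,dx$ satisfy the recurrence $I_n = 1/n - 10\, I_{n-1}$ with $I_0 = \ln 1.1$. Running the recurrence forwards is consistent but unstable, while running it backwards from a crude starting guess is stable:

```python
import math

# Forward recurrence: each step multiplies the rounding error in I_0 by -10,
# so after 20 steps the answer is dominated by amplified rounding error.
def forward(n):
    I = math.log(1.1)          # I_0, correct to machine precision
    for k in range(1, n + 1):
        I = 1.0 / k - 10.0 * I
    return I

# Backward recurrence: starting from a crude guess for I_{n+extra} and
# iterating down divides the initial error by 10 at every step.
def backward(n, extra=20):
    I = 0.0                    # crude guess for I_{n+extra}
    for k in range(n + extra, n, -1):
        I = (1.0 / k - I) / 10.0
    return I

print(forward(20))   # wildly wrong: rounding error amplified by ~10^20
print(backward(20))  # accurate: 1/(11*21) < I_20 < 1/(10*21)
```

Since $10 \le x + 10 \le 11$ on $[0,1]$, the true value satisfies $1/(11(n+1)) \le I_n \le 1/(10(n+1))$, which makes the failure of the forward recurrence easy to see.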

Matrix Analysis

Theorem:

Let $A \in \mathbb{R}^{n \times n}$, then

  1. $\lim\limits_{k \rightarrow \infty}A^k =0 \Leftrightarrow \rho \left( A \right) < 1$, where $\rho\left(A \right)$, called the spectral radius of $A$, is the largest absolute value of the eigenvalues of $A$.

  2. The geometric series $\sum\limits_{k=0}^{\infty}A^k$ is convergent if and only if $\rho \left( A \right) < 1$, in which case the sum is given by

$$ \begin{equation*} \sum\limits_{k=0}^{\infty}A^k = \left( I - A \right)^{-1} \end{equation*} $$
  3. Thus, if $\rho \left( A \right) < 1$, the matrix $I-A$ is invertible and $$ \begin{equation*} \dfrac{1}{1+\left\Vert A \right\Vert} \le \left\Vert\left( I - A \right)^{-1}\right\Vert \le \dfrac{1}{1-\left\Vert A \right\Vert} \end{equation*} $$ where $\left\Vert \cdot \right\Vert$ is an induced matrix norm such that $\left\Vert A \right\Vert <1$.
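A quick numerical check of the geometric-series statement (a sketch with an assumed $2\times 2$ matrix): for $\rho(A) < 1$ the partial sums of $\sum_k A^k$ approach $(I - A)^{-1}$:

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.5, 0.1],
     [0.2, 0.4]]               # eigenvalues 0.6 and 0.3, so rho(A) < 1

S = [[1.0, 0.0], [0.0, 1.0]]   # running partial sum, starts at I (k = 0 term)
P = [[1.0, 0.0], [0.0, 1.0]]   # running power A^k
for _ in range(200):
    P = matmul(P, A)
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]

# Explicit inverse of I - A via the 2x2 cofactor formula.
M = [[0.5, -0.1], [-0.2, 0.6]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
inv = [[M[1][1] / det, -M[0][1] / det],
       [-M[1][0] / det, M[0][0] / det]]

print(S)    # agrees with inv to machine precision
```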
Theorem:
Let $Ax = b$ and consider the perturbed system
$$ \begin{equation*} \left( A + \delta A \right)\left( x + \delta x \right) = b + \delta b, \end{equation*} $$ with $\left\Vert A^{-1} \right\Vert_2 \left\Vert \delta A \right\Vert_2 < 1$. Then $$ \begin{equation*} \dfrac{\left\Vert \delta x \right\Vert_2}{\left\Vert x \right\Vert_2} \le \dfrac{K(A)}{1- K(A) \left\Vert \delta A \right\Vert_2 / \left\Vert A \right\Vert_2} \left(\dfrac{\left\Vert \delta b \right\Vert_2}{\left\Vert b \right\Vert_2} + \dfrac{\left\Vert \delta A \right\Vert_2}{\left\Vert A \right\Vert_2} \right) \end{equation*} $$
Theorem:
Let $Ax = b$ and perturb only the data $b$, i.e.
$$ \begin{equation*} A \left( x + \delta x \right) = b + \delta b. \end{equation*} $$ Then $$ \begin{equation*} \dfrac{1}{K(A)} \dfrac{\left\Vert \delta b \right\Vert}{\left\Vert b \right\Vert} \le \dfrac{\left\Vert \delta x \right\Vert}{\left\Vert x \right\Vert} \le K(A) \dfrac{\left\Vert \delta b \right\Vert}{\left\Vert b \right\Vert} \end{equation*} $$
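The two-sided bound can be checked numerically; here is a sketch with an assumed $2\times 2$ system, using the induced infinity norm so that $K(A) = \Vert A \Vert_\infty \Vert A^{-1} \Vert_\infty$:

```python
def inv2(M):
    # 2x2 cofactor inverse
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def solve2(M, v):
    # solve M y = v via the explicit inverse (fine for a 2x2 demo)
    Mi = inv2(M)
    return [Mi[0][0] * v[0] + Mi[0][1] * v[1],
            Mi[1][0] * v[0] + Mi[1][1] * v[1]]

def mat_norm(M):
    # induced infinity norm: maximum absolute row sum
    return max(abs(r[0]) + abs(r[1]) for r in M)

def vec_norm(v):
    return max(abs(c) for c in v)

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
db = [1e-6, -1e-6]             # perturbation of b only

K = mat_norm(A) * mat_norm(inv2(A))
x = solve2(A, b)
dx = [xp - xi for xp, xi in zip(solve2(A, [b[0] + db[0], b[1] + db[1]]), x)]

lhs = vec_norm(dx) / vec_norm(x)
rel_db = vec_norm(db) / vec_norm(b)
print((1.0 / K) * rel_db <= lhs <= K * rel_db)  # True
```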
Theorem:
Suppose $\left\Vert \delta A \right\Vert \le \gamma \left\Vert A \right\Vert$ and $\left\Vert \delta b \right\Vert \le \gamma \left\Vert b \right\Vert$ for some $\gamma > 0$ with $\gamma K(A) < 1$. Then
$$ \begin{equation*} \dfrac{\left\Vert x+ \delta x \right\Vert}{\left\Vert x \right\Vert} \le \dfrac{1+\gamma K(A)}{1-\gamma K(A)} \end{equation*} $$$$ \begin{equation*} \dfrac{\left\Vert \delta x \right\Vert}{\left\Vert x \right\Vert} \le \dfrac{2\gamma K(A)}{1-\gamma K(A)} \end{equation*} $$
Theorem:
Let $C$ be an approximation of $A^{-1}$ and let $R = AC - I$ be its residual. If $\left\Vert R \right\Vert < 1$, then $A$ and $C$ are nonsingular and
$$ \begin{equation*} \bigl\Vert A^{-1} \bigr\Vert \le \dfrac{\left\Vert C \right\Vert}{1 - \left\Vert R \right\Vert} \end{equation*} $$$$ \begin{equation*} \dfrac{\left\Vert R \right\Vert}{\left\Vert A \right\Vert} \le {\left\Vert {C - A^{-1}} \right\Vert} \le \dfrac{\left\Vert C \right\Vert \left\Vert R \right\Vert }{1 - \left\Vert R \right\Vert} \end{equation*} $$ Moreover, regarding $C^{-1}$ as a perturbation of $A$, $$ \begin{equation*} \delta A = C^{-1} - A = -(AC - I)C^{-1} = -R C^{-1} \end{equation*} $$ so that $$ \begin{equation*} \begin{aligned} \left\Vert \delta A \right\Vert & \le \left\Vert R \right\Vert \left\Vert C^{-1} \right\Vert \newline & \le \dfrac{ \left\Vert R \right\Vert \left\Vert A \right\Vert }{1 - \left\Vert R \right\Vert} \end{aligned} \end{equation*} $$
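The bound on $\Vert C - A^{-1} \Vert$ can also be verified numerically; a sketch with an assumed $2\times 2$ example in the infinity norm:

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inf_norm(M):
    # induced infinity norm: maximum absolute row sum
    return max(abs(M[i][0]) + abs(M[i][1]) for i in range(2))

A = [[2.0, 1.0], [1.0, 3.0]]
A_inv = [[0.6, -0.2], [-0.2, 0.4]]   # exact inverse (det(A) = 5)
C = [[0.61, -0.2], [-0.2, 0.41]]     # approximate inverse

AC = matmul(A, C)
R = [[AC[i][j] - (1.0 if i == j else 0.0) for j in range(2)]
     for i in range(2)]              # residual R = AC - I; ||R|| < 1 here

err = inf_norm([[C[i][j] - A_inv[i][j] for j in range(2)] for i in range(2)])
bound = inf_norm(C) * inf_norm(R) / (1.0 - inf_norm(R))
print(err <= bound)  # True
```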