Interpolation

Definition: Interpolating Functions

Given a set of points $p_0$, $\ldots$, ${p_n \in \mathbb{R}}$ and corresponding nodes $u_0$, $\ldots$, ${u_n \in \mathbb{R}}$, a function ${f : \mathbb{R} \rightarrow \mathbb{R}}$ with ${f(u_i) = p_i}$ for ${i = 0, \ldots, n}$ is an interpolating function.

This can be generalised to higher dimensions, i.e. ${f : \mathbb{R} \rightarrow \mathbb{R}^N }$.

Definition: Polynomial Interpolation

If the interpolating function is a polynomial, it can be written as a linear combination of basis polynomials $\varphi_i$,

$$ p\left( u \right) = \sum\limits_{i=0}^{n} \alpha_i \varphi_i \left( u \right) $$

so that at every node $u_j$, the polynomial must satisfy the interpolation condition ${p\left( u_j \right) = \sum\limits_{i=0}^{n} \alpha_i \varphi_i \left( u_j \right) = p_j}$. Thus, solving for the values of $\alpha_i$ which fit the interpolating function to the data leads to a linear system of the form

$$ \Phi \alpha = p $$

where $p$ is the vector of data values, with entries ${p_j = p\left( u_j \right)}$, and $\Phi$ is the collocation matrix, whose entries are given by the basis polynomials evaluated at the node points

$$ \begin{equation*} \Phi = \left( \begin{array}{cccc} \varphi_0 (u_0) & \varphi_1 (u_0) & \cdots & \varphi_n (u_0) \\ \vdots & & & \vdots \\ \varphi_0 (u_n) & \cdots & \cdots & \varphi_n (u_n) \end{array} \right). \end{equation*} $$

Thus, to fit the polynomial to the data, solve ${\alpha = \Phi^{-1} p}$.

The collocation matrix is invertible if and only if the functions $\varphi_i$ are linearly independent.
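
A minimal Python sketch of this system, assuming the monomial basis ${\varphi_i(u) = u^i}$ (an illustrative choice, in which case $\Phi$ is the Vandermonde matrix):

```python
import numpy as np

# Sketch with the monomial basis phi_i(u) = u**i (illustrative choice),
# so the collocation matrix is the Vandermonde matrix.
u = np.array([0.0, 1.0, 2.0, 3.0])   # nodes u_j (illustrative data)
p = np.array([1.0, 2.0, 0.0, 5.0])   # data values p_j

Phi = np.vander(u, increasing=True)  # Phi[j, i] = u_j**i
alpha = np.linalg.solve(Phi, p)      # solve Phi alpha = p

# Evaluate the interpolant at an arbitrary point.
u_star = 1.5
p_star = sum(a * u_star**i for i, a in enumerate(alpha))
```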

Definition: Lagrange Polynomials

The Lagrange form of an interpolating polynomial is given by

$$ \begin{equation*} p \left( x\right) = \sum\limits_{i=0}^{n} p_i l_i\left(x\right) \end{equation*} $$

where $l_{i} \in \mathbb{P}_{n}$ such that $l_{i}\left( x_{j} \right) = \delta_{ij}$. The polynomials $l_i\left(x\right) \in \mathbb{P}_n$ for $i=0, \ldots, n$, are called characteristic polynomials and are given by

$$ \begin{equation*} l_{i} \left( x \right) = \prod \limits_{\substack{j = 0 \\ j \ne i}}^{n} \dfrac{x - x_j}{x_i - x_j}. \end{equation*} $$

By construction, the collocation matrix is the identity matrix. Thus, there is no need to invert a matrix to find the weights: they are simply ${\alpha_i = p_i}$.
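
A short Python sketch of evaluating the Lagrange form directly from the definition (function name and data are illustrative):

```python
import numpy as np

def lagrange_eval(x, x_nodes, y_nodes):
    """Evaluate p(x) = sum_i y_i * l_i(x) directly from the definition."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        # l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        li = np.prod([(x - xj) / (xi - xj)
                      for j, xj in enumerate(x_nodes) if j != i])
        total += yi * li
    return total

# Since l_i(x_j) = delta_ij, the interpolant reproduces the data exactly.
x_nodes = [0.0, 1.0, 2.0]            # illustrative data
y_nodes = [1.0, 3.0, 2.0]
assert abs(lagrange_eval(1.0, x_nodes, y_nodes) - 3.0) < 1e-12
```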

Definition: Newton Interpolation

Newton interpolation interpolates a set of points $(x_i, y_i)$ as ${p \left( x\right) = \sum\limits_{i=0}^{n} \alpha_i n_i\left(x\right)}$ using a linear combination of Newton basis polynomials, which are defined as

$$ n_0(x) = 1, \quad n_i(x) = \left(x - x_0 \right) \left(x - x_1 \right) \cdots \left(x - x_{i-1} \right). $$

By construction, ${\alpha_0=y_0}$, and each subsequent coefficient is found by evaluating the partial interpolant ${p_i \left( x \right) = \sum\limits_{k=0}^{i} \alpha_k n_k\left(x\right)}$ at the next node, leading to the formula

$$ \alpha_{i+1} = \dfrac{ y_{i+1} - p_i \left( x_{i+1} \right) }{ n_{i+1} \left( x_{i+1} \right) }. $$
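
A short Python sketch of computing the coefficients from this recursion (function name and test data are illustrative):

```python
def newton_coefficients(x, y):
    """Compute the alpha_i by evaluating each partial interpolant p_i
    at the next node x_{i+1}, following the recursion above."""
    alpha = [y[0]]                   # alpha_0 = y_0 by construction
    for k in range(1, len(x)):
        p_prev, basis = 0.0, 1.0
        for i in range(k):
            p_prev += alpha[i] * basis   # accumulate p_{k-1}(x_k)
            basis *= x[k] - x[i]         # build up n_{i+1}(x_k)
        alpha.append((y[k] - p_prev) / basis)  # basis = n_k(x_k) here
    return alpha

# Illustrative check: data from y = x**2 + 1 gives alpha = [1, 1, 1].
assert newton_coefficients([0.0, 1.0, 2.0], [1.0, 2.0, 5.0]) == [1.0, 1.0, 1.0]
```
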
Example: Lagrange Interpolation

An .ipynb notebook with an example of interpolation using Lagrange polynomials can be accessed online [here].

Example: Runge's Phenomenon

An .ipynb notebook with an example of Runge’s phenomenon can be accessed online [here].

It can be downloaded from [here] as a Python file or downloaded as a notebook from [here].

Theorem: Aitken’s Algorithm

Aitken’s algorithm is an iterative process for evaluating Lagrange interpolation polynomials at an arbitrary point, $u^{*}$, without explicitly constructing them. If the interpolating polynomial, $p$, is derived from the $n+1$ data points $(u_i, y_i)$ for $i=0,\ldots,n$, i.e.

$$ p\left( u \right) = \sum\limits_{i=0}^{n} y_{i} l_i\left( u \right), $$

then the evaluation is achieved by constructing a sequence of polynomials $p_{i}^{k}$, evaluated at ${u=u^{*}}$, where $p_{i}^{k+1}\left( u \right)$ is given by the recursion

$$ p_i^{k+1}\left(u\right) = p_i^k\left(u\right) \left( \dfrac{u - u_{n-k}}{u_i - u_{n-k}} \right) + p_{n-k}^k\left(u\right) \left( 1- \dfrac{u - u_{n-k}}{u_i - u_{n-k}} \right) $$

with initial values ${p_i^0 = y_i}$.

$$ \begin{equation*} \begin{array}{llll} p\left( u_0 \right) = p_{0}^0 & & & \\ & p_0^1 & & \\ p\left( u_1 \right) = p_{1}^0 & & p_0^2 & \\ & p_1^1 & & p_0^3 \\ p\left( u_2 \right) = p_{2}^0 & & p_1^2 & \\ & p_2^1 & & \\ p\left( u_3 \right) = p_{3}^0 & & & \end{array} \end{equation*} $$

where the values are evaluated column by column, from left to right.
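
A short Python sketch of this recursion, overwriting a single array in place as the tableau is traversed left to right (names and test data are illustrative):

```python
def aitken_eval(u_star, u, y):
    """Evaluate the interpolating polynomial at u_star via the recursion
    above, without constructing the polynomial explicitly."""
    n = len(u) - 1
    p = list(y)                      # p[i] holds p_i^k evaluated at u_star
    for k in range(n):
        m = n - k                    # node u_{n-k} folded in at this step
        for i in range(m):
            t = (u_star - u[m]) / (u[i] - u[m])
            p[i] = p[i] * t + p[m] * (1.0 - t)
    return p[0]                      # p_0^n(u_star)

# Illustrative check against p(x) = x**2 + 1.
assert aitken_eval(3.0, [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]) == 10.0
```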

Piecewise Polynomial Interpolation

Spline Interpolation

Definition: Splines

A function $s\left(u\right)$ is called a spline of degree $k$ on the domain ${[a, b]}$ if ${s \in C^{k-1}\left( [a, b] \right)}$ and there exist nodes ${a = u_0 \lt u_1 \lt \ldots \lt u_m = b}$ such that $s$ restricted to each interval ${[u_i, u_{i+1}]}$ is a polynomial of degree $k$ for ${i = 0, \ldots, m-1}$.
Definition: B-Splines

A spline is said to be a B-spline if it is of the form

$$ s\left(u\right) = \sum\limits_{i=0}^{m} \alpha_{i} \mathcal{N}_{i}^{n} \left(u \right) $$

where $\mathcal{N}_{i}^{n}$ are the B-spline basis functions of degree $n$ with minimal support (that is, they are positive on their support and zero outside it). The functions are defined recursively. Let ${u_0, u_1, \ldots, u_m}$ be the set of nodes; then

$$ \begin{equation*} \mathcal{N}_{i}^{0} \left(u \right) = \left\{ \begin{array}{ll} 1 & \quad \text{for} \quad u_i \le u \lt u_{i+1} \\ 0 & \quad \text{else.} \end{array} \right. \end{equation*} $$

and

$$ \mathcal{N}_{i}^{n} \left(u \right) = \alpha_i^{n-1}\left(u \right) \mathcal{N}_{i}^{n-1} \left(u \right) + \left( 1 - \alpha_{i+1}^{n-1}\left(u \right)\right) \mathcal{N}_{i+1}^{n-1} \left(u \right) $$

where

$$ \alpha_{i}^{n-1}\left(u \right) = \dfrac{u - u_i}{u_{i+n} - u_i} $$

is a local parameter.
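
A direct (unoptimised) Python sketch of this recursion; the half-open intervals follow the definition above, while the zero-denominator guard for repeated nodes is an added convention:

```python
def bspline_basis(i, n, u, knots):
    """N_i^n(u) by the Cox-de Boor style recursion above."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    def alpha(j):                    # local parameter alpha_j^{n-1}(u)
        span = knots[j + n] - knots[j]
        # guard against zero-length spans (added convention)
        return (u - knots[j]) / span if span != 0.0 else 0.0
    return (alpha(i) * bspline_basis(i, n - 1, u, knots)
            + (1.0 - alpha(i + 1)) * bspline_basis(i + 1, n - 1, u, knots))

# Illustrative check: the degree-1 basis is the "hat" function.
assert bspline_basis(0, 1, 0.5, [0.0, 1.0, 2.0]) == 0.5
```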

Given data with nodes $u_i$ and values $p_i$, interpolating with splines of degree $n$ requires solving

$$ \text{Find} \quad s = \sum\limits_{i=0}^{m} \alpha_i \mathcal{N}_{i}^{n} \left( u\right) \quad \text{such that} \quad s\left(u_i \right) = p_i \quad \text{for} \quad i=0, \ldots, m $$

which in matrix form is ${\Phi \alpha = p}$, where the collocation matrix, ${\Phi \in \mathbb{R}^{\left(m+1\right) \times \left(m+1\right)}}$, is given by

$$ \Phi = \left( \begin{array}{ccc} \mathcal{N}_{0}^{n} \left(u_0\right) & \cdots & \mathcal{N}_{m}^{n} \left(u_0\right) \\ \vdots & & \vdots \\ \mathcal{N}_{0}^{n} \left(u_m\right) & \cdots & \mathcal{N}_{m}^{n} \left(u_m\right) \end{array} \right). $$
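
In practice, SciPy's `make_interp_spline` assembles and solves this collocation system internally; a brief usage sketch with illustrative data:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

u = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # illustrative nodes
p = np.array([1.0, 3.0, 2.0, 4.0, 0.0])   # illustrative data values

s = make_interp_spline(u, p, k=3)   # cubic B-spline interpolant
assert np.allclose(s(u), p)         # s(u_i) = p_i by construction
```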

Least-Squares Approximation

Definition: Least-Squares Approximation

Given a set of points $y=\left( y_0, y_1, \ldots, y_n \right)$ at nodes $x_0, x_1, \ldots, x_n$, seek a continuous function of $x$, with a given form characterised by $m$ parameters $\beta=\left( \beta_1, \ldots, \beta_m \right)$, i.e. $f\left( x, \beta \right)$, which approximates the points while minimising the error, defined by the sum of the squares

$$ E = \sum\limits_{i=0}^{n} \left( y_i - f\left(x_i, \beta \right) \right)^2. $$

The minimum is found when

$$ \dfrac{\partial E}{\partial \beta_j} = 0 \quad \text{for all} \quad j=1, \ldots, m $$

i.e.

$$ -2 \sum\limits_{i=0}^{n} \left(y_i - f\left(x_i, \beta \right) \right) \dfrac{\partial f\left( x_i, \beta \right)}{\partial \beta_j} = 0 \quad \text{for all} \quad j=1, \ldots, m. $$
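
As a sketch of this general (possibly non-linear) problem, SciPy's `curve_fit` minimises exactly this sum of squares numerically; the exponential model and synthetic data below are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, b1, b2):                   # illustrative model f(x, beta)
    return b1 * np.exp(b2 * x)

x = np.linspace(0.0, 1.0, 20)       # synthetic, noisy illustrative data
y = 2.0 * np.exp(0.5 * x) + 0.01 * np.random.default_rng(0).normal(size=20)

beta, _ = curve_fit(f, x, y, p0=(1.0, 1.0))   # minimises E over beta
```
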
Definition: Linear Least-Squares Approximation

If the fitting function is of the form

$$ y = \sum\limits_{j=1}^{m} \beta_j \varphi_j \left( x \right) $$

where the $\varphi_j$ are linearly independent, i.e. there are no constants $c_1, \ldots, c_m$, not all zero, such that ${\sum_{j} c_j \varphi_j \left( x \right) = 0}$ for all $x$, then the least-squares error can be expressed as

$$ E\left( \beta \right) = \sum\limits_{i=0}^n \left( y_i - \sum\limits_{j=1}^m \beta_j \varphi_j\left( x_i \right) \right)^2. $$

This has a minimum when

$$ \dfrac{\partial E}{\partial \beta_k} = -2 \sum\limits_{i=0}^n \left( y_i - \sum\limits_{j=1}^m \beta_j \varphi_j\left( x_i \right) \right) \varphi_k \left( x_i \right) = 0 \quad \text{for} \quad k=1, \ldots, m. $$

Thus, the weights $\beta$ can be determined by solving the normal equations,

$$ \Phi^T \Phi \beta = \Phi^T y, $$

i.e. ${\beta = \left( \Phi^T \Phi \right)^{-1} \Phi^T y}$, where $\Phi$ is the collocation matrix with entries ${\Phi_{ij} = \varphi_j \left( x_i \right)}$.
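
A minimal NumPy sketch of the normal equations, assuming the quadratic model ${\varphi_1(x)=1}$, ${\varphi_2(x)=x}$, ${\varphi_3(x)=x^2}$ (data values are illustrative):

```python
import numpy as np

# Illustrative model: phi_1(x) = 1, phi_2(x) = x, phi_3(x) = x**2.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.1, 1.8, 3.2, 5.0, 7.9])

Phi = np.column_stack([np.ones_like(x), x, x**2])  # Phi[i, j] = phi_j(x_i)
beta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)     # normal equations

# For better conditioning, np.linalg.lstsq(Phi, y) solves the same problem.
```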