Interpolation and approximation of functions

In the field of numerical mathematics several methods exist to approximate and/or interpolate a function based on support values.
Together the $n+1$ support values $(x_i, y_i),\ i = 0, \dots, n$ describe the function to approximate.
These support values are transformed into interpolation conditions according to the basis in which we want to represent the function.

In the following sections we will see that there are different ways to represent a polynomial; since the interpolating polynomial is unique, all of these representations describe the same polynomial, merely expressed in different bases.

Monomial Basis

Transformation
To represent a polynomial in the monomial basis one sets up $n+1$ equations, the interpolation conditions $y_i = p(x_i) = a_0 + a_1 x_i + \dots + a_n x_i^n,\ i = 0, \dots, n$. These equations can be written as a linear system $Va = y$ with $y = (y_0, \dots, y_n)^T$ and $V$ the Vandermonde matrix.
This matrix has the determinant $\det(V) = \prod_{0 \le j < k \le n} (x_k - x_j)$, which satisfies $\det(V) \ne 0$ for pairwise distinct support values; hence the interpolation problem always has exactly one solution.
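This transformation can be sketched with NumPy: build the Vandermonde matrix with `np.vander` and solve $Va = y$ for the coefficients (the support values below are made-up example data):

```python
import numpy as np

# Example support values (assumed data, not from the original notes)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

# Vandermonde matrix with rows (1, x_i, x_i^2, ..., x_i^n)
V = np.vander(x, increasing=True)

# Solve the interpolation conditions V a = y for the monomial coefficients
a = np.linalg.solve(V, y)

# Check: the interpolation conditions p(x_i) = y_i hold
assert np.allclose(V @ a, y)
```

Since the nodes are pairwise distinct, `np.linalg.solve` succeeds; note that `np.vander` defaults to decreasing powers, so `increasing=True` matches the ordering $a_0, a_1, \dots, a_n$ used here.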

Evaluation
To evaluate a polynomial in the monomial basis one can use Horner's method: $p(x) = a_0 + x(a_1 + x(a_2 + \dots + x(a_{n-1} + x a_n)))$

Python implementation

import numpy as np

def polMonom(a, x):
    """ Evaluates y = a[0] + a[1]*x + ... + a[n]*x^p """
    y = np.ones(x.shape) * a[0]
    for j in range(1, len(a)):
        y += a[j]*(x**j)   # later: use Horner scheme
    return y
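The Horner scheme mentioned in the comment could look like the following sketch (`polMonomHorner` is a name chosen here, not from the original):

```python
import numpy as np

def polMonomHorner(a, x):
    """ Evaluates p(x) = a[0] + x*(a[1] + x*(... + x*a[n])) via Horner's method """
    y = np.ones(x.shape) * a[-1]
    for j in range(len(a) - 2, -1, -1):
        y = a[j] + x * y   # one multiplication and one addition per coefficient
    return y
```

This needs only $n$ multiplications instead of the roughly $n^2/2$ implied by evaluating each power `x**j` separately.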

Lagrange Basis

Transformation
Another basis is the Lagrange basis.
Given $n+1$ pairwise distinct support nodes $x_0, \dots, x_n$, the polynomials $L_i(x) = \prod_{j=0, j \ne i}^{n} \frac{x - x_j}{x_i - x_j},\ i = 0, \dots, n$ are called Lagrange fundamental polynomials or Lagrange basis polynomials.
Not all of them need to be computed, though.

They satisfy $L_i(x_k) = 1$ if $k = i$ and $L_i(x_k) = 0$ if $k \ne i$.

Evaluation
$p(x) = \sum_{i=0}^{n} y_i L_i(x) \in \Pi_n$, which meets all of the interpolation conditions: $p(x_k) = \sum_{i=0}^{n} y_i L_i(x_k) = y_k,\ k = 0, \dots, n$

Python implementation

import numpy as np


def L(xk: float, x: np.ndarray, i: int) -> float:
    """ Evaluates the i-th Lagrange fundamental polynomial at xk """
    out: float = 1.0
    for j in range(x.shape[0]):
        if i == j:
            continue
        out *= (xk - x[j])/(x[i] - x[j])
    return out

def lagrange(xk: np.ndarray, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """ Evaluates p(xk) = sum_i y[i] * L_i(xk) at all points in xk """
    out: np.ndarray = np.zeros(xk.shape)
    for k in range(xk.shape[0]):
        for i in range(y.shape[0]):
            out[k] += y[i] * L(xk[k], x, i)
    return out
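As a quick sanity check, the Kronecker-delta property $L_i(x_k)$ of the fundamental polynomials can be verified numerically; this is a self-contained sketch with made-up example nodes:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.5, 4.0])   # example support nodes (assumed data)
n = len(x)

def L(xp, i):
    """ i-th Lagrange fundamental polynomial evaluated at xp """
    mask = np.arange(n) != i          # leave out the factor j == i
    return np.prod((xp - x[mask]) / (x[i] - x[mask]))

# L_i(x_k) should be 1 for i == k and 0 otherwise
delta = np.array([[L(x[k], i) for i in range(n)] for k in range(n)])
assert np.allclose(delta, np.eye(n))
```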

Newton Basis

Transformation
For $n+1$ support nodes $(x_0, \dots, x_n)$:

$\omega_0(x) = 1$, $\omega_i(x) = \prod_{j=0}^{i-1} (x - x_j),\ i = 1, \dots, n$

Representation: $p(x) = \sum_{i=0}^{n} b_i \omega_i(x)$, where the coefficients $b_i = f[x_0, \dots, x_i]$ are the divided differences, computed recursively via $f[x_r, \dots, x_s] = \frac{f[x_{r+1}, \dots, x_s] - f[x_r, \dots, x_{s-1}]}{x_s - x_r}$

Python implementation

import numpy as np

def koeffNewtonBasis(xk, yk):
    """ Recursive scheme of divided differences, implemented iteratively """
    m = len(xk)
    F = np.zeros((m, m))
    F[:, 0] = yk
    for j in range(1, m):     # j-th divided differences
        for i in range(j, m):
            F[i, j] = (F[i, j-1] - F[i-1, j-1])/(xk[i] - xk[i-j])
            #print(f'F[{i},{j}] = (F[{i},{j-1}] - F[{i-1},{j-1}])/(xk[{i}]-xk[{i-j}])')   # for debugging
    return np.diag(F)

Evaluation
Using Horner's Method

Python implementation

import numpy as np
def polNewtonHorner(b, xk, x):
    """ y = b[0] + b[1]*(x - xk[0]) + ... + b[n]*(x - x[0])* ... *(x - x[n-1]), using Horner's-Method """
    y = np.ones(x.shape) * b[-1]
    for j in range(len(b)-2, -1, -1):
        y =  b[j] + (x - xk[j])*y
    return y
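Putting both pieces together, an end-to-end check with made-up example data (a self-contained sketch combining the divided-difference table and the Horner evaluation):

```python
import numpy as np

def koeffNewtonBasis(xk, yk):
    """ Divided-difference table; the diagonal holds b_i = f[x_0,...,x_i] """
    m = len(xk)
    F = np.zeros((m, m))
    F[:, 0] = yk
    for j in range(1, m):
        for i in range(j, m):
            F[i, j] = (F[i, j-1] - F[i-1, j-1]) / (xk[i] - xk[i-j])
    return np.diag(F)

def polNewtonHorner(b, xk, x):
    """ Horner-style evaluation of the Newton form """
    y = np.ones(x.shape) * b[-1]
    for j in range(len(b)-2, -1, -1):
        y = b[j] + (x - xk[j]) * y
    return y

xk = np.array([0.0, 1.0, 2.0, 3.0])   # example nodes (assumed data)
yk = np.array([1.0, 2.0, 0.0, 5.0])
b = koeffNewtonBasis(xk, yk)

# the Newton-form polynomial reproduces the support values
assert np.allclose(polNewtonHorner(b, xk, xk), yk)
```

A practical advantage of the Newton basis: adding a new support value only appends one row to the table and one coefficient $b_{m}$, while the previously computed coefficients stay valid.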