Global polynomial interpolation constructs a single polynomial that serves as the interpolating function over the entire interpolation interval. For the basic theory of polynomial interpolation, see "Basic Theory of Computation".
For evaluating the monomial-basis and Newton-form interpolation expressions at a point with Horner's nested algorithm, see "Horner's Nested Algorithm".
1. Monomial-basis interpolation
1) Interpolation function basis

The monomial basis is the simplest choice of function basis:

$$\phi_j(t)=t^{j-1},\quad j=1,2,\dots,n;\qquad f(t)=p_{n-1}(t)=x_1+x_2t+x_3t^2+\dots+x_nt^{n-1}=\sum\limits_{j=1}^n x_jt^{j-1}$$

The unknowns to solve for are the coefficients $x_1,x_2,\dots,x_n$. The subscripts $1,2,\dots,n$ are used here instead of the $0,1,2,\dots,n-1$ that would match the monomial exponents, simply to stay consistent with the discussion before and after.
2) The superposition coefficients
Suppose monomial-basis interpolation must fit $m$ discrete data points using $n$ basis functions. The interpolation conditions give the linear system:

$$\begin{aligned}x_1+t_1x_2+t_1^2x_3+\dots+t_1^{n-1}x_n&=y_1\\x_1+t_2x_2+t_2^2x_3+\dots+t_2^{n-1}x_n&=y_2\\&\ \ \vdots\\x_1+t_mx_2+t_m^2x_3+\dots+t_m^{n-1}x_n&=y_m\end{aligned}$$

whose coefficient matrix is an $m\times n$ ($m\leq n$) Vandermonde matrix. With $m=n$:

$$\begin{bmatrix}1&t_1&t_1^2&\dots&t_1^{n-1}\\1&t_2&t_2^2&\dots&t_2^{n-1}\\\vdots&\vdots&\vdots&&\vdots\\1&t_n&t_n^2&\dots&t_n^{n-1}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}y_1\\y_2\\\vdots\\y_n\end{bmatrix}$$

According to the discussion in the basic theory of computation, the function basis of polynomial interpolation is linearly independent, and as long as the discrete data points are pairwise distinct, the matrix rows are also linearly independent, which guarantees that the matrix is full rank. The superposition coefficients therefore have exactly one solution if and only if $m=n$. Hence $n$ = number of basis functions = number of data points = polynomial degree + 1.
3) Problem conditioning and algorithmic complexity
Generating the Vandermonde matrix costs roughly $O(n^2)$ operations. Since the Vandermonde matrix has no helpful structure, neither sparsity nor symmetry, the system can only be solved with a standard dense factorization (such as LU) at $O(n^3)$ cost. The overall complexity of the problem is therefore $O(n^3)$.
The trouble with the Vandermonde matrix is that as its dimension grows, the linear system becomes increasingly ill-conditioned and the condition number blows up. Seen from another angle, the monomials themselves look more and more alike as the degree grows, and as vectors they become nearly parallel. As a result, the uncertainty in the computed superposition coefficients increases with the number of data points, so although the monomial basis is very simple, it is rarely used for interpolation in practice. If a monomial basis must be used, two remedies can improve matters: shifting and scaling, which take $((t-c)/d)^{j-1}$ as the basis. A common choice maps all data points onto the interval $[-1,1]$ by the linear transformation $c=(t_1+t_n)/2,\ d=(t_n-t_1)/2$.
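The effect of shifting and scaling on conditioning is easy to observe numerically. The article's code is MATLAB; the following is only an illustrative Python/NumPy sketch (the helper name `vander_cond` is ours, not from the article):

```python
import numpy as np

def vander_cond(points):
    """Condition number of the Vandermonde matrix built on the given points."""
    # increasing=True matches the basis ordering 1, t, t^2, ...
    return np.linalg.cond(np.vander(points, increasing=True))

# Equally spaced points on [0, 10]: conditioning degrades rapidly with n
raw = np.linspace(0.0, 10.0, 12)
cond_raw = vander_cond(raw)

# Shift and scale the same points onto [-1, 1]: c=(t1+tn)/2, d=(tn-t1)/2
c = (raw[0] + raw[-1]) / 2
d = (raw[-1] - raw[0]) / 2
cond_scaled = vander_cond((raw - c) / d)

print(cond_raw, cond_scaled)   # the scaled basis is far better conditioned
```

The same monomial basis, merely re-centered and re-scaled, yields a condition number many orders of magnitude smaller.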
4) Algorithm implementation
A MATLAB implementation of monomial-basis interpolation follows:
function [polyfunc, vecx, condition] = monopi(vect, vecy, shift, scale)
% Compute monomial-basis interpolation polynomial coefficients.
% Four inputs: interpolation point row vector vect, function value row
% vector vecy, shift, scale;
% Outputs: function handle polyfunc, coefficient row vector vecx,
% condition number condition of the Vandermonde matrix.
% Default values: with only two inputs, neither shift nor scale.
if nargin == 2
    shift = 0; scale = 1;
end
% Solve for the coefficients
vecsize = size(vect, 2);
basis = (vect - shift*ones(1, vecsize))/scale;  % basis values at the data points
mat = vander(basis);        % build the Vandermonde matrix with vander
condition = cond(mat);      % and record its condition number
[L, U] = lu(mat);           % standard LU factorization of the system
vecx = (U\(L\vecy.')).';
vecx = fliplr(vecx);        % vander lists powers high to low; flip to ascending
% Generate the function handle polyfunc
syms t;
monomial = (t - shift)/scale;
funeval = vecx(vecsize);
for i = vecsize:-1:2        % Horner nesting for an efficient handle
    funeval = vecx(i-1) + monomial*funeval;
end
polyfunc = matlabFunction(funeval, 'Vars', t);
end
For example, the points $(-2,-27), (0,-1), (1,0)$ have the unique quadratic interpolating polynomial $p_2(t)=-1+5t-4t^2$. Calling the code above:
% command-line input
vect = [-2, 0, 1]; vecy = [-27, -1, 0];
[polyfunc, vecx, condition] = monopi(vect, vecy)
% command-line output
polyfunc =
  function_handle with value:
    @(t)-t.*(t.*4.0-5.0)-1.0
vecx =
    -1     5    -4
condition =
    6.0809
This is fully consistent with expectations.
2. Lagrange interpolation
1) Interpolation function basis
Lagrange interpolation uses a carefully designed polynomial basis: each basis function is a polynomial of degree $n-1$ that takes the value 1 at node $i$ and 0 at every other node. The basis is constructed as follows:
$$l_j(t)=\frac{(t-t_1)(t-t_2)\cdots(t-t_{j-1})(t-t_{j+1})\cdots(t-t_n)}{(t_j-t_1)(t_j-t_2)\cdots(t_j-t_{j-1})(t_j-t_{j+1})\cdots(t_j-t_n)}=\frac{\prod\limits_{k=1,k\neq j}^n(t-t_k)}{\prod\limits_{k=1,k\neq j}^n(t_j-t_k)}$$

so that:

$$l_j(t_i)=\delta_{ij},\quad i,j=1,2,\dots,n$$

where $\delta$ is the Kronecker delta, equal to 1 when the two subscripts are equal and 0 otherwise; $\delta_{ij}$ can also be read as a second-order tensor, namely the identity matrix. Substituting each $t_i$ into the definition verifies the formula directly. This gives the solution of the Lagrange superposition coefficients a very pleasant structure, as the next subsection shows.
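The Kronecker-delta property can be checked numerically on the article's example nodes. This is an illustrative Python sketch (the helper name `lagrange_basis` is ours, not from the article's MATLAB code):

```python
import numpy as np

def lagrange_basis(ts, j, t):
    """Evaluate the j-th Lagrange basis polynomial l_j (0-indexed) at t."""
    val = 1.0
    for k in range(len(ts)):
        if k != j:
            val *= (t - ts[k]) / (ts[j] - ts[k])
    return val

ts = [-2.0, 0.0, 1.0]          # the article's example nodes
# l_j(t_i) should equal the Kronecker delta delta_ij
delta = np.array([[lagrange_basis(ts, j, ti) for j in range(3)] for ti in ts])
print(delta)                   # expected: the 3x3 identity matrix
```

Evaluating every basis polynomial at every node reproduces the identity matrix, exactly the coefficient matrix derived below.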
2) The superposition coefficients
The interpolating function to be determined is $f(t)=\sum\limits_{k=1}^n x_kl_k(t)$, and the interpolation conditions give:
$$l_1(t_1)x_1+l_2(t_1)x_2+\dots+l_n(t_1)x_n=y_1$$
$$l_1(t_2)x_1+l_2(t_2)x_2+\dots+l_n(t_2)x_n=y_2$$
$$\vdots$$
$$l_1(t_n)x_1+l_2(t_n)x_2+\dots+l_n(t_n)x_n=y_n$$

or, in matrix form:

$$\begin{bmatrix}l_1(t_1)&l_2(t_1)&\dots&l_n(t_1)\\l_1(t_2)&l_2(t_2)&\dots&l_n(t_2)\\\vdots&\vdots&&\vdots\\l_1(t_n)&l_2(t_n)&\dots&l_n(t_n)\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}1&0&\dots&0\\0&1&\dots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\dots&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=I\boldsymbol{x}=\begin{bmatrix}y_1\\y_2\\\vdots\\y_n\end{bmatrix}$$
The coefficient matrix is the identity, so immediately $x_i=y_i$ and $f(t)=p_{n-1}(t)=y_1l_1(t)+y_2l_2(t)+\dots+y_nl_n(t)=\sum\limits_{k=1}^n y_kl_k(t)$.
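Because the coefficients are simply the data values, the whole interpolant can be evaluated directly from the data. A short Python sketch checking this on the article's example (the function name `lagrange_eval` is ours):

```python
def lagrange_eval(ts, ys, t):
    """Evaluate sum_k y_k * l_k(t): the coefficients are the data values."""
    total = 0.0
    for j in range(len(ts)):
        term = ys[j]
        for k in range(len(ts)):
            if k != j:
                term *= (t - ts[k]) / (ts[j] - ts[k])
        total += term
    return total

ts = [-2.0, 0.0, 1.0]
ys = [-27.0, -1.0, 0.0]
# Should match p_2(t) = -1 + 5t - 4t^2 from the monomial-basis section
print(lagrange_eval(ts, ys, 0.5))   # -1 + 2.5 - 1 = 0.5
```

The result agrees with the monomial-basis polynomial found earlier, as uniqueness of the interpolant requires.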
3) Problem conditioning and algorithmic complexity
The coefficient matrix of Lagrange interpolation is the identity matrix, so there is no conditioning problem at all: the value at each data point is taken directly as the corresponding coefficient. Likewise, solving for the coefficients costs essentially nothing.
The price, however, is a high evaluation cost. The monomial-basis and Newton-form expressions can be evaluated with Horner's nested algorithm, keeping the total work to roughly $n$ floating-point additions and $n$ floating-point multiplications, but that algorithm does not apply to the Lagrange form. Evaluating a Lagrange expression costs at least about $3n$ floating-point multiplications (and divisions) and $2n$ additions, and even that assumes the coefficients (the interpolation coefficients merged with the basis denominators) have already been preprocessed; preprocessing them may itself cost $O(n^2)$ work. In addition, the Lagrange form is inconvenient to differentiate and integrate, and, unlike Newton interpolation, every basis function must be rewritten when a new data point is added. In short, Lagrange interpolation is very easy to construct and well suited to theoretical discussion, but it has many disadvantages in numerical computation.
4) Algorithm implementation
One way to implement Lagrange polynomial interpolation in MATLAB is shown below. The output is a polynomial function handle; such a handle is called by appending a parenthesized argument to its name, e.g. polyfun(3).
function [polyfun] = lagrangepi(vect, vecy)
% Generate the Lagrange interpolation polynomial.
% Two inputs: interpolation point row vector vect, function value row
% vector vecy;
% One output: interpolation polynomial function handle polyfun.
vecsize = size(vect, 2);
syms t f term;
f = 0;
for i = 1:vecsize
    term = vecy(i);
    for j = 1:vecsize
        if (j ~= i)
            term = term*(t - vect(j))/(vect(i) - vect(j));
        end
    end
    f = f + term;
end
polyfun = matlabFunction(f);
end
However, because the polynomial is assembled as a symbolic expression, every coefficient is computed separately and every numerator must also be evaluated separately, which makes each function evaluation of the Lagrange expression cost $O(n^2)$. To bring each evaluation down to $O(n)$, all coefficients except the basis numerators containing the unknown must be precomputed, and a small trick is needed at evaluation time. This can be accomplished as follows:
function [coefficients] = newlagrangepi(vect, vecy)
% Generate Lagrange interpolation coefficients with the basis
% denominators folded in.
% Two inputs: interpolation point row vector vect, function value row
% vector vecy;
% One output: coefficient row vector coefficients.
vecsize = size(vect, 2);
coefficients = zeros(1, vecsize);
for i = 1:vecsize
    tmp = vecy(i);          % coefficient attached to the basis of y_i
    for j = 1:vecsize       % divide by the denominator of the basis
        if (j ~= i)
            tmp = tmp/(vect(i) - vect(j));
        end
    end
    coefficients(i) = tmp;
end
end
Besides the coefficient function, a dedicated evaluation function is required:
function [funeval] = evalagrangepi(coefficients, vect, vecy, t)
% Evaluate the Lagrange interpolation polynomial at t.
% Four inputs: full coefficient row vector coefficients, interpolation
% point row vector vect, function value row vector vecy, evaluation
% point t;
% One output: the function value funeval at t.
vecsize = size(vect, 2);
[found, index] = ismember(t, vect);
if found
    % At an original data point, return the stored function value directly
    funeval = vecy(index);
else
    % Otherwise first form the product of all (t - t_i)
    funeval = 0;
    product = 1;
    for i = 1:vecsize
        product = product*(t - vect(i));
    end
    for i = 1:vecsize
        % Each term: its coefficient times the product, divided by the
        % factor (t - t_i) absent from that term's numerator
        funeval = funeval + coefficients(i)*product/(t - vect(i));
    end
end
end
Using the same three points $(-2,-27), (0,-1), (1,0)$, the Lagrange routines are called as follows:
vect = [-2, 0, 1]; vecy = [-27, -1, 0];
% command-line input
coefficients = newlagrangepi(vect, vecy)
% command-line output
coefficients =
   -4.5000    0.5000         0
% command-line input
val = evalagrangepi(coefficients, vect, vecy, -2)
% command-line output
val =
   -27
% command-line input
val = evalagrangepi(coefficients, vect, vecy, 0.5)
% command-line output
val =
    0.5000
All outputs agree with the actual interpolating polynomial $f(t)=p_2(t)=-1+5t-4t^2$.
3. Newton interpolation
1) Interpolation function basis
The monomial basis is very concise, but its drawback is that the resulting system involves a dense Vandermonde matrix, which may be badly conditioned and is expensive to solve. The Lagrange basis is more sophisticated: because its coefficient matrix is the identity, the solve is simple and exact, but generating the expression and evaluating it are costly. Newton interpolation offers a tradeoff between the two: the basis is not as intricate as Lagrange's, and the solve is much simpler than with the monomial basis. The Newton basis is:

$$\pi_j(t)=\prod\limits_{k=1}^{j-1}(t-t_k)=(t-t_1)(t-t_2)\cdots(t-t_{j-1}),\quad j=1,\dots,n$$

(with $\pi_1(t)=1$, the empty product). Compared with the special property of the Lagrange basis ($l_j(t_i)=\delta_{ij}$), the Newton basis has a weaker one:

$$\pi_j(t_i)=0,\quad \forall\, i<j$$

The polynomial to be found takes the form $f(t)=p_{n-1}(t)=\sum\limits_{j=1}^n x_j\pi_j(t)=x_1+x_2(t-t_1)+\dots+x_n(t-t_1)(t-t_2)\cdots(t-t_{n-1})$.
2) The superposition coefficients
$$\pi_1(t_1)x_1+\pi_2(t_1)x_2+\dots+\pi_n(t_1)x_n=y_1$$
$$\pi_1(t_2)x_1+\pi_2(t_2)x_2+\dots+\pi_n(t_2)x_n=y_2$$
$$\vdots$$
$$\pi_1(t_n)x_1+\pi_2(t_n)x_2+\dots+\pi_n(t_n)x_n=y_n$$

or, in matrix form:

$$\begin{bmatrix}\pi_1(t_1)&\pi_2(t_1)&\dots&\pi_n(t_1)\\\pi_1(t_2)&\pi_2(t_2)&\dots&\pi_n(t_2)\\\vdots&\vdots&&\vdots\\\pi_1(t_n)&\pi_2(t_n)&\dots&\pi_n(t_n)\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}1&0&\dots&0\\1&t_2-t_1&\dots&0\\\vdots&\vdots&\ddots&\vdots\\1&t_n-t_1&\dots&(t_n-t_1)\cdots(t_n-t_{n-1})\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}y_1\\y_2\\\vdots\\y_n\end{bmatrix}$$

That is, the Newton interpolation coefficient matrix is lower triangular.
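A lower-triangular system like this is solved by forward substitution. The following Python/NumPy sketch builds the matrix and solves it for the article's example (the helper names `newton_matrix` and `forward_substitute` are ours, not from the article's MATLAB code):

```python
import numpy as np

def newton_matrix(ts):
    """Lower-triangular matrix with entries pi_j(t_i) = prod_{k<j}(t_i - t_k)."""
    n = len(ts)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):          # entries above the diagonal stay zero
            a[i, j] = np.prod([ts[i] - ts[k] for k in range(j)])
    return a

def forward_substitute(a, y):
    """Solve the lower-triangular system a x = y in O(n^2) operations."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (y[i] - a[i, :i] @ x[:i]) / a[i, i]
    return x

ts = np.array([-2.0, 0.0, 1.0])
ys = np.array([-27.0, -1.0, 0.0])
x = forward_substitute(newton_matrix(ts), ys)
print(x)   # Newton coefficients [-27, 13, -4]
```

The diagonal entries $\prod_{k<i}(t_i-t_k)$ are nonzero whenever the nodes are distinct, so forward substitution never divides by zero.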
3) Algorithmic properties and complexity
For a lower-triangular matrix, forward substitution solves the system easily in $O(n^2)$ time. Generating the matrix also takes $O(n^2)$ operations, so the total cost of solving for the interpolation coefficients is $O(n^2)$, less than a dense matrix solve. In fact, Newton interpolation does not even need to form the matrix explicitly and run a standard factorization routine: several coefficient-generation schemes are available, including the divided-difference method, and both recursion and iteration work well while also helping to avoid overflow.
Moreover, the Newton form admits Horner's nested algorithm for evaluation, which keeps the cost of each evaluation within $O(n)$, noticeably more efficient than evaluating the Lagrange form.
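Nested evaluation of the Newton form works just like Horner's rule for monomials, except each nesting level multiplies by $(t-t_i)$ instead of $t$. An illustrative Python sketch (the function name `horner_newton` is ours):

```python
def horner_newton(ts, xs, t):
    """Evaluate x_1 + x_2(t-t_1) + ... + x_n(t-t_1)...(t-t_{n-1})
    by Horner nesting: about n multiplications and n additions."""
    n = len(xs)
    val = xs[n - 1]
    for i in range(n - 2, -1, -1):
        val = xs[i] + (t - ts[i]) * val
    return val

# Newton coefficients of the article's example points are [-27, 13, -4]
print(horner_newton([-2.0, 0.0, 1.0], [-27.0, 13.0, -4.0], 0.5))   # 0.5
```

Each evaluation touches every coefficient exactly once, which is the $O(n)$ bound claimed above.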
Newton interpolation has the following attractive properties:
3.1 When a data point is added, a new Newton interpolation polynomial is obtained simply by appending one basis function and one coefficient. This is easy to see: adding a point $(t_{n+1},y_{n+1})$ leaves every element of the lower-triangular matrix unchanged and merely appends a new row and a new column, and the appended column is zero in all of the first $n$ rows (since $\pi_{n+1}(t_i)=0$ for $i\leq n$). The existing coefficients $x_1,x_2,\dots,x_n$ therefore not only satisfy the original system but must also form part of the solution of the enlarged system. Since the basis gains just one function, the new interpolant is the old one plus a single term: $f^*(t)=p_n(t)=p_{n-1}(t)+x_{n+1}\pi_{n+1}(t)$. The coefficient of the new term follows by substituting the new data point:

$$f^*(t_{n+1})=p_{n-1}(t_{n+1})+x_{n+1}\pi_{n+1}(t_{n+1})=y_{n+1}\quad\Rightarrow\quad x_{n+1}=\frac{y_{n+1}-p_{n-1}(t_{n+1})}{\pi_{n+1}(t_{n+1})}$$

Producing one new coefficient costs roughly one function evaluation plus one basis evaluation, about $O(n)$ work. Applying this formula iteratively generates all the Newton coefficients in $O(n^2)$, roughly on par with the matrix solve.
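The incremental formula can be sketched directly. This Python illustration (the function name `add_node` is ours) builds the example's coefficients one point at a time:

```python
def add_node(ts, xs, t_new, y_new):
    """Append one Newton coefficient: x_new = (y_new - p(t_new)) / pi(t_new)."""
    # Evaluate the current Newton-form polynomial at t_new (Horner nesting)
    p = xs[-1]
    for i in range(len(xs) - 2, -1, -1):
        p = xs[i] + (t_new - ts[i]) * p
    # pi_{n+1}(t_new) = product of (t_new - t_i) over the existing nodes
    w = 1.0
    for ti in ts:
        w *= (t_new - ti)
    return ts + [t_new], xs + [(y_new - p) / w]

# Build up the article's example one point at a time
ts, xs = [-2.0], [-27.0]              # x_1 = y_1
ts, xs = add_node(ts, xs, 0.0, -1.0)
ts, xs = add_node(ts, xs, 1.0, 0.0)
print(xs)   # [-27.0, 13.0, -4.0]
```

Each call does one polynomial evaluation and one basis evaluation, matching the $O(n)$-per-point cost stated above.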
3.2 The divided-difference method can also generate the Newton interpolation coefficients. The divided difference $f[\cdot]$ is defined by:
$$f[t_1,t_2,\dots,t_k]=\frac{f[t_2,t_3,\dots,t_k]-f[t_1,t_2,\dots,t_{k-1}]}{t_k-t_1}$$

and the Newton polynomial coefficients are exactly $x_j=f[t_1,t_2,\dots,t_j]$. This can be proved by induction. The base cases:

$$f[t_1]=y_1,\ x_1=y_1=f[t_1];\qquad f[t_1,t_2]=\frac{f[t_2]-f[t_1]}{t_2-t_1}=\frac{y_2-y_1}{t_2-t_1},\ x_2=\frac{y_2-y_1}{t_2-t_1}=f[t_1,t_2]$$

Suppose $x_j=f[t_1,t_2,\dots,t_j]=\frac{f[t_2,t_3,\dots,t_j]-f[t_1,t_2,\dots,t_{j-1}]}{t_j-t_1}$ holds for every $j\leq k-1$; consider $j=k$:
Consider the Newton interpolation polynomial $p_1(t)$ through the points $(t_1,y_1),(t_2,y_2),\dots,(t_{k-1},y_{k-1})$, and the Newton interpolation polynomial $p_2(t)$ through $(t_2,y_2),(t_3,y_3),\dots,(t_k,y_k)$. By the induction hypothesis, the divided-difference coefficients are valid for every $j\leq k-1$, so:
$$p_1(t)=\sum\limits_{j=1}^{k-1}a_j\pi_j(t),\quad p_2(t)=\sum\limits_{j=2}^{k}b_j\pi_j(t),\qquad a_j=f[t_1,\dots,t_j],\ b_j=f[t_2,\dots,t_j]$$
Since $p_1(t)$ passes through $(t_1,y_1)$ up to $(t_{k-1},y_{k-1})$ and $p_2(t)$ through $(t_2,y_2)$ up to $(t_k,y_k)$, construct the interpolation polynomial:
$$
p(t)=\frac{t_k-t}{t_k-t_1}p_1(t)+\frac{t-t_1}{t_k-t_1}p_2(t)
$$
This polynomial passes through all the points $(t_1,y_1)$ up to $(t_k,y_k)$, so it is exactly the Newton interpolation polynomial sought. Substituting the expressions for $p_1(t)$ and $p_2(t)$ and comparing the leading coefficients on both sides:
$$p(t)=\sum\limits_{j=1}^{k}x_j\pi_j(t)=\frac{t_k-t}{t_k-t_1}\sum\limits_{j=1}^{k-1}a_j\pi_j(t)+\frac{t-t_1}{t_k-t_1}\sum\limits_{j=2}^{k}b_j\pi_j(t)$$
$$x_k=\frac{-1}{t_k-t_1}a_{k-1}+\frac{1}{t_k-t_1}b_k=\frac{f[t_2,\dots,t_k]-f[t_1,\dots,t_{k-1}]}{t_k-t_1}=f[t_1,\dots,t_k]\qquad\square$$
This proof is adapted from Newton interpolation lecture notes by Michael S. Floater of the Department of Mathematics, University of Oslo.
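The recursion in 3.2 is usually implemented by filling a triangular table rather than by recursion. A Python sketch of the table-filling scheme (the function name `divided_differences` is ours; the article's own MATLAB version appears below):

```python
def divided_differences(ts, ys):
    """Fill the divided-difference table; row 0 then holds the Newton
    coefficients x_j = f[t_1, ..., t_j]."""
    n = len(ts)
    table = [[0.0] * n for _ in range(n)]
    for i in range(n):
        table[i][i] = ys[i]               # zeroth differences: f[t_i] = y_i
    for k in range(1, n):                 # k = order of the difference
        for i in range(n - k):
            table[i][i + k] = (table[i + 1][i + k] - table[i][i + k - 1]) \
                              / (ts[i + k] - ts[i])
    return table[0]

print(divided_differences([-2.0, 0.0, 1.0], [-27.0, -1.0, 0.0]))
# [-27.0, 13.0, -4.0]
```

Entry `table[i][i+k]` caches $f[t_{i+1},\dots,t_{i+k}]$ on top and $f[t_i,\dots,t_{i+k-1}]$ to its left, so each of the $O(n^2)$ table entries costs one subtraction and one division.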
4) Algorithm implementation
Based on property 3.1, the interpolation coefficients can be generated iteratively with a node-adding function. An implementation of this idea:
function [vecx_new, vect_new] = newnpi(vect, vecx, newpoint)
% Node-adding function for the Newton interpolation algorithm.
% Three inputs: original interpolation point row vector vect, original
% coefficient row vector vecx, new point newpoint = [t, y];
% Two outputs: new coefficient row vector vecx_new, new interpolation
% point row vector vect_new.
vecsize = size(vecx, 2);
vecx_new = zeros(1, vecsize + 1);
vecx_new(1:vecsize) = vecx;
vect_new = zeros(1, vecsize + 1);
vect_new(1:vecsize) = vect;
vect_new(vecsize + 1) = newpoint(1);
% Evaluate the current polynomial at the new node (Horner nesting)
p_new = hornernpi(vect, vecx, newpoint(1));
% Evaluate the new basis function pi_{n+1} at the new node
w_new = 1;
for i = 1:vecsize
    w_new = w_new*(newpoint(1) - vect(i));
end
vecx_new(vecsize + 1) = (newpoint(2) - p_new)/w_new;
end
The node-adding function newnpi can be used on its own, and calling it repeatedly generates the Newton interpolation coefficients from scratch:
function [polyfun, vecx] = newnewtonpi(cvect, cvecy)
% Generate the Newton interpolation polynomial coefficients by
% repeatedly calling the node-adding function.
% Two inputs: interpolation point row vector cvect, function value row
% vector cvecy;
% Two outputs: polynomial function handle polyfun, coefficient row
% vector vecx.
% Build the coefficient row vector iteratively
vect = cvect(1);
vecx = cvecy(1);
vecsize = size(cvect, 2);
for i = 2:vecsize
    [vecx, vect] = newnpi(vect, vecx, [cvect(i), cvecy(i)]);
end
% Generate the function handle with Horner nesting for efficiency
syms f t;
f = vecx(vecsize);
for i = vecsize-1:-1:1
    f = vecx(i) + (t - cvect(i))*f;
end
polyfun = matlabFunction(f);
end
Another approach uses divided differences. The implementation below is not recursive; instead it fills a table of cached divided-difference values forward.
function [polyfun, vecx] = recnewtonpi(vect, vecy)
% Generate the Newton interpolation polynomial coefficients with
% divided differences.
% Two inputs: interpolation point row vector vect, function value row
% vector vecy;
% Two outputs: polynomial function handle polyfun, coefficient row
% vector vecx.
vecsize = size(vect, 2);
div = diag(vecy);      % div(i, i+k) caches f[t_i, ..., t_{i+k}]
% Fill the divided-difference table to obtain the coefficient row vector
for k = 1:vecsize-1
    for i = 1:vecsize-k
        div(i, i+k) = (div(i+1, i+k) - div(i, i+k-1))/(vect(i+k) - vect(i));
    end
end
vecx = div(1, :);
% Generate the function handle with Horner nesting
syms f t;
f = vecx(vecsize);
for i = vecsize-1:-1:1
    f = vecx(i) + (t - vect(i))*f;
end
polyfun = matlabFunction(f);
end
Either way, the results are identical. With the same example:
vect = [-2, 0, 1]; vecy = [-27, -1, 0];
% command-line input 1: the node-adding method
[polyfun, vecx] = newnewtonpi(vect, vecy)
% command-line input 2: the divided-difference method
[polyfun, vecx] = recnewtonpi(vect, vecy)
% command-line output (identical in both cases)
polyfun =
  function_handle with value:
    @(t)-(t.*4.0-1.3e1).*(t+2.0)-2.7e1
vecx =
   -27    13    -4
It is easy to verify that this is the polynomial interpolating function of the original data points.
Polynomial function interpolation: global polynomial interpolation (1): monomial-basis interpolation, Lagrange interpolation, Newton interpolation [MATLAB]