A LaTeX Chinese template
I had previously looked for a generic LaTeX template on the Internet for writing documents, to save the time of setting up the front matter and so on. I found that the templates on the web are either too complex or fail to compile, so here I provide a simple, general-purpose Chinese template. Some packages are included in the form of comments; uncomment them when needed.
\documentclass[a4paper]{ctexart}% ctex article class
\usepackage[top=3cm,bottom=2cm,left=2cm,right=2cm]{geometry}% page margins
%\usepackage{amsmath}% mathematical formulas
\usepackage{amsthm}
%\usepackage{longtable}% long tables
\usepackage{graphicx}% figures
%\usepackage{tikz}% drawing
%\usepackage{cite}
%\usepackage{listings}
%\usepackage{amsfonts}
%\usepackage{subfigure}
%\usepackage{float}
%\usepackage[colorlinks,linkcolor=black,hyperindex,CJKbookmarks,dvipdfm]{hyperref}
%\lstset{language=Mathematica}% typeset listings with Mathematica keywords highlighted
%\lstset{extendedchars=false}% fixes characters disappearing from chapter headings and headers when listings break across pages
%\usetikzlibrary{shapes,arrows}% TikZ libraries
%\usepackage{overpic}% annotations over figures
%\usepackage{ccaption}% bilingual (Chinese/English) captions
%\usepackage[numbers,sort&compress]{natbib}% numeric citations, sorted and compressed
%\bibliographystyle{gbt7714-2005nlang}% bibliography style GBT7714-2005n.bst
%\usepackage[draft=false,colorlinks=true,CJKbookmarks=true,linkcolor=black,citecolor=black,urlcolor=blue]{hyperref}% clickable cross-references; this package loads graphicx automatically
%\usepackage{textcomp}% Celsius symbol
%\usepackage{ccmap}% makes Chinese text in the PDF searchable and copyable
%\usepackage{myfont}% fonts
%\usepackage{color}% colored text for gnuplot
%\usepackage{texshade}% texshade conflicts with graphicx, so load it last
%\usepackage{indentfirst}
%\setlength{\parindent}{2em}
%\makeatletter
%\renewcommand{\chapter}{\endgraf
%  \thispagestyle{empty}% page style of chapter page is 'plain'
%  \global\@topnum\z@% prevents figures from going at top of page
%  \@afterindenttrue% indents the first paragraph; change to \@afterindentfalse to remove the indent
%  \secdef\@chapter\@schapter}
%\makeatother
%\renewcommand{\textfraction}{0.15}
%\renewcommand{\topfraction}{0.85}
%\renewcommand{\bottomfraction}{0.65}
%\renewcommand{\floatpagefraction}{0.60}
%\author{Lu Song}
\begin{document}
%\CTEXoptions[contentsname={\bfseries\zihao{4}\quad}]
%\CTEXsetup[nameformat+={\zihao{3}}]{chapter}
%\CTEXsetup[titleformat+={\zihao{3}}]{chapter}
%\CTEXsetup[number={\arabic{chapter}}]{chapter}
%\CTEXsetup[name={,}]{chapter}
%\CTEXsetup[format={\zihao{4}}]{section}
%\CTEXsetup[format={\bfseries\zihao{4}}]{subsection}
%\CTEXsetup[format={\bfseries\zihao{-4}}]{paragraph}
%\CTEXsetup[beforeskip={0em}]{paragraph}
%\CTEXsetup[beforeskip={0pt}]{chapter}
%\CTEXsetup[afterskip={2em}]{chapter}
%\CTEXsetup[afterskip={0pt}]{subsection}
%\captionwidth{0.8\textwidth}
%\changecaptionwidth
%\thispagestyle{empty}
%\pagestyle{plain}
%\newpage
%\setcounter{page}{1}
%\pagenumbering{roman}
%\noindent\addcontentsline{toc}{section}{Abstract}
\begin{center}\zihao{3}\textbf{The application of Newton's method to polynomial root finding}\end{center}
\begin{center}\zihao{5}\textbf{Zhengzhou University, School of Mathematics and Statistics \quad Information and Computational Science \quad Lu Song}\end{center}
%\begin{center}\zihao{4}\textbf{Abstract}\quad\zihao{-4}\end{center}
%\vspace{1em}
%\noindent\zihao{4}\textbf{Keywords}\quad\zihao{-4} minimal surface of revolution; Mathematica; spline interpolation; variational method
%\tableofcontents
%\setcounter{page}{1}
%\pagenumbering{arabic}
%\pagestyle{headings}

Sections 5.5 to 5.8 mainly introduce polynomial root finding and some typical root-finding methods, of which there are far more than one might imagine.
For example, Bauer (1956), Jenkins and Traub (1970), Nickel (1966), and Henrici (1974) each mention some of them.

The importance of general methods for determining the roots of an arbitrary polynomial is sometimes overestimated. In practical applications, polynomials are mostly given in a specific form, for example as characteristic polynomials.
In that case the roots of the polynomial are the eigenvalues of a matrix, and we will describe methods for that problem in detail in Chapter 6.

We now describe in detail the application of Newton's method to finding the roots of a given polynomial $p(x)$. To evaluate Newton's iteration
\[{x_{k+1}} := x_k - \frac{p(x_k)}{p'(x_k)},\]
we must first compute the values of the polynomial and of its first derivative at the point $x = x_k$. Suppose the polynomial is given in the form
\[p(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n.\]
Then $p(x_k)$ and $p'(x_k)$ can be computed as follows. For $x = \xi$,
\[p(\xi) = (\cdots((a_0\xi + a_1)\xi + a_2)\xi + \cdots)\xi + a_n.\]
The repeated multiplications by $\xi$ in this expression suggest the recursion
\begin{equation}\label{5.5.1}
\begin{array}{l}
b_0 := a_0,\\
b_i := b_{i-1}\xi + a_i,\;\;\;i = 1, \ldots, n,
\end{array}
\end{equation}
and the value of the polynomial $p$ at $\xi$ is then given by
\[p(\xi) = b_n.\]
The algorithm that evaluates a polynomial by the recursion (\ref{5.5.1}) is called Horner's method. The quantities $b_i$ computed along the way are exactly the coefficients of the polynomial
\[p_1(x) := b_0 x^{n-1} + b_1 x^{n-2} + \cdots + b_{n-1}\]
obtained by dividing $p(x)$ by $x - \xi$:
\begin{equation}\label{5.5.2}
p(x) = (x - \xi)p_1(x) + b_n.
\end{equation}
This is easily confirmed by comparing coefficients on both sides of (\ref{5.5.2}). Further, differentiating both sides of (\ref{5.5.2}) with respect to $x$ and setting $x = \xi$ gives
\[p'(\xi) = p_1(\xi).\]
Therefore the first derivative $p'(\xi)$ can be determined by applying Horner's method a second time, with the results of the first pass serving as the coefficients of the second.
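As a concrete illustration (a sketch of my own, not part of the original text; the names `horner` and `newton_step` are invented), the double Horner pass and the resulting Newton step might look as follows in Python:

```python
def horner(a, xi):
    """Evaluate p(xi) by the recursion (5.5.1): b_0 = a_0, b_i = b_{i-1}*xi + a_i.
    Returns (p(xi), [b_0, ..., b_{n-1}]); the b_i are the coefficients of p_1."""
    b = [a[0]]
    for ai in a[1:]:
        b.append(b[-1] * xi + ai)
    return b[-1], b[:-1]

def newton_step(a, x):
    """One Newton step x - p(x)/p'(x), using p'(x) = p_1(x) from (5.5.2)."""
    px, b = horner(a, x)    # first pass: p(x) and the coefficients of p_1
    dpx, _ = horner(b, x)   # second pass: p'(x) = p_1(x)
    return x - px / dpx

# Example: p(x) = x^2 - 2, coefficients [a_0, a_1, a_2]
a = [1.0, 0.0, -2.0]
x = 2.0
for _ in range(6):
    x = newton_step(a, x)
# x is now very close to sqrt(2)
```

Each `newton_step` call performs two Horner passes, so one Newton iteration costs roughly $2n$ multiplications.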
That is,
\[p'(\xi) = (\cdots(b_0\xi + b_1)\xi + \cdots)\xi + b_{n-1}.\]
In practice, however, the polynomial $p(x)$ is often given in a form other than
\[p(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n.\]
A particularly important case is when $p(x)$ is the characteristic polynomial of a symmetric tridiagonal matrix
\[J = \left[\begin{array}{cccc}
\alpha_1 & \beta_2 & & 0\\
\beta_2 & \ddots & \ddots & \\
 & \ddots & \ddots & \beta_n\\
0 & & \beta_n & \alpha_n
\end{array}\right],\]
where the $\alpha_i, \beta_i$ are real numbers. In other words, $p(x)$ is the last of the characteristic polynomials of the leading principal submatrices,
\[p_i(x) = \det\left(\left[\begin{array}{cccc}
\alpha_1 - x & \beta_2 & & 0\\
\beta_2 & \ddots & \ddots & \\
 & \ddots & \ddots & \beta_i\\
0 & & \beta_i & \alpha_i - x
\end{array}\right]\right).\]
For these polynomials we have the recursion
\begin{equation}\label{5.5.3}
\begin{array}{l}
p_0(x) := 1,\\
p_1(x) := (\alpha_1 - x)\cdot 1,\\
p_i(x) := (\alpha_i - x)p_{i-1}(x) - \beta_i^2 p_{i-2}(x),\;\;\;i = 2, 3, \cdots, n,\\
p(x) := \det(J - xI) := p_n(x).
\end{array}
\end{equation}
These formulas can be used to compute $p(\xi)$ for any $x = \xi$ and any given matrix elements $\alpha_i, \beta_i$. Similarly, differentiating (\ref{5.5.3}) yields a recursion for $p'(x)$:
\begin{equation}\label{5.5.4}
\begin{array}{l}
p'_0(x) := 0,\\
p'_1(x) := -1,\\
p'_i(x) := -p_{i-1}(x) + (\alpha_i - x)p'_{i-1}(x) - \beta_i^2 p'_{i-2}(x),\;\;\;i = 2, 3, \cdots, n,\\
p'(x) := p'_n(x).
\end{array}
\end{equation}
The two recursions (\ref{5.5.3}) and (\ref{5.5.4})
can be evaluated simultaneously.

From the general discussion of Newton's method in Section 5.3, we know that the sequence $x_k$ determined by Newton's method converges only if the initial point $x_0$ is close enough to a zero $\xi$. A bad initial value can cause the sequence $x_k$ to diverge, even for polynomials. If a real polynomial $p(x)$ has no real roots (for example, $p(x) = x^2 + 1$), then Newton's method cannot converge for any real initial value at all. For arbitrary polynomials there is no universally safe method for selecting an effective initial value. In one important special case, however, there is a general approach: namely, when all roots $\xi_i$ ($i = 1, 2, \ldots, n$) are real and satisfy
\[\xi_1 \ge \xi_2 \ge \cdots \ge \xi_n.\]
In Section 5.6, Theorem (5.6.5) will show that the polynomials defined by (\ref{5.5.3}) have this property, provided the matrix elements $\alpha_i, \beta_i$ are real.

\newtheorem{theorem}{Theorem}
\begin{theorem}\label{5.5.5}
Let $p(x)$ be a real polynomial of degree $n \ge 2$ all of whose roots $\xi_i$ are real, with
\[\xi_1 \ge \xi_2 \ge \cdots \ge \xi_n.\]
Then for any initial value $x_0 > \xi_1$, Newton's method produces a strictly decreasing sequence $x_k$ with $x_k > \xi_1$.
\end{theorem}
\begin{proof}
Without loss of generality we may assume $p(x_0) > 0$: since $p(x)$ does not change sign for $x > \xi_1$, we have for every $x > \xi_1$
\[p(x) = a_0 x^n + \cdots + a_n > 0,\]
and hence $a_0 > 0$. By Rolle's theorem, the derivative $p'(x)$ has $n-1$ real roots $\alpha_i$ with
\[\xi_1 \ge \alpha_1 \ge \xi_2 \ge \alpha_2 \ge \cdots \ge \alpha_{n-1} \ge \xi_n,\]
and since the degree of $p'$ is $n-1 \ge 1$, the $\alpha_i$ listed above are exactly all of its roots. Because $a_0 > 0$, it is easy to see that $p'(x) > 0$ for every $x > \alpha_1$. Applying Rolle's theorem repeatedly, and using the condition $n \ge 2$, we obtain
\begin{equation}\label{5.5.6}
\begin{array}{l}
p''(x) > 0\;\;\;x > \alpha_1,\\
p'''(x) \ge 0\;\;\;x \ge \alpha_1.
\end{array}
\end{equation}
Therefore $p$ and $p'$ are convex functions for $x \ge \alpha_1$.

Now, since $p'(x_k) > 0$ and $p(x_k) > 0$, $x_k > \xi_1$ implies
\[x_{k+1} = x_k - \frac{p(x_k)}{p'(x_k)} < x_k.\]
It remains to show that Newton's method cannot ``overshoot''. From (\ref{5.5.6}) and $x_k > \xi_1 \ge \alpha_1$, Taylor's theorem gives
\[0 = p(\xi_1) = p(x_k) + (\xi_1 - x_k)p'(x_k) + \frac{1}{2}(\xi_1 - x_k)^2 p''(\delta) > p(x_k) + (\xi_1 - x_k)p'(x_k),\;\;\;\xi_1 < \delta < x_k.\]
By the definition of $x_{k+1}$ we have $p(x_k) = p'(x_k)(x_k - x_{k+1})$, so
\[0 > p'(x_k)(x_k - x_{k+1} + \xi_1 - x_k) = p'(x_k)(\xi_1 - x_{k+1}).\]
Since $p'(x_k) > 0$, it follows that $x_{k+1} > \xi_1$.
\end{proof}
For later use, we note the following consequence of (\ref{5.5.6}):
\newtheorem{lemma}{Lemma}
\begin{lemma}\label{5.5.7}
Let $p(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n$, $a_0 > 0$, be a real polynomial of degree $n \ge 2$ all of whose roots are real.
If $\alpha_1$ is the largest root of $p'$, then $p'''(x) \ge 0$ for every $x \ge \alpha_1$; in other words, $p'$ is a convex function for $x \ge \alpha_1$.
\end{lemma}

One problem still faces us: without knowing $\xi_1$ in advance, how do we find an initial value $x_0$ larger than $\xi_1$? The following theorem solves this problem:
\begin{theorem}\label{5.5.8}
For any polynomial $p(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n$, all roots $\xi_i$ satisfy
\[\left|\xi_i\right| \le \max\left\{\left|\frac{a_n}{a_0}\right|, 1 + \left|\frac{a_{n-1}}{a_0}\right|, \cdots, 1 + \left|\frac{a_1}{a_0}\right|\right\},\]
\[\left|\xi_i\right| \le \max\left\{1, \sum_{j=1}^{n}\left|\frac{a_j}{a_0}\right|\right\},\]
\[\left|\xi_i\right| \le \max\left\{\left|\frac{a_n}{a_{n-1}}\right|, 2\left|\frac{a_{n-1}}{a_{n-2}}\right|, \cdots, 2\left|\frac{a_1}{a_0}\right|\right\},\]
\[\left|\xi_i\right| \le \sum_{j=0}^{n-1}\left|\frac{a_{j+1}}{a_j}\right|,\]
\[\left|\xi_i\right| \le 2\max\left\{\left|\frac{a_1}{a_0}\right|, \sqrt{\left|\frac{a_2}{a_0}\right|}, \sqrt[3]{\left|\frac{a_3}{a_0}\right|}, \cdots, \sqrt[n]{\left|\frac{a_n}{a_0}\right|}\right\}.\]
\end{theorem}
These bounds will be derived in Section 6.9, together with a comparison; see Householder (1970). Further bounds can be found in Marden (1949).
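As a sketch of my own (not from the text; the name `root_bound` is invented), the first bound of the theorem gives a cheaply computable initial value $x_0$ with $x_0 \ge \xi_1$, since every root lies in $[-x_0, x_0]$:

```python
def root_bound(a):
    """Upper bound on |xi_i| for p(x) = a[0]*x^n + ... + a[n], a[0] != 0,
    using the first bound of the theorem:
    max{ |a_n/a_0|, 1 + |a_{n-1}/a_0|, ..., 1 + |a_1/a_0| }."""
    a0 = a[0]
    n = len(a) - 1
    bound = abs(a[n] / a0)
    for j in range(1, n):
        bound = max(bound, 1 + abs(a[j] / a0))
    return bound

# p(x) = x^2 - 2 has roots +/- sqrt(2); the bound must dominate them
b = root_bound([1.0, 0.0, -2.0])
```

With all roots real, starting Newton's method from `root_bound(a)` guarantees the hypothesis $x_0 \ge \xi_1$ of the preceding results.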
Convergence alone, however, does not mean fast convergence. If the chosen initial value is very far from the root, Newton's method may converge very slowly at the beginning. Indeed, if $x_k$ is sufficiently large, then
\[x_{k+1} = x_k - \frac{x_k^n + \cdots}{n x_k^{n-1} + \cdots} \approx x_k\left(1 - \frac{1}{n}\right),\]
so the change from $x_k$ to $x_{k+1}$ is relatively small. These observations lead us to seek a better ``double-step'' method in place of the plain Newton method:
\[x_{k+1} = x_k - 2\frac{p(x_k)}{p'(x_k)},\;\;\;k = 0, 1, 2, \ldots\]
Of course, we now run the risk of ``overshooting''. In particular, when the polynomial has only real roots and the initial value satisfies $x_0 \ge \xi_1$, some iterate $x_{k+1}$ may cross the largest root $\xi_1$, losing the advantage of Theorem \ref{5.5.5}. Fortunately, the non-convergence caused by such an overshoot can be ruled out: thanks to some good properties of polynomials, the overshooting step yields a good new initial value $y$ ($\xi_1 \ge y > \xi_2$) from which Newton's method again converges. This is the content of the following theorem:
\begin{theorem}\label{5.5.9}
Let $p(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n$, $a_0 > 0$, be a real polynomial of degree $n \ge 2$ all of whose roots are real,
ordered as $\xi_1 \ge \xi_2 \ge \cdots \ge \xi_n$, and let $\alpha_1$ be the largest root of $p'$, so that
\[\xi_1 \ge \alpha_1 \ge \xi_2;\]
for $n = 2$ we require in addition that $\xi_1 > \xi_2$. For each $z > \xi_1$ define
\[z' := z - \frac{p(z)}{p'(z)},\;\;\;y := z - 2\frac{p(z)}{p'(z)},\;\;\;y' := y - \frac{p(y)}{p'(y)}\]
(Figure \ref{figure 8} depicts the situation). Then
\begin{equation}\label{5.5.10}
\begin{array}{l}
\alpha_1 < y,\\
\xi_1 \le y' \le z'.
\end{array}
\end{equation}
\begin{figure}[!h]
\small
\centering
\includegraphics[width=12cm]{111.eps}
\caption{Geometric interpretation of the double-step method}\label{figure 8}
\end{figure}
\end{theorem}
It is easy to see that for $n = 2$ and $\xi_1 = \xi_2$, every $z > \xi_1$ would give $y = \xi_1$.
\begin{proof}
Assume again that $p(z) > 0$ for $z > \xi_1$. For such a value $z$, we introduce two quantities $\delta_0, \delta_1$ (shown in Figure \ref{figure 8}), defined as follows:
\[\delta_0 := p(z') = p(z') - p(z) - (z' - z)p'(z) = \int_z^{z'} \left[p'(t) - p'(z)\right]dt,\]
\[\delta_1 := p(z') - p(y) - (z' - y)p'(y) = \int_y^{z'} \left[p'(t) - p'(y)\right]dt.\]
Of course, these two quantities can also be interpreted as the two shaded areas on the graph of $p'(x)$; see Figure \ref{figure 9}.
\begin{figure}[!h]
\small
\centering
\includegraphics[width=12cm]{222.eps}
\caption{The quantities $\delta_0$ and $\delta_1$ interpreted as areas}\label{figure 9}
\end{figure}
By Lemma \ref{5.5.7}, $p'(x)$ is convex for $x \ge \alpha_1$. Therefore, since $z' - y = z - z' > 0$ (positive by Theorem \ref{5.5.5}), we get
\begin{equation}\label{5.5.11}
\delta_1 \le \delta_0\;\;\;\mbox{for } y \ge \alpha_1,
\end{equation}
with $\delta_1 = \delta_0$ if and only if $p'$ is a linear function, that is, $p$ is a polynomial of degree 2. We now distinguish the three cases $y > \xi_1$, $y = \xi_1$, and $y < \xi_1$. For $y > \xi_1$, the assertion follows at once from Theorem \ref{5.5.5}. For $y = \xi_1$, we first show that $\xi_2 < \alpha_1 < \xi_1$, that is, that $\xi_1$ is a simple root of $p$. If instead $y = \xi_1 = \xi_2 = \alpha_1$ were a multiple root then, since $n \ge 3$ in this case, (\ref{5.5.11}) would hold with strict inequality, $\delta_1 < \delta_0$, yielding the contradiction
\[\delta_1 = p(z') - p(\xi_1) - (z' - \xi_1)p'(\xi_1) = p(z') < \delta_0 = p(z').\]
Thus $\xi_1$ is a simple root.
Hence $\xi_2 < \alpha_1 < \xi_1 = y$, and the assertion holds in the second case as well.

Only the case $y < \xi_1$ now remains. If $\alpha_1 < y$, the assertion is established as follows. Since $p(x) > 0$ for $x > \xi_1$ and $\xi_2 < \alpha_1 < y < \xi_1$, we get $p(y) < 0$ and $p'(y) > 0$; in particular, $y$ is a suitable new starting point. Further, because $p(y) = (y - y')p'(y)$ and $\delta_0 \ge \delta_1$, we have
\[\delta_0 - \delta_1 = p(y) + (z' - y)p'(y) = p'(y)(z' - y') \ge 0,\]
so $z' \ge y'$. A Taylor expansion gives
\[p(\xi_1) = 0 = p(y) + (\xi_1 - y)p'(y) + \frac{1}{2}(\xi_1 - y)^2 p''(\delta),\;\;\;y < \delta < \xi_1.\]
Since $p''(x) \ge 0$ for $x \ge \alpha_1$, and using $p(y) = (y - y')p'(y)$ with $p'(y) > 0$, it follows that
\[0 \ge p(y) + (\xi_1 - y)p'(y) = p'(y)(\xi_1 - y'),\]
so $y' \ge \xi_1$. To complete the proof, it remains to show that for every $z > \xi_1$,
\begin{equation}\label{5.5.12}
y = y(z) > \alpha_1.
\end{equation}
We again distinguish two cases,
$\xi_1 > \alpha_1 > \xi_2$ and $\xi_1 = \alpha_1 = \xi_2$.
In the case $\xi_1 > \alpha_1 > \xi_2$, (\ref{5.5.12}) holds at least for every $z$ with
\[\xi_1 < z < \xi_1 + (\xi_1 - \alpha_1),\]
because Theorem \ref{5.5.5} implies $z > z' \ge \xi_1$, so that by the definition of $y = y(z)$,
\[y = z' - (z - z') > \xi_1 - (\xi_1 - \alpha_1) = \alpha_1.\]
Thus we can choose a $z_0$ with $y(z_0) > \alpha_1$. Suppose now that there were a $z_1 > \xi_1$ with $y(z_1) \le \alpha_1$. By the intermediate value theorem for continuous functions there would then exist $\overline z \in [z_0, z_1]$ with $\overline y = y(\overline z) = \alpha_1$, and from (\ref{5.5.11}), for $z = \overline z$,
\[\delta_1 = p(\overline z') - p(\overline y) - (\overline z' - \overline y)p'(\overline y) = p(\overline z') - p(\overline y) \le \delta_0 = p(\overline z'),\]
since $p'(\overline y) = p'(\alpha_1) = 0$. Hence $p(\overline y) = p(\alpha_1) \ge 0$.
On the other hand, $\xi_1$ is a simple root in this case, so $p(x)$ changes sign at $\xi_1$ and $p(\alpha_1) < 0$, a contradiction. Hence (\ref{5.5.12}) holds for all $z > \xi_1$.

In the case $\xi_1 = \alpha_1 = \xi_2$ we have $n \ge 3$ by hypothesis, and without loss of generality we may take $a_0 = 1$, so that $p(x) = x^n + a_1 x^{n-1} + \cdots + a_n$. Then
\[z' = z - \frac{p(z)}{p'(z)} = z - \frac{z}{n}\,\frac{1 + \frac{a_1}{z} + \cdots + \frac{a_n}{z^n}}{1 + \frac{n-1}{n}\frac{a_1}{z} + \cdots + \frac{a_{n-1}}{n z^{n-1}}} = z - \frac{z}{n}\left(1 + O\left(\frac{1}{z}\right)\right),\]
and thereby
\[y = y(z) = z + 2(z' - z) = z - \frac{2z}{n}\left(1 + O\left(\frac{1}{z}\right)\right) = z\left(1 - \frac{2}{n}\right) + O(1).\]
Because $n \ge 3$, the value of $y(z)$ increases without bound as $z$ tends to $+\infty$. Therefore we can again find a $z_0 > \xi_1$ with $y_0 = y(z_0) > \alpha_1$. If (\ref{5.5.12}) did not hold for all $z > \xi_1$, then, as before, we could conclude that
there would exist $\overline z$ with $\overline y = y(\overline z) = \alpha_1$; but in the preceding part of the proof we have already seen that $\overline y = \alpha_1 = \xi_1 = \xi_2$ is impossible.
\end{proof}

The practical meaning of this theorem is the following. If we start with an initial value $x_0 > \xi_1$, then the values generated by the ``double-step'' method
\[x_{k+1} = x_k - 2\frac{p(x_k)}{p'(x_k)}\]
satisfy either
\[x_0 \ge x_1 \ge \cdots \ge x_k \ge x_{k+1} \ge \cdots \ge \xi_1,\;\;\;\mathop{\lim}\limits_{k \to \infty} x_k = \xi_1,\]
or there is a first index $k_0$, defining $y := x_{k_0}$, with
\[p(x_0)p(x_k) > 0,\;\;\;0 \le k < k_0,\]
\[p(x_0)p(x_{k_0}) < 0.\]
In the first case all values $p(x_k)$ have the same sign, and $x_k$ converges monotonically to the root $\xi_1$, certainly faster than with the plain Newton method. In the second case,
\[x_0 > x_1 > \cdots > x_{k_0 - 1} > \xi_1 > y = x_{k_0} \ge \alpha_1 \ge \xi_2.\]
Using $y_0 := y$ as the new starting point of an ordinary Newton iteration
\[y_{k+1} = y_k - \frac{p(y_k)}{p'(y_k)},\;\;\;k = 0, 1, \ldots,\]
we again obtain monotone convergence:
\[y_1 \ge y_2 \ge \cdots \ge \xi_1,\;\;\;\mathop{\lim}\limits_{k \to \infty} y_k = \xi_1.\]
Having found the largest root of the polynomial, we naturally go on to find its other roots. The following device lets us ``remove'' the root $\xi_1$ already obtained: we form the polynomial of degree $n-1$
\[p_1(x) := \frac{p(x)}{x - \xi_1}.\]
This process is called ``deflation''. The largest root of $p_1(x)$ is $\xi_2$, which can be found by the method just described; $\xi_1$, or the better value $y = x_{k_0}$ obtained at the first overshoot, can serve as the initial value for this iteration.
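The complete procedure sketched above (double-step iteration until the first sign change, ordinary Newton iteration from $y$ onward, then deflation) might be assembled as follows in Python. This is my own illustrative sketch, with invented names and an invented stopping tolerance, not code from the text:

```python
def eval_p_dp(a, x):
    """p(x) and p'(x) by the double Horner pass (5.5.1)/(5.5.2)."""
    b = a[0]
    c = 0.0
    for ai in a[1:]:
        c = c * x + b       # accumulates p_1(x) = p'(x)
        b = b * x + ai      # accumulates p(x)
    return b, c

def largest_root(a, x0, tol=1e-12, max_iter=200):
    """Largest root of p via the double-step method, starting from x0 >= xi_1."""
    x = x0
    px0, _ = eval_p_dp(a, x0)
    for _ in range(max_iter):
        px, dpx = eval_p_dp(a, x)
        if px * px0 < 0:            # first overshoot: x = y, switch to Newton
            break
        x_new = x - 2 * px / dpx    # double step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    for _ in range(max_iter):       # plain Newton iteration from y onward
        px, dpx = eval_p_dp(a, x)
        x_new = x - px / dpx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def deflate(a, xi):
    """Coefficients b_0, ..., b_{n-1} of p_1(x) = p(x)/(x - xi), from (5.5.2)."""
    b = [a[0]]
    for ai in a[1:-1]:
        b.append(b[-1] * xi + ai)
    return b

# p(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: all roots real
a = [1.0, -6.0, 11.0, -6.0]
r1 = largest_root(a, 10.0)              # largest root, near 3
r2 = largest_root(deflate(a, r1), r1)   # next root, near 2
```

As the text warns next, deflation should be used with care: rounding errors in $\xi_1$ perturb the coefficients of $p_1$, so roots found late in the process are less accurate.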
Continuing in this way, all the roots can eventually be found. Of course, in general this deflation procedure is not without danger, because rounding errors perturb the computed $p_1(x)$ to a certain degree. In fact, to replace $p_1(x)$