"Linear algebra" 06-jordan standard type


It is now time to study how to decompose a space into invariant subspaces, and the hardest part is that we do not know where to start. You might want to split off a cyclic subspace, but neither the existence nor the uniqueness of such a scheme can be settled. A decomposition into invariant subspaces requires not only that each subspace \(V'\) be invariant, but also that the images of elements outside \(V'\) never fall into \(V'\); this makes any attempt to split the space off piece by piece from a local starting point infeasible. Moreover, such a method cannot guarantee uniqueness of the decomposition, because the process depends on the choice of each subspace.

1. Annihilating Polynomials

It seems we have to start from the global picture instead, and look for a property that partitions the space perfectly. So first place the whole space \(V\) under one property of \(\mathscr{A}\), and then subdivide it by that property. This step is hard to come up with, and it surely took a long time historically. We have already laid some groundwork, the most important piece being the invariant subspaces of polynomials in a transformation. You may have asked yourself whether a general transformation satisfies some identity; it would be even better if that identity could be found among polynomials.

The idea is good, but reaching the conclusion requires a clever construction; I do not know how mathematicians found it, as my own ability falls short. Looking back at the characteristic matrix \(\lambda I-A\), you can regard it either as a polynomial with matrix coefficients or as a matrix whose entries are polynomials. In all these variants, however, \(\lambda\) is by default an element of the field \(K\), not an arbitrary indeterminate. Therefore the identities obtained by such manipulations cannot be treated as genuine polynomial identities; in particular, one must not casually substitute a matrix into such an equation. This point must be kept clear.

Fortunately, there is one special case in which a matrix can be substituted into a polynomial identity. Consider any identity of the form (1) for the characteristic matrix; expanding the left side and matching coefficients with the right side yields the system of equations (2). Multiplying these equations on both sides by \(I,A,A^2,\cdots\) respectively and adding them gives \(0=f(A)\), which looks exactly as if the matrix \(A\) had been substituted into identity (1). Such a substitution does not hold in general; it works here because of the special form of the characteristic matrix, and we record this interesting property as a conclusion.

\[(\lambda I-A)g(\lambda)=(\lambda I-A)(\lambda^m B_m+\lambda^{m-1}B_{m-1}+\cdots+B_0)=\lambda^n C_n+\lambda^{n-1}C_{n-1}+\cdots+C_0=f(\lambda)\tag{1}\]

\[-AB_0=C_0;\; B_0-AB_1=C_1;\; B_1-AB_2=C_2;\;\cdots;\; B_{n-1}-AB_n=C_n;\; B_n-AB_{n+1}=0;\;\cdots;\; B_m=0\tag{2}\]

In particular, taking \(g(\lambda)\) to be the adjugate matrix of \(\lambda I-A\), the right side of the identity becomes \(\varphi(\lambda)I\), which yields the Cayley-Hamilton theorem (equation (3)). For linear transformations the theorem reads \(\varphi(\mathscr{A})=0\); this holds on the whole space \(V\), in other words \(V\) is the kernel of the transformation \(\varphi(\mathscr{A})\). This is where we start looking for further conclusions.

\[\varphi(\lambda)=|\lambda I-A|\quad\Rightarrow\quad \varphi(A)=0\tag{3}\]
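A minimal sympy sketch of equation (3): compute \(\varphi(\lambda)\) and evaluate it at the matrix itself. The \(3\times 3\) matrix here is an arbitrary example, not from the text.

```python
import sympy as sp

# A small integer matrix chosen arbitrarily for illustration.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [1, 0, 3]])
n = A.shape[0]

lam = sp.symbols('lambda')
phi = A.charpoly(lam)               # phi(lambda) = |lambda*I - A|
print(phi.as_expr())

# Evaluate phi at the matrix A itself (Horner scheme); the result is the zero matrix.
phi_A = sp.zeros(n, n)
for c in phi.all_coeffs():          # coefficients from highest degree down
    phi_A = phi_A * A + c * sp.eye(n)
print(phi_A)                        # zero matrix: phi(A) = 0
```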

More generally, a polynomial satisfying \(f(\mathscr{A})=0\) is called an annihilating polynomial of \(\mathscr{A}\), and the monic annihilating polynomial of least degree is called the minimal polynomial of \(\mathscr{A}\), written \(d(\lambda)\). These definitions apply to matrices as well, and clearly the minimal polynomial is an invariant under similarity. Just as in the analogous argument from abstract algebra, it is easy to see that the minimal polynomial is unique and divides every annihilating polynomial, hence \(d(\lambda)\mid\varphi(\lambda)\).

There is also an interesting application of the characteristic and minimal polynomials. First write them in the uniform form \(f(\lambda)=\lambda^n+a_{n-1}\lambda^{n-1}+\cdots+a_0\), so that \(f(A)=0\). For an invertible matrix \(A\) it is easy to see that \(a_0\ne 0\); moving \(a_0I\) to the right side of the equation and factoring \(A\) out on the left gives equation (4). From this equation \(A^{-1}\) is easily computed.

\[A(A^{n-1}+a_{n-1}A^{n-2}+\cdots+a_1I)=-a_0I\tag{4}\]
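A sketch of computing \(A^{-1}\) from formula (4), again on an arbitrary invertible example matrix (my own choice, assuming exact sympy arithmetic):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [1, 0, 3]])
n = A.shape[0]

coeffs = A.charpoly().all_coeffs()   # [1, a_{n-1}, ..., a_1, a_0]
a0 = coeffs[-1]
assert a0 != 0                       # A is invertible iff the constant term is nonzero

# Horner evaluation of B = A^{n-1} + a_{n-1} A^{n-2} + ... + a_1 I
B = sp.zeros(n, n)
for c in coeffs[:-1]:                # drop a_0
    B = B * A + c * sp.eye(n)

A_inv = -B / a0                      # formula (4): A * B = -a_0 I
assert A_inv == A.inv()
```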

   Exercise: Show that the characteristic polynomial of a cyclic subspace is exactly its minimal polynomial, and find the characteristic polynomial of the matrix in formula (18).

2. Root Space

More generally, consider any polynomial \(f(\lambda)\) and suppose it has a coprime factorization \(f(\lambda)=f_1(\lambda)f_2(\lambda)\), so that formula (5) holds. Examine the invariant subspaces \(W=\text{ker}\,f(\mathscr{A})\) and \(W_i=\text{ker}\,f_i(\mathscr{A})\); obviously \(W_i\subseteq W\). For any \(\alpha\in W\) we have \(f_1(\mathscr{A})f_2(\mathscr{A})(\alpha)=0\), and by formula (5), \(\alpha\) decomposes as in (6), where clearly \(\alpha_i\in W_i\); hence \(W=W_1+W_2\).

\[g_1(\lambda)f_1(\lambda)+g_2(\lambda)f_2(\lambda)=1\tag{5}\]

\[\alpha=g_1(\mathscr{A})f_1(\mathscr{A})(\alpha)+g_2(\mathscr{A})f_2(\mathscr{A})(\alpha)=\alpha_2+\alpha_1\tag{6}\]

Now take \(\beta\in W_1\cap W_2\); applying formula (5) again gives \(\beta=g_1(\mathscr{A})f_1(\mathscr{A})(\beta)+g_2(\mathscr{A})f_2(\mathscr{A})(\beta)=0\), so \(W_1\cap W_2=0\), which means \(W=W_1\oplus W_2\). Summarizing by induction: if \(f(\lambda)\) has a pairwise coprime factorization \(f_1(\lambda)f_2(\lambda)\cdots f_s(\lambda)\), then formula (7) holds.

\[\text{ker}\,f(\mathscr{A})=\text{ker}\,f_1(\mathscr{A})\oplus\text{ker}\,f_2(\mathscr{A})\oplus\cdots\oplus\text{ker}\,f_s(\mathscr{A})\tag{7}\]
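A quick sympy check of formula (7) for \(f(\lambda)=(\lambda-1)(\lambda+1)\); the matrix, with eigenvalues \(1,-1,2\), is an arbitrary choice of mine:

```python
import sympy as sp

# Arbitrary example matrix with eigenvalues 1, -1, 2.
A = sp.Matrix([[1, 1, 0],
               [0, -1, 1],
               [0, 0, 2]])

f1 = A - sp.eye(3)        # f_1(A) for f_1(lambda) = lambda - 1
f2 = A + sp.eye(3)        # f_2(A) for f_2(lambda) = lambda + 1

W  = (f1 * f2).nullspace()            # ker f(A) with f = f_1 * f_2
W1 = f1.nullspace()                   # ker f_1(A)
W2 = f2.nullspace()                   # ker f_2(A)

# Direct sum check: dimensions add up and the union of the bases is independent.
basis = sp.Matrix.hstack(*(W1 + W2))
assert len(W) == len(W1) + len(W2) == basis.rank()
```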

Now the minimal polynomial \(d(\lambda)\) has, over an algebraically closed field (the complex field), the coprime factorization (8); applying formula (7) to (8) shows that (9) holds, where each \(W_{\lambda_i}\) is an invariant subspace. This is the decomposition we wanted. Although it guarantees existence and uniqueness, it is not yet the finest possible splitting, and the similar matrix has not yet reached a simple canonical form; that task is left to the following sections.

\[d(\lambda)=(\lambda-\lambda_1)^{r_1}(\lambda-\lambda_2)^{r_2}\cdots(\lambda-\lambda_s)^{r_s}\tag{8}\]

\[V=W_{\lambda_1}\oplus W_{\lambda_2}\oplus\cdots\oplus W_{\lambda_s},\quad W_{\lambda_i}=\text{ker}\,(\mathscr{A}-\lambda_i\mathscr{I})^{r_i}\tag{9}\]

There are some details to discuss. What is the relation between the minimal polynomial and the characteristic polynomial? How small can the minimal polynomial be? What does the multiplicity of a root of the characteristic polynomial mean? First, \(W_{\lambda_i}\) is nonzero: otherwise \(d(\lambda)\) with the factor \((\lambda-\lambda_i)^{r_i}\) removed would still be an annihilating polynomial, contradicting minimality. \(W_{\lambda_i}\ne 0\) is equivalent to saying that \(A-\lambda_iI\) is not of full rank, so \(\lambda_i\) is an eigenvalue of \(A\). Conversely, by formula (9), \(\lambda_1,\cdots,\lambda_s\) must include all eigenvalues of \(A\); otherwise the direct sum could not contain every eigenspace. Thus the minimal polynomial has exactly the same roots as the characteristic polynomial, and the multiplicity of a root in the characteristic polynomial is no less than its multiplicity in the minimal polynomial (equation (10)).

\[\varphi(\lambda)=(\lambda-\lambda_1)^{t_1}(\lambda-\lambda_2)^{t_2}\cdots(\lambda-\lambda_s)^{t_s},\quad t_i\geqslant r_i>0\tag{10}\]

Now set \(U_k=\text{ker}\,(\mathscr{A}-\lambda_i\mathscr{I})^k\); obviously \(U_1\) is the eigenspace of \(\lambda_i\), and \(U_1\subseteq U_2\subseteq U_3\subseteq\cdots\). This chain cannot increase forever, and it is easy to prove that once \(U_m=U_{m+1}\) holds, equality holds from then on. If the chain stabilized at some \(m>r_i\), then \(U_m\supset W_{\lambda_i}\) while \(U_m\) meets the other \(W_{\lambda}\) only in \(0\), contradicting formula (9). If it stabilized at some \(m<r_i\), then \(U_m=W_{\lambda_i}\); substituting into formula (9), it is easy to show that replacing the factor \((\lambda-\lambda_i)^{r_i}\) of \(d(\lambda)\) by \((\lambda-\lambda_i)^m\) still gives an annihilating polynomial, contradicting minimality.

Thus the chain stabilizes exactly at \(m=r_i\), which reveals the meaning of the multiplicity of a root of the minimal polynomial (equation (11)); \(W_{\lambda_i}\) is accordingly called the root space of \(\lambda_i\). Abbreviate \(W_{\lambda_i}\) as \(W\); clearly the restriction of \(\mathscr{A}\) to \(W\) is again a linear transformation, and by definition its minimal polynomial is \((\lambda-\lambda_i)^{r_i}\). From formulas (9) and (10) it follows that the characteristic polynomial of \(\mathscr{A}|_W\) is exactly \((\lambda-\lambda_i)^{t_i}\), which indirectly shows that the dimension of \(W_{\lambda_i}\) is \(t_i\). This is the meaning of the multiplicity of a root of the characteristic polynomial (equation (12)).

\[\text{ker}\,(\mathscr{A}-\lambda_i\mathscr{I})\subset\cdots\subset\text{ker}\,(\mathscr{A}-\lambda_i\mathscr{I})^{r_i}=\text{ker}\,(\mathscr{A}-\lambda_i\mathscr{I})^{r_i+1}=\cdots\tag{11}\]

\[\dim W_{\lambda_i}=t_i\tag{12}\]
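A sketch illustrating (11) and (12): a hypothetical matrix with the single eigenvalue \(2\), built from Jordan blocks of sizes \(3\) and \(1\) (so \(r_i=3\), \(t_i=4\)); the conjugating matrix \(P\) is an arbitrary invertible choice.

```python
import sympy as sp

# Single eigenvalue 2, algebraic multiplicity 4, blocks of sizes 3 and 1.
J = sp.Matrix([[2, 1, 0, 0],
               [0, 2, 1, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 2]])
P = sp.Matrix([[1, 0, 1, 0],
               [1, 1, 0, 0],
               [0, 1, 1, 1],
               [0, 0, 1, 1]])
A = P * J * P.inv()                   # similar to J, so same root-space structure

N = A - 2 * sp.eye(4)
dims = [len((N**k).nullspace()) for k in range(1, 6)]
print(dims)                           # [2, 3, 4, 4, 4]: the chain stabilizes at k = 3,
                                      # and dim of the root space is t_i = 4
```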

3. Nilpotent Transformations

As defined for \(W\), \(r_i\) is the smallest integer \(k\) such that \((\mathscr{A}-\lambda_i\mathscr{I})^k|_W=0\). Accordingly, we call a linear transformation satisfying \(\mathscr{A}^r=0,\,\mathscr{A}^{r-1}\ne 0\) a nilpotent transformation of index \(r\); thus \((\mathscr{A}-\lambda_i\mathscr{I})|_W\) is nilpotent of index \(r_i\). If we can find a simple similarity representative \(S\) of a nilpotent transformation, then \(A\) has the simple similarity representative \(S+\lambda_iI\), so let us set about solving this problem.

A nilpotent transformation is still nilpotent on any invariant subspace, so its minimal polynomial on any invariant subspace has the form \(\lambda^m\). In particular, the characteristic polynomial and the minimal polynomial of an \(m\)-dimensional cyclic subspace are both \(\lambda^m\); such a cyclic subspace is therefore called a strongly cyclic subspace. It is easy to see that the matrix of the transformation on a strongly cyclic subspace is (13), and that its \(k\)-th power is exactly \(I_n\) shifted \(k\) steps toward the upper right, so \(\text{rank}\,J_n^k=n-k\), until \(J_n^n=0\).

\[J_n=\begin{bmatrix}0&1&&\\&\ddots&\ddots&\\&&\ddots&1\\&&&0\end{bmatrix}\tag{13}\]
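A two-line numpy check of \(\text{rank}\,J_n^k=n-k\), with \(n=5\) chosen arbitrarily:

```python
import numpy as np

n = 5
J = np.eye(n, k=1)                  # J_n of formula (13): ones on the superdiagonal
for k in range(n + 1):
    r = np.linalg.matrix_rank(np.linalg.matrix_power(J, k))
    print(k, r)                     # rank J^k = n - k, down to J^5 = 0
```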

Let \(\mathscr{A}\) be a nilpotent transformation of index \(m\) on an \(n\)-dimensional space \(V\). When \(n=1\) we obviously have \(\mathscr{A}=0\) and the conclusion is trivial. When \(n=2\), \(m\) can be \(1\) or \(2\): \(m=1\) gives the trivial transformation \(\mathscr{A}=0\), while for \(m=2\) there exists \(\alpha\) with \(\mathscr{A}\alpha\ne 0\), and \(\alpha,\mathscr{A}\alpha\) are linearly independent, so \(V\) is itself a \(2\)-dimensional strongly cyclic subspace. Looking back at the case \(m=1\), \(V\) is then the direct sum of two \(1\)-dimensional strongly cyclic subspaces. The conjectured summary is: an \(n\)-dimensional space under a nilpotent transformation decomposes into \(l\) strongly cyclic subspaces, where \(l\) is the dimension of \(\text{ker}\,\mathscr{A}\).

Assume the conclusion above holds for spaces of dimension \(k<n\); we now prove by induction that it also holds in dimension \(n\). First write \(W=\text{ker}\,\mathscr{A}\). The trivial case \(\mathscr{A}=0\) clearly holds; when \(\mathscr{A}\ne 0\) we have \(W\ne 0\), so the quotient \(V/W\) has dimension less than \(n\). It is easy to prove that the transformation induced by \(\mathscr{A}\) on \(V/W\) is also nilpotent, so by the induction hypothesis \(V/W\) has a direct sum decomposition of the form (14). From an earlier conclusion we know that the representatives of these cosets, together with a basis of \(W\), form a complete basis of \(V\), so formula (15) holds.

\[\left<\mathscr{A}^{s_1-1}\alpha_1+W,\cdots,\alpha_1+W\right>\oplus\cdots\oplus\left<\mathscr{A}^{s_t-1}\alpha_t+W,\cdots,\alpha_t+W\right>\tag{14}\]

\[V=W\oplus U,\quad U=\left<\mathscr{A}^{s_1-1}\alpha_1,\cdots,\alpha_1,\cdots,\mathscr{A}^{s_t-1}\alpha_t,\cdots,\alpha_t\right>\tag{15}\]

Since each summand in formula (14) is a strongly cyclic subspace under the induced map, we have \(\mathscr{A}^{s_i}\alpha_i\in W\). To examine their linear independence, suppose \(k_1\mathscr{A}^{s_1}\alpha_1+\cdots+k_t\mathscr{A}^{s_t}\alpha_t=0\), i.e. formula (16) holds; then \(\beta\in W\), and obviously also \(\beta\in U\). By formula (15), \(\beta=0\), hence all \(k_i=0\) and the vectors \(\mathscr{A}^{s_i}\alpha_i\) are linearly independent.

\[\mathscr{A}\beta=0,\quad \beta=k_1\mathscr{A}^{s_1-1}\alpha_1+\cdots+k_t\mathscr{A}^{s_t-1}\alpha_t\tag{16}\]

Extend the \(\mathscr{A}^{s_i}\alpha_i\) to a basis \(\mathscr{A}^{s_1}\alpha_1,\cdots,\mathscr{A}^{s_t}\alpha_t,\gamma_1,\cdots,\gamma_r\) of \(W\). Considering that \(\left<\mathscr{A}^{s_i}\alpha_i,\cdots,\alpha_i\right>\) and \(\left<\gamma_j\right>\) are strongly cyclic subspaces, \(V\) decomposes as in (17) into a direct sum of strongly cyclic subspaces. More generally this is stated as formulas (18) and (19), where the order \(s_i+1\) of each strongly cyclic subspace is at most the nilpotency index \(m\).

\[V=\left<\mathscr{A}^{s_1}\alpha_1,\cdots,\alpha_1\right>\oplus\cdots\oplus\left<\mathscr{A}^{s_t}\alpha_t,\cdots,\alpha_t\right>\oplus\left<\gamma_1\right>\oplus\cdots\oplus\left<\gamma_r\right>\tag{17}\]

\[V=\left<\mathscr{A}^{s_1}\alpha_1,\cdots,\alpha_1\right>\oplus\cdots\oplus\left<\mathscr{A}^{s_l}\alpha_l,\cdots,\alpha_l\right>\tag{18}\]

\[l=\dim(\text{ker}\,\mathscr{A})=n-\text{rank}\,\mathscr{A}\tag{19}\]

Further, using the properties of \(J_n\), we can actually compute the number \(N(k)\) of cyclic subspaces of order \(k\,(1\leqslant k\leqslant m)\) in the decomposition. First, the system of equations (20) obviously holds; a simple calculation then yields formula (21). This formula shows that both the number and the orders of the cyclic subspaces in the decomposition of a nilpotent matrix are completely determined, so the decomposition can be said to be unique.

\[\text{rank}\,\mathscr{A}^0=N(1)\cdot 1+N(2)\cdot 2+\cdots+N(m)\cdot m\\\text{rank}\,\mathscr{A}^1=N(2)\cdot 1+N(3)\cdot 2+\cdots+N(m)\cdot(m-1)\\\text{rank}\,\mathscr{A}^2=N(3)\cdot 1+N(4)\cdot 2+\cdots+N(m)\cdot(m-2)\\\cdots\quad\cdots\tag{20}\]

\[N(k)=\text{rank}\,\mathscr{A}^{k-1}+\text{rank}\,\mathscr{A}^{k+1}-2\,\text{rank}\,\mathscr{A}^{k}\tag{21}\]
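A numpy sketch of formula (21): assemble a nilpotent matrix from strongly cyclic blocks of known sizes (my arbitrary choice: \(3,2,2,1\)) and recover the block counts from ranks alone.

```python
import numpy as np

def nil_block(n):
    """The nilpotent block J_n of formula (13): ones on the superdiagonal."""
    return np.eye(n, k=1)

# Hypothetical nilpotent matrix assembled from blocks of sizes 3, 2, 2, 1.
sizes = [3, 2, 2, 1]
n = sum(sizes)
A = np.zeros((n, n))
pos = 0
for s in sizes:
    A[pos:pos+s, pos:pos+s] = nil_block(s)
    pos += s

def rank_pow(k):
    return np.linalg.matrix_rank(np.linalg.matrix_power(A, k))

m = max(sizes)                        # nilpotency index = largest block size
for k in range(1, m + 1):
    N_k = rank_pow(k - 1) + rank_pow(k + 1) - 2 * rank_pow(k)   # formula (21)
    print(k, N_k)                     # prints N(1)=1, N(2)=2, N(3)=1
```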

4. Jordan Canonical Form and Its Computation

4.1 Jordan canonical form

Now return to the decomposition of the linear space \(V\) under a general linear transformation \(\mathscr{A}\). We know it decomposes into root spaces \(W_{\lambda_i}\) according to the eigenvalues, and on each root space the transformation \(\mathscr{A}-\lambda_i\mathscr{I}\) is nilpotent. Since the decomposition under a nilpotent transformation has also been completely solved, combining the two decompositions shows that the matrix of \(\mathscr{A}\) is similar to the following matrix. The diagonal carries the eigenvalues, each appearing as many times as its algebraic multiplicity, and subtracting the diagonal leaves exactly the decomposition of the corresponding nilpotent transformation.

\[A\sim\begin{bmatrix}J_{n_{11}}(\lambda_1)&&&&&&\\&\ddots&&&&&\\&&J_{n_{1k_1}}(\lambda_1)&&&&\\&&&\ddots&&&\\&&&&J_{n_{s1}}(\lambda_s)&&\\&&&&&\ddots&\\&&&&&&J_{n_{sk_s}}(\lambda_s)\end{bmatrix},\quad J_n(\lambda)=\begin{bmatrix}\lambda&1&&\\&\ddots&\ddots&\\&&\ddots&1\\&&&\lambda\end{bmatrix}_{n\times n}\]

This matrix is called the Jordan canonical form, and each block \(J_n(\lambda)\) is called a Jordan block. Note, however, that the complete decomposition into root spaces was carried out over an algebraically closed field (the complex field), so in general we can only say that every matrix over an algebraically closed field is similar to a Jordan canonical form. For a specific matrix, the condition can in fact be weakened to: the characteristic polynomial of the transformation splits over the field (decomposes completely into linear factors there).
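For orientation, sympy computes the Jordan canonical form directly. A minimal sketch on an arbitrary \(4\times 4\) example whose characteristic polynomial splits over the rationals (eigenvalues \(1,2,4,4\), with a \(2\times 2\) block for \(4\)):

```python
import sympy as sp

A = sp.Matrix([[ 5,  4,  2,  1],
               [ 0,  1, -1, -1],
               [-1, -1,  3,  0],
               [ 1,  1, -1,  2]])

P, J = A.jordan_form()          # sympy returns P and J with A = P*J*P^{-1}
print(J)
assert sp.simplify(P * J * P.inv() - A) == sp.zeros(4, 4)
```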

4.2 \(\lambda\)-matrices and elementary divisors

So how do we find the Jordan canonical form? We would first need all the eigenvalues, and then use formula (21) to determine the order of each Jordan block; the details are omitted. The computational cost of this method is high, so we need to study other approaches. From the conclusions so far we know that the Jordan canonical form is determined by the eigenvalues and the orders \(n_{ij}\) of the Jordan blocks, and these form a complete system of invariants under matrix similarity. To obtain the canonical form, we need to design a complete invariant that contains all of these parameters.

The closest candidate imaginable is the characteristic polynomial (10), which contains all eigenvalues and their algebraic multiplicities. To get a more complete set of parameters, we may as well turn our gaze to the source of the characteristic polynomial: the characteristic matrix \(\lambda I-A\). To this end we discuss the more general matrices whose entries lie in \(F[\lambda]\), the polynomial ring over a field \(F\); such a matrix \(A(\lambda)\) is called a \(\lambda\)-matrix. Rank and inverse can be defined for such matrices as well, except that the inverse exists only when the determinant is a nonzero constant.

\(\lambda\)-matrices also admit elementary transformations and elementary matrices \(P(i,j),\,P(i,j(\lambda)),\,P(i(k))\), but note that for an elementary transformation to be invertible, a row (column) may only be multiplied by a nonzero constant \(k\in F\). From abstract algebra we know that the polynomial ring over a field is a Euclidean ring, in which greatest common divisors (with leading coefficient \(1\)) and division with remainder are defined. Let \(d_1(\lambda)\) be the greatest common divisor of the nonzero entries of \(A(\lambda)\); one can prove that elementary transformations convert \(A(\lambda)\) to \(\begin{bmatrix}d_1(\lambda)&0\\0&A_1(\lambda)\end{bmatrix}\), where the entries of \(A_1(\lambda)\) are multiples of \(d_1(\lambda)\). Continuing this process converts \(A(\lambda)\) to the following Smith canonical form, where \(r\) is obviously the rank of \(A(\lambda)\).

\[\begin{bmatrix}d_1(\lambda)&&&\\&\ddots&&\\&&d_r(\lambda)&\\&&&0\end{bmatrix},\quad d_i(\lambda)\mid d_{i+1}(\lambda)\tag{22}\]

Analogously to ordinary matrices, two \(\lambda\)-matrices that can be transformed into each other by elementary transformations are called equivalent, so every \(\lambda\)-matrix is equivalent to a matrix of the form (22). Moreover, an invertible \(\lambda\)-matrix is clearly equivalent to the identity matrix, i.e. it decomposes into a product of elementary matrices; hence \(A(\lambda)\) and \(B(\lambda)\) being equivalent amounts to: there exist invertible \(\lambda\)-matrices \(P(\lambda),Q(\lambda)\) such that formula (23) holds.

\[B(\lambda)=P(\lambda)A(\lambda)Q(\lambda)\tag{23}\]

For equivalence of ordinary matrices, the rank \(r\) fully determines an equivalence class, i.e. it is a complete system of invariants. Since elementary transformations do not change the greatest common divisors of the minors, the \(d_i(\lambda)\) in formula (22) are in fact uniquely determined; they are a complete system of invariants under equivalence, called the invariant factors of the \(\lambda\)-matrix. Now return to the characteristic matrix \(A(\lambda)=\lambda I-A\), where \(A\) is an \(n\)-order matrix over an algebraically closed field \(F\). Since its determinant \(\varphi(\lambda)\) is nonzero, \(A(\lambda)\) has full rank; and since elementary transformations change the determinant only by nonzero constant factors, formula (24) holds.

\[\varphi(\lambda)=d_1(\lambda)d_2(\lambda)\cdots d_n(\lambda),\quad d_i(\lambda)=(\lambda-\lambda_1)^{e_{i1}}\cdots(\lambda-\lambda_s)^{e_{is}}\tag{24}\]

First, the left part of (25) obviously holds (\(t_i\) is the algebraic multiplicity of \(\lambda_i\)), and \(d_i(\lambda)\mid d_{i+1}(\lambda)\) gives the right part of (25). Since the \(d_i(\lambda)\) form a complete system of invariants, all the factors \((\lambda-\lambda_j)^{e_{ij}}\) are completely determined; those not equal to \(1\) are called the elementary divisors of the characteristic matrix. Clearly the set of all elementary divisors is also a complete system of invariants of the characteristic matrix, called the system of elementary divisors.

\[t_i=e_{1i}+e_{2i}+\cdots+e_{ni},\quad e_{1i}\leqslant e_{2i}\leqslant\cdots\leqslant e_{ni}\tag{25}\]
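A sympy sketch of the invariant factors of \(\lambda I-A\), computed from determinantal divisors (\(D_k\) is the gcd of all \(k\times k\) minors, and \(d_k=D_k/D_{k-1}\)); the helper `invariant_factors` and the block-diagonal test matrix are my own hypothetical constructions.

```python
import sympy as sp
from functools import reduce
from itertools import combinations

lam = sp.symbols('lambda')

def invariant_factors(A):
    """Invariant factors d_1 | d_2 | ... | d_n of lambda*I - A, via the
    determinantal divisors D_k = gcd of all k x k minors, d_k = D_k / D_{k-1}."""
    n = A.shape[0]
    M = lam * sp.eye(n) - A
    D_prev = sp.Integer(1)
    result = []
    for k in range(1, n + 1):
        minors = [M[list(rows), list(cols)].det()
                  for rows in combinations(range(n), k)
                  for cols in combinations(range(n), k)]
        D_k = reduce(sp.gcd, minors)
        result.append(sp.cancel(D_k / D_prev))
        D_prev = D_k
    return result

# Hypothetical example built from Jordan blocks J_2(3), J_1(3), J_1(5).
A = sp.Matrix([[3, 1, 0, 0],
               [0, 3, 0, 0],
               [0, 0, 3, 0],
               [0, 0, 0, 5]])
print([sp.factor(d) for d in invariant_factors(A)])
# expected: [1, 1, lambda - 3, (lambda - 3)**2*(lambda - 5)]
# elementary divisors (lambda-3)^2, (lambda-3), (lambda-5) match the Jordan blocks
```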

4.3 Similarity and equivalence

By now you may see the light: what do the elementary divisors have to do with the Jordan blocks? Do they correspond one to one? We have spent so much effort on elementary divisors for a purpose, of course. As you might expect, there is indeed a correspondence between them, and we need two conclusions to establish it.

First let us see what the elementary divisors of the Jordan canonical form are; the discussion needs only a few simple elementary transformations, and the details are omitted. The first step is to prove that the only elementary divisor of the Jordan block \(J_n(\lambda_0)\) is \((\lambda-\lambda_0)^n\); the second step proves that the elementary divisors of a block diagonal matrix \(\begin{bmatrix}A&\\&B\end{bmatrix}\) are the union of the elementary divisors of \(A\) and \(B\); the third step then deduces that the elementary divisors of the Jordan canonical form above are exactly those of all its Jordan blocks.

To use this for finding the Jordan canonical form \(J\) of a matrix \(A\), we would need the elementary divisors of \(J\). But we only have \(A\) and know it is similar to \(J\), so you naturally ask: what is the relation between the elementary divisors of \(\lambda I-A\) and those of \(\lambda I-J\)? More generally, suppose \(A\sim B\), i.e. there is an invertible matrix \(P\) with \(A=PBP^{-1}\). Then \(\lambda I-A=\lambda I-PBP^{-1}=P(\lambda I-B)P^{-1}\), so \(\lambda I-A\) and \(\lambda I-B\) are equivalent. This shows that the characteristic matrices of similar matrices are equivalent and have the same elementary divisors. This conclusion settles the problem of finding the Jordan canonical form of \(A\): it is reduced to finding the elementary divisors of \(\lambda I-A\).

In fact, if \(\lambda I-A\) and \(\lambda I-B\) are equivalent, their elementary divisors coincide, so \(A\) and \(B\) have the same Jordan canonical form and hence \(A\sim B\). Therefore similarity of matrices is equivalent to equivalence of their characteristic matrices, and the elementary divisors are a complete system of invariants for both similarity and equivalence. Here we introduce one method of proving the necessity, whose steps also yield the transition matrix. Suppose there are invertible \(\lambda\)-matrices \(P(\lambda),Q(\lambda)\) with \(\lambda I-A=P(\lambda)(\lambda I-B)Q(\lambda)\), i.e. \((\lambda I-A)Q^{-1}(\lambda)=P(\lambda)(\lambda I-B)=\lambda P(\lambda)-P(\lambda)B\). By the conclusion drawn from formula (1), substituting \(A\) yields equation (26), which proves \(A\sim B\) with transition matrix \(P(A)\).

\[A=P(A)\,B\,P^{-1}(A)\tag{26}\]

   Exercise: Show that a complex square matrix \(A\) is similar to its transpose \(A'\), and find the transition matrix;

   use the Jordan canonical form to find the minimal polynomial of a complex square matrix.

5. Canonical Forms of Real Matrices

Real matrices are, relatively speaking, more commonly used; although a real matrix need not have a Jordan canonical form over the reals, we can still obtain some useful conclusions. Of course, a real matrix is just a special complex matrix, and making full use of the conclusions about complex matrices simplifies the discussion considerably. First consider two real square matrices \(A,B\) that are similar over the complex field: there are real square matrices \(P,Q\) such that the left side of (27) holds; expanding and comparing parts gives \(AP=PB\) and \(AQ=QB\), and hence the right side holds.

\[A=(P+iQ)\,B\,(P+iQ)^{-1}\quad\leftrightarrow\quad A(P+\lambda Q)=(P+\lambda Q)B\tag{27}\]

Set \(\varphi(\lambda)=|P+\lambda Q|\); since \(\varphi(i)\ne 0\), \(\varphi(\lambda)\) is not the zero polynomial. Hence there must be a real number \(\lambda_0\) with \(\varphi(\lambda_0)\ne 0\), so \(P+\lambda_0Q\) is invertible and formula (28) holds. This shows that \(A,B\) are similar over the reals; conversely, if \(A,B\) are similar over the reals they are of course similar over the complexes, so real similarity and complex similarity of real matrices are equivalent. This conclusion tells us that to discuss a "canonical form" for real square matrices, we need only find the "canonical" real square matrices similar to the Jordan canonical forms.

\[A=(P+\lambda_0Q)\,B\,(P+\lambda_0Q)^{-1}\tag{28}\]

We know that a real polynomial factors over the real field into factors of degree at most two, so the elementary divisors of a real matrix over the real field have the form \((\lambda-\lambda_0)^n\) or \((\lambda^2+a\lambda+b)^n\). The latter appears over the complex field as a conjugate pair of elementary divisors \((\lambda-\lambda_0)^n,(\lambda-\bar{\lambda}_0)^n\). To merge such a pair back into the real elementary divisor \((\lambda^2+a\lambda+b)^n\), we naturally consider merging the Jordan blocks of \((\lambda-\lambda_0)^n,(\lambda-\bar{\lambda}_0)^n\), that is, taking \(A=\begin{bmatrix}J_n(\lambda_0)&\\&J_n(\bar{\lambda}_0)\end{bmatrix}\) as the complex matrix whose real counterpart we seek.

As shown in formula (29), the Jordan block in fact has another well-structured similar matrix, which makes the elementary manipulations very convenient. It turns the discussion of a real matrix similar to \(A\) into the corresponding discussion for \(B=\begin{bmatrix}\lambda_0&\\&\bar{\lambda}_0\end{bmatrix}\).

\[\begin{bmatrix}1&&&\\&\lambda^{-1}&&\\&&\ddots&\\&&&\lambda^{-(n-1)}\end{bmatrix}\begin{bmatrix}\lambda&1&&\\&\ddots&\ddots&\\&&\ddots&1\\&&&\lambda\end{bmatrix}\begin{bmatrix}1&&&\\&\lambda&&\\&&\ddots&\\&&&\lambda^{n-1}\end{bmatrix}=\lambda\begin{bmatrix}1&1&&\\&\ddots&\ddots&\\&&\ddots&1\\&&&1\end{bmatrix}=\lambda M_n\tag{29}\]
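A quick symbolic verification of identity (29) with sympy, for the arbitrary size \(n=4\) (assuming \(\lambda\ne 0\)):

```python
import sympy as sp

lam = sp.symbols('lambda', nonzero=True)
n = 4
# J_n(lambda) and the diagonal scaling D = diag(1, lambda, ..., lambda^{n-1})
J = sp.Matrix(n, n, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))
D = sp.diag(*[lam**k for k in range(n)])
M = sp.simplify(D.inv() * J * D / lam)
print(M)     # ones on the diagonal and superdiagonal: the M_n of formula (29)
```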

Writing \(\lambda_0=\rho(\cos\theta+i\sin\theta)\), the elementary divisor of \(B\) is \(\lambda^2-2\rho\cos\theta\,\lambda+\rho^2=(\lambda-\rho\cos\theta)^2+(\rho\sin\theta)^2\), and it is easy to construct a real matrix with the same elementary divisor: \(C=\begin{bmatrix}\rho\cos\theta&\rho\sin\theta\\-\rho\sin\theta&\rho\cos\theta\end{bmatrix}\). Hence \(B\sim C\), from which we obtain the real matrix (30) similar to \(A\), and finally the canonical form of a real square matrix.

\[\begin{bmatrix}J_n(\lambda_0)&\\&J_n(\bar{\lambda}_0)\end{bmatrix}\sim\rho\begin{bmatrix}\cos\theta\,M_n&\sin\theta\,M_n\\-\sin\theta\,M_n&\cos\theta\,M_n\end{bmatrix}\tag{30}\]
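A minimal numerical check of (30) in the case \(n=1\): the real block \(C\) has eigenvalues \(\rho e^{\pm i\theta}\), i.e. \(\lambda_0\) and its conjugate. The values of \(\rho,\theta\) are arbitrary example choices.

```python
import numpy as np

rho, theta = 2.0, 0.6
C = rho * np.array([[ np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
print(np.sort_complex(np.linalg.eigvals(C)))                    # eigenvalues of C
print(np.sort_complex(np.array([rho * np.exp( 1j * theta),
                                rho * np.exp(-1j * theta)])))   # rho*e^{+-i*theta}: same values
```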

"Linear algebra" 06-jordan standard type

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.