"Linear algebra" 07-linear function


1. Linear functions

1.1 \(k\)-linear functions

The preceding articles discussed linear spaces in a purely algebraic sense, but in practical settings we often need to measure vectors. A measurement generally takes the form of a function of vectors: for example, a determinant can be viewed as a function of its \(n\) row (or column) vectors, and each entry of a matrix product is a function of one row vector and one column vector. Formally, for a linear space \(V\) over a field \(F\), a mapping \(V\times\cdots\times V\mapsto F\) (with \(k\) copies of \(V\)) is called a \(k\)-ary function on \(V\), generally written \(f(\xi_1,\cdots,\xi_k)\).

If the function satisfies the linearity condition (1) in each variable \(\xi_i\), it is called a \(k\)-linear (multilinear) function on \(V\). By definition it is easy to see that once a basis of \(V\) is chosen, a \(k\)-linear function is uniquely determined by its values as \(\xi_1,\cdots,\xi_k\) each range over the basis vectors. In particular, a \(k\)-linear function on an \(n\)-dimensional linear space is completely determined by \(n^k\) values. All \(k\)-linear functions on \(V\) form a linear space over \(F\); the precise definition is left to the reader.

\[f(\cdots,\xi_{i-1},k_1\alpha+k_2\beta,\xi_{i+1},\cdots)=k_1f(\cdots,\xi_{i-1},\alpha,\xi_{i+1},\cdots)+k_2f(\cdots,\xi_{i-1},\beta,\xi_{i+1},\cdots)\tag{1}\]
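
To make (1) concrete, here is a minimal numerical sketch (assuming NumPy; the helper `det_with_row` is ad hoc for this check) verifying that the determinant is linear in a single row:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
a, b = rng.standard_normal(3), rng.standard_normal(3)
k1, k2 = 2.0, -0.5

def det_with_row(M, i, row):
    """Determinant of M with its i-th row replaced by `row`."""
    N = M.copy()
    N[i] = row
    return np.linalg.det(N)

i = 1  # check linearity in the second row; the other rows are held fixed
lhs = det_with_row(M, i, k1 * a + k2 * b)
rhs = k1 * det_with_row(M, i, a) + k2 * det_with_row(M, i, b)
assert np.isclose(lhs, rhs)
```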

The determinant and the row-by-column product above are obviously multilinear functions. Observing these two examples, we find another property worth discussing: the effect on the function value of exchanging the positions of two variables \(\xi_i,\xi_j\). We discuss only the most typical cases. A function satisfying (2) for all vectors is called a symmetric multilinear function, while one satisfying (3) is called an antisymmetric multilinear function; both cases are common. It is easy to prove that the variables of a symmetric multilinear function can be reordered arbitrarily without affecting the value of the function.

\[f(\cdots,\xi_i,\cdots,\xi_j,\cdots)=f(\cdots,\xi_j,\cdots,\xi_i,\cdots)\tag{2}\]

\[f(\cdots,\xi_i,\cdots,\xi_j,\cdots)=-f(\cdots,\xi_j,\cdots,\xi_i,\cdots)\tag{3}\]

For an antisymmetric multilinear function, if \(\xi_i=\xi_j\) then \(f(\cdots,\xi_i,\cdots,\xi_j,\cdots)=0\); consequently, adding a multiple of one variable to another does not change the value of the function. It also follows that if \(\xi_1,\cdots,\xi_k\) are linearly dependent, then \(f(\xi_1,\cdots,\xi_k)=0\). This property recalls the basic properties of the determinant. Indeed, choose a basis \(\varepsilon_1,\cdots,\varepsilon_n\) of \(V\) and set \(\xi_i=a_{i1}\varepsilon_1+\cdots+a_{in}\varepsilon_n\); expanding an antisymmetric \(n\)-linear function \(f(\xi_1,\cdots,\xi_n)\) by the definition yields formula (4).

\[f(\xi_1,\cdots,\xi_n)=\begin{vmatrix}a_{11}&\cdots&a_{1n}\\\vdots&\ddots&\vdots\\a_{n1}&\cdots&a_{nn}\end{vmatrix}f(\varepsilon_1,\cdots,\varepsilon_n)\tag{4}\]
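
As a sanity check of (4) (assuming NumPy), take \(f\) itself to be the determinant of the stacked rows, a concrete antisymmetric \(n\)-linear function; the identity then reduces to the multiplicativity of the determinant:

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.standard_normal((3, 3))  # rows: a basis eps_1..eps_3 of R^3
A = rng.standard_normal((3, 3))  # coordinate matrix (a_ij)
X = A @ E                        # rows: xi_i = sum_j a_ij * eps_j

f = np.linalg.det                # a concrete antisymmetric 3-linear function
assert np.isclose(f(X), np.linalg.det(A) * f(E))
```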

If \(f(\varepsilon_1,\cdots,\varepsilon_n)\ne 0\), the basis can be chosen suitably so that \(f(\varepsilon_1,\cdots,\varepsilon_n)=1\). Formula (4) then shows that the function is exactly the determinant of the coordinates of its \(n\) variables, and some textbooks take this as the definition of the determinant.

   Exercise: verify that when \(\xi_i=0\), \(f(\cdots,\xi_i,\cdots)=0\);

   Exercise: verify that for an antisymmetric multilinear function, the property for \(\xi_i=\xi_j\) conversely implies the defining identity (3), so the two are equivalent.

1.2 Linear functions and the dual space

When no confusion can arise, a \(1\)-linear function is simply called a linear function; it is in fact a linear mapping \(f:V\mapsto F\). From the earlier discussion we know that a linear function \(f(\alpha)\) is determined entirely by the images of a basis of \(V\). In particular, a linear function on an \(n\)-dimensional linear space with basis \(\{\alpha_1,\cdots,\alpha_n\}\) can be expressed by the following formula.

\[f(\alpha)=k_1f(\alpha_1)+\cdots+k_nf(\alpha_n),\quad \alpha=k_1\alpha_1+\cdots+k_n\alpha_n\tag{5}\]

That is, the vector \((f(\alpha_1),\cdots,f(\alpha_n))\) uniquely determines a linear function, so all linear functions \(V\to F\), written \(\text{Hom}\,(V,F)\), form an \(n\)-dimensional linear space \(V^*\). Hence \(V\cong V^*\), and it is natural to let \(\alpha_i\) correspond to the linear function \(f_i\) determined by \((0,\cdots,1,\cdots,0)\). The precise definition is formula (6), where \(\delta_{ij}\) is called the Kronecker delta; one can prove that \(f_1,\cdots,f_n\) is a basis of \(V^*\).

\[f_i(\alpha_j)=\delta_{ij}=\begin{cases}1&(i=j)\\0&(i\ne j)\end{cases}\tag{6}\]
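
A small sketch of the dual basis (assuming NumPy, with each \(f_i\) represented by its coefficient row \(c_i\), so that \(f_i(x)=c_i\cdot x\)): the condition \(f_i(\alpha_j)=\delta_{ij}\) says the matrix of the rows \(c_i\) is the inverse transpose of the matrix of the rows \(\alpha_j\):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))          # rows: a basis alpha_1..alpha_3
F = np.linalg.inv(B).T                   # rows: the dual basis f_1..f_3

assert np.allclose(F @ B.T, np.eye(3))   # f_i(alpha_j) = delta_ij
```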

Although the above mapping looks natural, it depends on the choice of basis \(\vec{\alpha}=(\alpha_1,\cdots,\alpha_n)\). In other words, if we switch to another basis \(\vec{\beta}=(\beta_1,\cdots,\beta_n)\), the mapping changes as well. Suppose the transition matrix from \(\vec{\alpha}\) to \(\vec{\beta}\) is \(A\), so that \(A\vec{\alpha}=\vec{\beta}\), or \(\vec{\alpha}=A^{-1}\vec{\beta}\). Let \(\vec{\alpha}\) map to \(\vec{f}=(f_1,\cdots,f_n)\) as defined above, and \(\vec{\beta}\) map to \(\vec{g}=(g_1,\cdots,g_n)\); we now compute the transition matrix \(B\) from \(\vec{f}\) to \(\vec{g}\).

To find the transition matrix \(B\) is to express each \(g_j\) linearly in terms of the \(f_i\); by the property in formula (6), this amounts to determining \(b_{ji}=g_j(\alpha_i)\). Let the entries of \(A^{-1}\) be \(a_{ij}\), so that \(\alpha_i=\sum\limits_k a_{ik}\beta_k\); substituting into \(b_{ji}\) and using the definition of \(g_j\) gives \(b_{ji}=a_{ij}\). Hence \(B'=A^{-1}\), which is formula (7); this verifies that the mapping defined by formula (6) differs under different bases.

\[B=(A^{-1})'\tag{7}\]
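
A numerical check of (7) under the same row-vector convention as above (assuming NumPy): if \(\vec{\beta}=A\vec{\alpha}\), the dual bases are related by \(B=(A^{-1})'\):

```python
import numpy as np

rng = np.random.default_rng(3)
Ma = rng.standard_normal((3, 3))  # rows: the basis alpha
A = rng.standard_normal((3, 3))   # transition matrix (invertible a.s.)
Mb = A @ Ma                       # rows: the basis beta = A alpha

Fa = np.linalg.inv(Ma).T          # rows: dual basis f of alpha
Fb = np.linalg.inv(Mb).T          # rows: dual basis g of beta
B = np.linalg.inv(A).T            # formula (7): B = (A^{-1})'
assert np.allclose(Fb, B @ Fa)
```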

But this formula suggests that if we likewise define linear functions \(V^*\mapsto F\), we obtain another linear space \(V^{**}=\text{Hom}\,(V^*,F)\). This time the transition matrix accompanying a change of basis is \(C=(B^{-1})'=A\); that is, no matter how \(\vec{\alpha}\) is chosen, the composite mapping \(V\mapsto V^{**}:\alpha\to f\to\alpha^{**}\) is always the same. We may therefore identify \(V\) with \(V^{**}\) completely and regard them as the same space, and \(V^*\mapsto V^{**}\) then amounts to a mapping \(V^*\mapsto V\), which is "symmetric" to \(V\mapsto V^*\). Under this identification, \(V,V^*\) are called dual spaces, and it is easy to verify that formula (8) holds.

\[\alpha(f)=f(\alpha)\tag{8}\]

2. Bilinear functions

2.1 Bilinear functions

A \(2\)-linear function is also called a bilinear function; it behaves as a metric operation on two vectors. Under such a measurement, the relationships among space vectors take on new forms and constraints, which gives the spatial structure more depth. From the earlier discussion, if \(\varepsilon_1,\cdots,\varepsilon_n\) is a basis of the \(n\)-dimensional space \(V\), then a bilinear function \(f(\alpha,\beta)\) is uniquely determined by the \(n^2\) values \(f(\varepsilon_i,\varepsilon_j)\). It is also easy to see that if the coordinates of the vectors \(\alpha,\beta\) are \(\tilde{x}=(x_1,\cdots,x_n),\ \tilde{y}=(y_1,\cdots,y_n)\), then formula (9) holds.

\[f(\alpha,\beta)=\tilde{x}A\tilde{y}',\quad a_{ij}=f(\varepsilon_i,\varepsilon_j)\tag{9}\]
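
A sketch of (9) (assuming NumPy; the diagonal form chosen for \(f\) is an arbitrary example): compute the metric matrix entry by entry from a concrete bilinear function on \(\Bbb{R}^3\), then compare \(f(\alpha,\beta)\) with \(\tilde{x}A\tilde{y}'\):

```python
import numpy as np

def f(u, v):
    """A concrete bilinear function on R^3 (coordinates in the standard basis)."""
    return u @ np.diag([1.0, 2.0, 3.0]) @ v

E = np.eye(3)                     # eps_i: the standard basis
A = np.array([[f(E[i], E[j]) for j in range(3)] for i in range(3)])

x = np.array([1.0, -1.0, 2.0])    # coordinates of alpha
y = np.array([0.5, 4.0, -3.0])    # coordinates of beta
assert np.isclose(f(x, y), x @ A @ y)
```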

Once a basis of the space is chosen, a bilinear function is represented by the matrix \(A\) in formula (9); conversely, under that basis any \(n\)-th order matrix \(A\) determines a bilinear function via formula (9). So under a fixed basis, the bilinear functions on an \(n\)-dimensional space are in one-to-one correspondence with the \(n\)-th order matrices, and this matrix is called the metric matrix of the function. Symmetric and antisymmetric bilinear functions satisfy (10) and (11) respectively, and their metric matrices are symmetric and antisymmetric matrices.

\[f(\alpha,\beta)=f(\beta,\alpha)\quad\rightarrow\quad A=A'\tag{10}\]

\[f(\alpha,\beta)=-f(\beta,\alpha)\quad\rightarrow\quad A=-A'\tag{11}\]

The study of bilinear functions on a finite-dimensional space can thus be turned into the study of metric matrices. So what exactly do we need to study? The first question is: what properties does the matrix itself have, viewed as a bilinear function? Bilinear functions create new relationships between vectors, and we need to know the automorphism classes of the linear space under such a relationship (i.e., the classification of bilinear functions). Most important of all: what effect does this relationship have on the structure of the space?

2.2 The rank of a bilinear function

A bilinear function contains two families of linear functions: fixing \(\alpha\) or fixing \(\beta\) turns \(f(\alpha,\beta)\) into the linear functions \(\alpha_L(\beta)\) and \(\beta_R(\alpha)\) respectively. As \(\alpha\) ranges over \(V\), the set of all \(\alpha_L\) is written \(W_L\). In a finite-dimensional space, taking \(\alpha\) with \(i\)-th coordinate \(1\) and all other coordinates \(0\), the coefficient row of \(\alpha_L\) is the \(i\)-th row of \(A\). This shows that \(W_L\) is the linear space spanned by these linear functions, of dimension equal to the rank of the matrix; the space \(W_R\) formed by the \(\beta_R\) admits the same conclusion. The rank of the metric matrix is therefore also called the matrix rank of the bilinear function, written \(\text{rank}_m\,f\), and formula (12) holds.

\[\dim{(W_L)}=\dim{(W_R)}=\text{rank}_m\,f\tag{12}\]

The sets of vectors with \(\alpha_L=0\) or \(\beta_R=0\) are called the left radical and the right radical of the bilinear function, written \(\text{rad}_L\,f\) and \(\text{rad}_R\,f\). It is easy to prove that both radicals are linear subspaces, and by formula (9) they are the solution sets of the systems \(\tilde{x}A=0\) and \(A\tilde{y}'=0\) respectively. If the \(n\)-th order matrix \(A\) has rank \(r\), then the left and right radicals both have dimension \(n-r\), and formula (13) holds.

\[\dim{(\text{rad}_L\,f)}=\dim{(\text{rad}_R\,f)}=\dim{V}-\text{rank}_m\,f\tag{13}\]
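
A sketch of (13) (assuming SciPy for the null-space computation; the rank-2 matrix below is an arbitrary example): the left and right radicals are the solution spaces of \(\tilde{x}A=0\) and \(A\tilde{y}'=0\):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # = 2 * row 1, so rank(A) = 2
              [0.0, 1.0, 1.0]])

left = null_space(A.T)            # xA = 0  <=>  A'x' = 0
right = null_space(A)             # Ay' = 0
r = np.linalg.matrix_rank(A)
assert left.shape[1] == right.shape[1] == A.shape[0] - r  # both = 1
```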

When the left and right radicals are both \(0\), \(\alpha\ne\beta\) implies \(\alpha_L\ne\beta_L\) and \(\alpha_R\ne\beta_R\). In a finite-dimensional space \(A\) is then full-rank, and formula (12) shows that \(\alpha_L\) and \(\beta_R\) range over all linear functions on the \(n\)-dimensional space; that is, formula (14) holds. Such bilinear functions are called non-degenerate, and otherwise degenerate.

\[W_L=W_R=V^*\tag{14}\]

We naturally have another question: for a degenerate bilinear function, what is the relationship between \(W_L\) and \(W_R\)? When do they coincide? The space \(W=W_L+W_R\) is called the rank space of the bilinear function, and its dimension is called the rank of the bilinear function, written \(\text{rank}\,f\). Clearly \(\text{rank}_m\,f\leqslant\text{rank}\,f\), with equality if and only if \(W_L=W_R\), and it is easy to verify that symmetric and antisymmetric bilinear functions satisfy this condition.

2.3 Congruent matrices

Now we continue to study the structure of bilinear functions on a finite-dimensional space, and the classification problem. If there exist vectors \(\alpha,\beta\) at which bilinear functions \(f,g\) satisfy \(f(\alpha,\beta)\ne g(\alpha,\beta)\), they are of course "different" bilinear functions. However, if there is a bijection \(\varphi\) as in (15) below, then \(f,g\) are clearly "isomorphic": they can be regarded as the same bilinear function, and are said to be congruent. Choosing a basis of the finite-dimensional space, if the metric matrices \(A,B\) of \(f,g\) are not the same, the functions are naturally different; but under what circumstances are they congruent? Congruent bilinear functions form an equivalence class, and the answer to this question is precisely the classification of bilinear functions.

\[\varphi:\;V\mapsto V,\quad f(\alpha,\beta)=g(\varphi(\alpha),\varphi(\beta))\tag{15}\]

In a finite-dimensional space, suppose \(f,g\) are congruent; take a basis \(\tilde{\alpha}=\{\alpha_1,\cdots,\alpha_n\}\) and let its image be \(\tilde{\beta}=\{\beta_1,\cdots,\beta_n\}\). By formula (15), the metric matrix of \(f\) under \(\tilde{\alpha}\) equals the metric matrix of \(g\) under \(\tilde{\beta}\); call this matrix \(A\), and let the metric matrix of \(g\) under \(\tilde{\alpha}\) be \(B\). With \(P\) the transition matrix between \(\tilde{\alpha}\) and \(\tilde{\beta}\), formula (9) shows that the metric matrix of \(g\) under \(\tilde{\alpha}\) is \(PAP'\), so formula (16) holds.

\[A\cong B\quad\leftrightarrow\quad B=PAP',\quad |P|\ne 0\tag{16}\]
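
A quick illustration of (16) (assuming NumPy; random matrices are invertible and full-rank almost surely): congruence \(B=PAP'\) preserves the rank, the invariant mentioned below:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))   # metric matrix of f under some basis
P = rng.standard_normal((3, 3))   # transition matrix, |P| != 0 a.s.
B = P @ A @ P.T                   # congruent metric matrix

assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
```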

Square matrices \(A,B\) satisfying formula (16) are called congruent matrices. Conversely, if the metric matrices of \(f,g\) under \(\tilde{\alpha}\) are congruent, then the bijection with \(P\) as its transition matrix obviously satisfies formula (15), so \(f,g\) are congruent. This shows that the necessary and sufficient condition for two bilinear functions on a finite-dimensional space to be congruent is that their metric matrices under one and the same basis be congruent. Obviously the rank of the matrix and the rank of the bilinear function are invariants under congruence.

To classify bilinear functions is to classify congruent matrices, so we need a "canonical form" to serve as the basis of classification. Recalling the canonical forms of similarity transformations, such a form has a simple shape and splits the space into small "unrelated" subspaces. For linear transformations, invariant subspaces served as the "unrelated" pieces of the decomposition; what can play that role for a metric matrix? Intuition tells us that the relation between \(\alpha,\beta\) determined by the condition \(f(\alpha,\beta)=0\) will be a good "unrelated" reference standard.

2.4 Orthogonal vectors

But before that, one more detail must be handled: it may happen that \(f(\alpha,\beta)\ne f(\beta,\alpha)\), whereas being "unrelated" should in general be symmetric. We therefore impose a restriction on \(f\): \(f(\alpha,\beta)=0\) if and only if \(f(\beta,\alpha)=0\). Under this restriction, vectors \(\alpha,\beta\) satisfying \(f(\alpha,\beta)=0\) are defined to be orthogonal, written \(\alpha\perp\beta\). It is easy to prove that the vectors orthogonal to \(\alpha\), or to every element of a set \(W\), form a subspace, called the orthogonal complement and written \(\alpha^{\perp},W^{\perp}\).

Under this restriction the left and right radicals of the bilinear function coincide (both equal \(V^{\perp}\)); this common space is called the radical of the bilinear function and is written \(\text{rad}\,f\). For any subspace \(W\), the radical of \(f\) restricted to \(W\) may be abbreviated \(\text{rad}\,W\), and it is easy to verify that formula (17) holds. In a finite-dimensional space, if \(f\) is non-degenerate, the theory of systems of linear equations readily establishes formula (18). If \(f|_W\) is also non-degenerate, combining (17) and (18) yields formula (19).

\[\text{rad}\,W=W\cap W^{\perp}\tag{17}\]

\[\dim{W}+\dim{W^{\perp}}=\dim{V},\quad (W^{\perp})^{\perp}=W\tag{18}\]

\[W\oplus W^{\perp}=V\tag{19}\]
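
A sketch of the dimension formula in (18) (assuming SciPy; the metric matrix and subspace below are arbitrary examples): for non-degenerate \(f\), the orthogonal complement of \(W\) is the solution space of \(W_0A\tilde{y}'=0\), where the rows of \(W_0\) span \(W\):

```python
import numpy as np
from scipy.linalg import null_space

A = np.diag([1.0, 2.0, -1.0, 3.0])      # non-degenerate symmetric metric, n = 4
W0 = np.array([[1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0]])   # rows span a 2-dimensional W

Wperp = null_space(W0 @ A)              # columns: a basis of W^perp
assert Wperp.shape[1] == 4 - 2          # dim W + dim W^perp = dim V
```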

3. Canonical forms

3.1 Direct sum decomposition of bilinear functions

Now let us look at the properties of a bilinear function satisfying the restriction. For arbitrary vectors, examine \(\varepsilon=f(\alpha,\gamma)\beta-f(\beta,\gamma)\alpha\); obviously \(f(\varepsilon,\gamma)=0\). By the restriction \(f(\gamma,\varepsilon)=0\) as well, which is formula (20), and taking \(\alpha=\gamma\) yields formula (21). From (21) we see that if \(f(\alpha,\alpha)\ne 0\), then \(\alpha\) commutes with every vector under the bilinear function. Conversely, if \(\alpha,\beta\) do not commute, then \(f(\alpha,\alpha)=f(\beta,\beta)=0\).

\[f(\alpha,\gamma)f(\gamma,\beta)-f(\gamma,\alpha)f(\beta,\gamma)=0\tag{20}\]

\[f(\alpha,\alpha)(f(\alpha,\beta)-f(\beta,\alpha))=0\tag{21}\]

Assume both a \(\gamma\) with \(f(\gamma,\gamma)\ne 0\) and a pair with \(f(\alpha,\beta)\ne f(\beta,\alpha)\) exist. Applying formula (20) with \(\gamma\) in the role of the first variable and using its commutativity gives \(f(\gamma,\alpha)(f(\alpha,\beta)-f(\beta,\alpha))=0\), hence \(f(\gamma,\alpha)=f(\gamma,\beta)=0\). Since \(\alpha,\beta\) do not commute, it follows easily that \(f(\alpha,\beta+\gamma)\ne f(\beta+\gamma,\alpha)\), and formula (21) then gives \(f(\beta+\gamma,\beta+\gamma)=0\). But expanding shows \(f(\beta+\gamma,\beta+\gamma)=f(\gamma,\gamma)\ne 0\), a contradiction.

The contradiction shows that either \(f(\gamma,\gamma)=0\) holds identically, or \(f(\alpha,\beta)=f(\beta,\alpha)\) holds identically. In the former case, taking \(\gamma=\alpha+\beta\) gives \(f(\alpha,\beta)=-f(\beta,\alpha)\), so that condition is easily seen to be equivalent to \(f\) being antisymmetric; the latter says \(f\) is symmetric. Combining the two points: a bilinear function satisfying the orthogonality restriction (\(f(\alpha,\beta)=0\) if and only if \(f(\beta,\alpha)=0\)) is either a symmetric function or an antisymmetric function.

So the conclusions about the orthogonality relation of vectors can be obtained by studying only symmetric (antisymmetric) bilinear functions. But does the study of these two kinds of functions benefit the classification of bilinear functions in general? The answer is yes. In fact, defining two bilinear functions as in formula (22), it is easy to see that \(g,h\) are respectively a symmetric and an antisymmetric bilinear function, and \(f=g+h\). Moreover, solving the system (22) in reverse shows that \(g,h\) are unique. The conclusion: any bilinear function decomposes uniquely into the sum of a symmetric bilinear function and an antisymmetric bilinear function.

\[g(\alpha,\beta)=\frac{1}{2}(f(\alpha,\beta)+f(\beta,\alpha)),\quad h(\alpha,\beta)=\frac{1}{2}(f(\alpha,\beta)-f(\beta,\alpha))\tag{22}\]
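
At the level of metric matrices, (22) is the familiar split of a square matrix into symmetric and skew-symmetric parts; a minimal sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))   # metric matrix of f
G = (A + A.T) / 2                 # metric matrix of g: symmetric
H = (A - A.T) / 2                 # metric matrix of h: antisymmetric

assert np.allclose(G, G.T) and np.allclose(H, -H.T)
assert np.allclose(A, G + H)      # f = g + h
```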

It is easy to verify that the symmetric and the antisymmetric bilinear functions each form a subspace, written \(S_2(V)\) and \(A_2(V)\), of the space \(T_2(V)\) of all bilinear functions. The conclusion above shows \(T_2(V)=S_2(V)+A_2(V)\); when the characteristic of the field is not \(2\), also \(S_2(V)\cap A_2(V)=0\), so formula (23) holds. In that case the classification problem of bilinear functions is converted into the classification problems of \(S_2(V)\) and \(A_2(V)\). Unless otherwise stated, the fields discussed from now on satisfy \(\text{char}\,F\ne 2\). When \(\text{char}\,F=2\), the canonical form of a bilinear function fuses the two types of canonical forms; the reader may work this out independently.

\[T_2(V)=S_2(V)\oplus A_2(V),\quad (\text{char}\,F\ne 2)\tag{23}\]

3.2 Congruence canonical form of symmetric matrices

First let us see whether symmetric bilinear functions on a finite-dimensional space have the "canonical form" we want; the main tool used is, of course, orthogonality. If \(f(\alpha,\alpha)=0\) for every vector, then \(f\) is the trivial zero function and needs no further discussion. Otherwise take any \(\alpha\) with \(f(\alpha,\alpha)\ne 0\) and write \(W=\left<\alpha\right>\). For an arbitrary vector \(\beta\), set \(\beta=k\alpha+\gamma\ (\gamma\in W^{\perp})\), where \(k\) is an undetermined coefficient. From \(f(\alpha,\gamma)=0\) the unique solution \(k=f(\alpha,\beta)/f(\alpha,\alpha)\) is obtained; that is, the space has the following direct sum decomposition.

\[V=W\oplus W^{\perp},\quad (W=\left<\alpha\right>)\tag{24}\]

Take a basis of \(W^{\perp}\) and, together with \(\alpha\), form a basis of \(V\). The analysis above tells us that under this basis the metric matrix of the bilinear function has the form \(\begin{bmatrix}f(\alpha,\alpha)&0\\0&B\end{bmatrix}\). By induction, the metric matrix of a symmetric bilinear function under a suitable basis is a diagonal matrix; in other words, any symmetric matrix \(A\) is congruent to a diagonal matrix (formula (25)), and this diagonal matrix is called a congruence canonical form of \(A\).

\[A=A'\quad\leftrightarrow\quad PAP'=\text{diag}\,\{d_1,\cdots,d_r,0,\cdots,0\},\quad (|P|\ne 0,\;r=\text{rank}\,A)\tag{25}\]
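
One concrete way to realize (25) over the reals (assuming NumPy; this route via the orthogonal eigenvector matrix is just one choice of \(P\), not the inductive construction above): for real symmetric \(A\), setting \(P=Q'\) with \(A=Q\,\text{diag}(w)\,Q'\) gives a congruence, which here happens to be a similarity as well:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # a real symmetric matrix
assert np.allclose(A, A.T)

w, Q = np.linalg.eigh(A)          # A = Q diag(w) Q', Q orthogonal
P = Q.T                           # |P| = +/-1, so P is invertible
assert np.allclose(P @ A @ P.T, np.diag(w))
```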

It is important to note that the congruence canonical form of a symmetric matrix is not unique: it depends on the choice of basis. Each diagonal element is \(d_i=f(\alpha_i,\alpha_i)\), and replacing \(\alpha_i\) by \(k\alpha_i\) produces the different diagonal element \(k^2d_i\). But this suggests that sharper conclusions hold over the real and complex fields. Over the reals, taking \(\dfrac{\alpha_i}{\sqrt{|d_i|}}\) and adjusting the order suitably yields the congruence canonical form (26) of a real symmetric matrix; over the complexes, taking \(\dfrac{\alpha_i}{\sqrt{d_i}}\) yields the congruence canonical form (27) of a complex symmetric matrix.

\[A=A'\quad\leftrightarrow\quad PAP'=\begin{bmatrix}I_p&&\\&-I_q&\\&&0_{n-p-q}\end{bmatrix},\quad (P,A\in\Bbb{R}_{n\times n})\tag{26}\]

\[A=A'\quad\leftrightarrow\quad PAP'=\begin{bmatrix}I_r&\\&0_{n-r}\end{bmatrix},\quad (P,A\in\Bbb{C}_{n\times n})\tag{27}\]

The congruence canonical form of a complex symmetric matrix is obviously unique, and \(r\) is a complete system of invariants; we now discuss the congruence canonical form of a real symmetric matrix. By formula (26) the space \(V\) has a direct sum decomposition \(V=V^+\oplus V^-\oplus V^{\perp}\), where \(V^+,V^-\) are the subspaces corresponding to the dimensions \(p,q\). If there is another decomposition \(V=V_0^+\oplus V_0^-\oplus V^{\perp}\), considering definiteness gives \(V_0^+\cap(V^-\oplus V^{\perp})=0\), whence \(\dim{(V_0^+)}\leqslant\dim{(V^+)}\). Likewise \(\dim{(V^+)}\leqslant\dim{(V_0^+)}\), so \(\dim{(V_0^+)}=\dim{(V^+)}\); that is, the \(p,q\) in formula (26) are unique and form a complete system of invariants of the congruence canonical form. This conclusion is known as the law of inertia.

If \(q=0\), it is easy to see that \(f\geqslant 0\) always holds; such a bilinear function or matrix is called positive semi-definite. If in addition \(p=n\), then \(f(\alpha,\alpha)=0\) if and only if \(\alpha=0\), and such a bilinear function or matrix is called positive definite. Negative semi-definite and negative definite are defined similarly. Clearly \(f\) is positive definite on \(V^+\) and negative definite on \(V^-\); for this reason \(p,q\) are also called the positive and negative inertia indices respectively.
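
A sketch of the inertia indices (assuming NumPy; the tolerance 1e-12 is an ad hoc cutoff): by the law of inertia, \(p\) and \(q\) equal the numbers of positive and negative eigenvalues of a real symmetric matrix, so positive definiteness is \(q=0,\ p=n\):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])  # eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)

w = np.linalg.eigvalsh(A)
p = int(np.sum(w > 1e-12))           # positive inertia index
q = int(np.sum(w < -1e-12))          # negative inertia index
assert (p, q) == (3, 0)              # q = 0 and p = n: positive definite
```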

3.3 Congruence canonical form of antisymmetric matrices

For an antisymmetric bilinear function, if \(f(\alpha,\beta)=0\) identically, then it is the trivial zero function and needs no discussion. Suppose then that \(f(\alpha_1,\alpha_2)\ne 0\); adjusting one vector by a suitable multiple makes the value \(1\). First, \(\alpha_1,\alpha_2\) are linearly independent; and since every \(f(\alpha,\alpha)=0\) here, we cannot split off the orthogonal complement of \(\left<\alpha_1\right>\) as we did for symmetric bilinear functions, but must consider \(W=\left<\alpha_1,\alpha_2\right>\) instead.

For arbitrary \(\beta\), set \(\beta=k_1\alpha_1+k_2\alpha_2+\gamma\ (\gamma\in W^{\perp})\); the conditions \(f(\alpha_i,\gamma)=0\) determine the \(k_i\), giving the direct sum decomposition (28). By an analysis similar to that for symmetric bilinear functions, under a suitable basis the metric matrix of an antisymmetric bilinear function has the form (29), called the congruence canonical form of an antisymmetric matrix. Formula (29) also makes it easy to see that the rank of an antisymmetric matrix must be even.

\[V=W\oplus W^{\perp},\quad (W=\left<\alpha_1,\alpha_2\right>)\tag{28}\]

\[A=-A'\quad\leftrightarrow\quad PAP'=\text{diag}\,\{D,\cdots,D,0,\cdots,0\},\quad (|P|\ne 0,\;D=\begin{bmatrix}0&1\\-1&0\end{bmatrix})\tag{29}\]
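
A quick numerical corollary of (29) (assuming NumPy): the rank of an antisymmetric matrix is even, since the canonical form consists of \(2\times 2\) blocks \(D\):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = M - M.T                       # a generic 5x5 antisymmetric matrix

r = np.linalg.matrix_rank(A)
assert np.allclose(A, -A.T) and r % 2 == 0   # here r = 4, the maximum even rank
```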

"Linear algebra" 07-linear function

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.