# Chapter 1: Matrices and Gaussian Elimination


1. Solving equations by elimination: reduce the matrix to upper triangular (or lower triangular) form.

Eliminating a system of 100 equations requires on the order of a third of a million multiplication-subtraction steps (about n³/3 for n = 100), and rounding error accumulates over that many operations.

2. Determinant method (as the matrix grows, the computational cost increases sharply).

The elimination method is the widely used method in practice.

From the rows: the equation viewpoint (the row picture).

From the columns: the vector viewpoint (the column picture).

Ax = b viewed as a linear combination, starting from the row vectors.

The plane equation ax + by + cz = d: at most two of a, b, c may be 0; a plane has only two dimensions.

An n-dimensional equation determines a 'plane' (hyperplane) of dimension n - 1.

Line equation: ax + by = 0.

For a system in n unknowns, each additional independent equation (a new 'plane' of dimension n - 1) reduces the dimension of the solution space by 1; to obtain a unique solution, at least n equations are needed.

Not independent: two lines parallel (or coincident), two planes parallel (or coincident), or the system has no solution (parallel, or neither of the above two cases). Note that a line is one-dimensional and a plane is two-dimensional; lines and planes define their own spaces.

Linear combination from the column-vector perspective: b is a combination of the columns of A.

Starting to think in terms of linear combinations of the columns is the starting point of the problem. Most of the examples below are 3×3 matrices.

"Singular situations":

Note that these no-solution cases correspond geometrically to planes that fail to meet; the solution of the equations should be viewed from both aspects (rows and columns).

Note the normal vector of the plane: the normal vector stays fixed while the value on the right side of the equation changes, which shifts the plane parallel to itself.

In two dimensions, parallel lines are the only case with no solution.

In three dimensions, the singular situations (note: this is the three-dimensional case; the figures below are viewed from the plane/row picture):

For (b): the three planes intersect pairwise, but the pairwise lines of intersection never meet, so the system has no solution. In fact, if the third plane were not parallel to the line of intersection of the first two, there would certainly be a common point and no failure of solvability.

For (c): elimination leads to 0 = 0, and there are infinitely many solutions, all lying on a straight line (note: in three dimensions the only infinite-solution cases are this one and three coincident planes). Case (c) arises from (b) when the third plane passes through the line of intersection of the other two.

From the angle of the column vector:

The case of no solution:

The three column vectors on the left lie in the same plane, while the vector on the right is not in that plane.

The case of infinite solution:

The three column vectors on the left lie in the same plane, and the vector on the right is within that plane.

To determine whether three vectors lie in the same plane: if Ax = 0 has a nonzero (not all 0) solution, then the three column vectors are coplanar.
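A minimal sketch of this test in plain Python (the function name `coplanar` is illustrative): run elimination on the matrix whose columns are the three vectors; fewer than three nonzero pivots means Ax = 0 has a nonzero solution, i.e. the vectors are coplanar.

```python
def coplanar(v1, v2, v3, tol=1e-12):
    # Columns of A are the three vectors; count the pivots found by elimination.
    A = [[v1[i], v2[i], v3[i]] for i in range(3)]
    pivots = 0
    row = 0
    for col in range(3):
        # find a row at or below `row` with a nonzero entry in this column
        p = next((r for r in range(row, 3) if abs(A[r][col]) > tol), None)
        if p is None:
            continue  # no pivot available in this column
        A[row], A[p] = A[p], A[row]
        for r in range(row + 1, 3):
            m = A[r][col] / A[row][col]
            for c in range(col, 3):
                A[r][c] -= m * A[row][c]
        pivots += 1
        row += 1
    # fewer than 3 pivots  =>  Ax = 0 has a nonzero solution  =>  coplanar
    return pivots < 3

print(coplanar((1, 0, 0), (0, 1, 0), (1, 1, 0)))  # all in the z = 0 plane -> True
print(coplanar((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # independent -> False
```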

Summary: scaled to n dimensions, if the n 'planes' (each of dimension n - 1) have no common point, or infinitely many common points (the equation view), then the n column vectors fall within a common 'plane': they determine at most an (n - 1)-dimensional space and never reach the nth dimension.

If the row picture has no solution, then the column picture naturally has no solution either; the two represent the same problem.

u + v + w = 2 (1)

2u + 3w = 5 (2)

3u + v + 4w = 6 (3)

3. The meaning of (1) + (2) = (3) (which here holds for the left sides):

Adding (1) and (2) gives 3u + v + 4w = 7: the left side matches (3) but the right side does not (7 != 6), so the system is singular and has no solution. If the right side of (3) were 7, the complete equations would satisfy (1) + (2) = (3), and the solutions would be infinite: any point satisfying both (1) and (2) must satisfy (1) + (2), hence (3). The intersection of planes (1) and (2) is a line, and that entire line lies in plane (3), so the solution set is a line of infinitely many points.
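A sketch of this elimination in plain Python (taking the system as u + v + w = 2, 2u + 3w = 5, 3u + v + 4w = b, so that the left sides really satisfy (1) + (2) = (3); the helper name `eliminate` is illustrative):

```python
from fractions import Fraction

def eliminate(aug):
    """Forward elimination on a 3x4 augmented matrix; returns the last row."""
    A = [[Fraction(x) for x in row] for row in aug]
    for col in range(2):
        # a nonzero pivot is assumed available here (true for this example)
        if A[col][col] == 0:
            A[col], A[col + 1] = A[col + 1], A[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            A[r] = [a - m * b for a, b in zip(A[r], A[col])]
    return A[2]

lhs = [[1, 1, 1], [2, 0, 3], [3, 1, 4]]   # rows satisfy (1) + (2) = (3)
print(eliminate([row + [b] for row, b in zip(lhs, [2, 5, 6])]))  # last row 0 0 0 | -1
print(eliminate([row + [b] for row, b in zip(lhs, [2, 5, 7])]))  # last row 0 0 0 | 0
```

With right side 6 the last row reads 0 = -1 (no solution); with 7 it reads 0 = 0 (infinitely many solutions along a line).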

Furthermore, whether the column vectors are coplanar has nothing to do with the right side of the equation; it is determined by the left side alone. If the left sides can be combined, by equality-preserving transformations, into a dependency such as (1) + (2) = (3), then the column vectors are coplanar. This condition is sufficient, but not necessarily necessary.

The other criterion for coplanarity (necessary and sufficient): set b = 0 in Ax = b; if Ax = 0 has a solution that is not all 0, the column vectors are coplanar.

If elimination produces a row 0 = a with a != 0:

In other words, the assumption that an intersection point exists was false from the start; it means there is no intersection. You can think of it as proof by contradiction.

[!!] The intersection is only a subset of the original sets; the sums and differences of two equations are conditions imposed on that intersection, not on the complete sets.

The line of intersection of two planes: (1) its direction is perpendicular to both normal vectors; (2) take any point on the line. But in the three-dimensional coordinate system a single linear equation describes a plane, so the line must be expressed by two planes; it cannot be expressed by one equation.

If (a, b) is a multiple of (c, d) (entries nonzero), then (a, c) must be a multiple of (b, d):

If (a, b) = n(c, d), then a = nc and b = nd, so a/b = c/d; writing w = a/b gives a = wb and c = wd, i.e. (a, c) = w(b, d).

Gaussian elimination element method:

In actual computation, you can drop the unknowns and work with the matrix (augmented) form alone.

The relationship between Gaussian elimination method and singular nature:

Unique solution: Non-singular

If a pivot position becomes 0: when the entries below it in that column are not all 0, the matrix may still be nonsingular, because rows can be exchanged and elimination continues after the exchange; if no exchange provides a nonzero pivot, the matrix is singular.
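A sketch of this pivot logic in plain Python (illustrative, square matrices only): when a pivot is zero, search below it for a row to exchange; if none exists, the matrix is singular.

```python
def is_nonsingular(A, tol=1e-12):
    """Forward elimination with row exchanges; True iff n nonzero pivots are found."""
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    for k in range(n):
        # if the pivot is (near) zero, look below for a row to exchange
        p = next((r for r in range(k, n) if abs(A[r][k]) > tol), None)
        if p is None:
            return False               # no nonzero pivot available: singular
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            A[r] = [a - m * b for a, b in zip(A[r], A[k])]
    return True

print(is_nonsingular([[0, 2], [3, 4]]))   # zero pivot fixed by an exchange -> True
print(is_nonsingular([[1, 2], [2, 4]]))   # second pivot unavoidably 0 -> False
```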

Non-singular and singular conditions:

The corresponding row picture in these graphs:

1. When the two lines are parallel, elimination produces a row 0 = x; if x = 0 the two lines coincide and there are infinitely many solutions, otherwise there is no solution.

When the row picture is transformed:

For the row picture, if the system has a unique solution and one more consistent row is added, elimination will produce a 0 = 0 result in the last line.

1.4 Matrix notation and matrix multiplication

m equations and n unknowns: an m × n matrix.

Vector: the column vector is the focus of this book.

In Ax = b, b is called the non-homogeneous term.

[!!!] Two forms of interpretation of matrix multiplication:

This form is important: it expresses Ax by the columns of A, with the entries of x as the coefficients.

a_ij: the element in row i, column j.

m by n: m rows, n columns.

x = (2, 5, 0) denotes a column vector, though it is written in the form of a row.

I: identity matrix (unit matrix)

E_ij: elementary matrix

E can be used as the elimination matrix in Gaussian elimination:

A second row of [-2 1 0] means that the new second row of the result is -2 times the first row plus the second row. Several such E's are needed to complete the elimination of a 3×3 matrix, because each E processes only one row at a time.
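A quick check in plain Python (the helper `matmul` is illustrative; A is a sample 3×3 matrix, not one fixed by the text): multiplying by the E whose second row is [-2 1 0] replaces row 2 of A with (row 2 - 2·row 1) and leaves the other rows alone.

```python
def matmul(X, Y):
    # plain triple-loop matrix multiplication
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, 0, 0],
     [-2, 1, 0],   # new row 2 = -2 * (row 1) + (row 2)
     [0, 0, 1]]
A = [[2, 1, 1],
     [4, -6, 0],
     [-2, 7, 2]]

print(matmul(E, A))  # row 2 becomes [0, -8, -2]; rows 1 and 3 are untouched
```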

[!!] Look at the result of multiplying several elimination matrices together — the result is remarkably neat.



[!!] Note that any column of AB is a combination of the columns of A: 1*(2,4) + 5*(3,0) = (17,4).

[!!] Or view each row of AB as a combination of the rows of B: 2*[1 2 0] + 3*[5 -1 0] = [17 1 0].
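The two combination views can be checked directly in plain Python (the matrices A = [2 3; 4 0] and B = [1 2 0; 5 -1 0] are inferred from the numbers above, so treat them as an assumed example):

```python
A = [[2, 3], [4, 0]]
B = [[1, 2, 0], [5, -1, 0]]

# column view: column j of AB is a combination of the columns of A
col0 = [1 * A[i][0] + 5 * A[i][1] for i in range(2)]   # 1*(2,4) + 5*(3,0)
print(col0)                                            # [17, 4]

# row view: row i of AB is a combination of the rows of B
row0 = [2 * B[0][j] + 3 * B[1][j] for j in range(3)]   # 2*[1 2 0] + 3*[5 -1 0]
print(row0)                                            # [17, 1, 0]
```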

Row-exchange matrix:

Matrix multiplication Operation Law:

Associative law and distributive law hold, but not the commutative law.

A matrix of several specific shapes:


The product of two upper triangular matrices is still upper triangular, and the product of two lower triangular matrices is still lower triangular.

[!!!] The third method of computing a matrix product:

A = [1/2 1/2; 1/2 1/2]: no matter how many times A is raised to a power, the result is A itself.

Flips the elements in the matrix from beginning to end:

Matrix block Multiplication:

The other matrix multiplication methods follow the same computational laws for blocks: each block can be treated as a single entity in the calculation, as long as the blocks are partitioned compatibly.

1.5 Triangular factors and row exchanges

Upper triangular matrix: upper triangular

Lower triangular matrix: lower triangular

[!!!] The inverse problem is derived from the Gaussian elimination element method:

The rule for computing L: each multiplier l_ij goes directly into position (i, j) below the diagonal.

L and U are the key to solving the problem Ax = b:

LUx = b

First factor A = LU, then solve Lc = b for c, then solve Ux = c for x.
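A sketch of the two triangular solves in plain Python (the function name `solve_lu` is illustrative, and L, U below are factors of a small 2×2 example chosen here, not taken from the text):

```python
def solve_lu(L, U, b):
    """Solve LUx = b: forward-substitute Lc = b, then back-substitute Ux = c."""
    n = len(b)
    # Lc = b  (L lower triangular with unit diagonal)
    c = [0] * n
    for i in range(n):
        c[i] = b[i] - sum(L[i][j] * c[j] for j in range(i))
    # Ux = c  (U upper triangular with nonzero pivots)
    x = [0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# A = [[2, 1], [6, 8]] factors as L = [[1, 0], [3, 1]], U = [[2, 1], [0, 5]]
L = [[1, 0], [3, 1]]
U = [[2, 1], [0, 5]]
print(solve_lu(L, U, [5, 25]))  # solves Ax = [5, 25] for A = LU
```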

A Good elimination Code:

The uniqueness of the decomposition:

If A = L1 D1 U1 = L2 D2 U2, where each U is an upper triangular matrix with unit diagonal,

each L is a lower triangular matrix with unit diagonal, and the diagonal entries of D are nonzero, then L1 = L2, D1 = D2,

U1 = U2.

For the permutation matrix P:

Note the notation for P: the subscripts of P indicate the two rows that P exchanges.

The envisioned uses of P:

When a pivot is close to 0, rows should also be exchanged, in order to reduce rounding error.

Note that during elimination, if rows are exchanged, the corresponding rows of the elimination record (the multipliers stored in L) need to be exchanged as well.

MATLAB code to exchange rows k and r (with r below k):

A([r k],:) = A([k r],:);            % exchange rows k and r of A

L([r k],1:k-1) = L([k r],1:k-1);    % exchange the multipliers already stored in L, so that

                                    % later elimination is based on the exchanged rows of A

P([r k],:) = P([k r],:);            % record the exchange in P

sign = -sign;                       % each row exchange flips the sign of the determinant

Note that what is actually exchanged is A; L moves along with A.

A Good question:

Note that for the time being, it remains in this form during the calculation

This corresponds to the question of why, in L, every multiplier l_ij is simply copied into the position of the zero it produced. For the product taken in the opposite order (GFE rather than the inverses in reverse), this no longer holds, and a counterexample is easy to construct — for instance, it is obvious that the last row is not [-1 -1 1]. Another example:

A = LDU: if the elements of A are symmetric about the main diagonal, then U = L^T (L and U are transposes of each other).

Tridiagonal matrices:

[!!] The LU decomposition function in MATLAB: [L,U,P] = lu(M), which satisfies P*M = L*U.

Cholesky decomposition: R = chol(A), where R is upper triangular and satisfies R'*R = A (A must be symmetric positive definite).

1.6 Inverses and transposes

The inverse does not exist when Ax = 0 for some x != 0: then A has no inverse.

For Ax = 0: if A is invertible, then x = 0 is the only solution.

Definition of Inverse:

The inverse exists if and only if elimination produces n nonzero pivots.

The inverse of each matrix is unique

If A is invertible, then the solution of Ax = b is unique.

Therefore an inverse exists exactly for nonsingular matrices; matrices without an inverse are singular.

The determinant can also determine whether a matrix is invertible:

MATLAB tests whether a matrix is invertible by looking for n nonzero pivots.

A diagonal matrix is invertible if and only if none of its diagonal entries is 0.

A=lu this:

Gauss-Jordan method for finding the inverse:

The inv(L) on the right is actually (assuming the simplest case, with no row exchanges):

U:

If A^-1 can be found, then Ax = b is solved in one step, but the author recommends using the A = LU form instead:

Note: when A requires row exchanges to reach U, by the rules of the previous section we obtain

PA = LU. The Gauss-Jordan method does not take this form (the two are not the same): when a row exchange is needed, it simply exchanges the entire row of the augmented matrix directly, and the result is still exactly correct.
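A compact sketch of Gauss-Jordan on [A | I] in plain Python (the function name is illustrative; it assumes the simplest case with nonzero pivots and no row exchanges, and uses exact fractions):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Reduce [A | I] to [I | A^-1]; assumes nonzero pivots (no row exchanges)."""
    n = len(A)
    # build the augmented matrix [A | I] with exact arithmetic
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        piv = M[k][k]                      # assumed nonzero in this sketch
        M[k] = [x / piv for x in M[k]]     # scale the pivot row: pivot becomes 1
        for r in range(n):
            if r != k:
                m = M[r][k]                # clear the rest of column k
                M[r] = [a - m * b for a, b in zip(M[r], M[k])]
    return [row[n:] for row in M]          # the right half is now A^-1

inv = gauss_jordan_inverse([[2, 1], [1, 1]])
print([[int(x) for x in row] for row in inv])  # [[1, -1], [-1, 2]]
```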

Invertibility: this book will examine row independence, column independence, nonzero determinants, nonzero eigenvalues, and other aspects of the relationship with the transpose.

A 1-sided inverse of a square matrix is automatically a 2-sided inverse.

1-sided inverse: a left-inverse or a right-inverse

In fact, the Gauss-jordan method obtains the left inverse of a:

P64

Because the row operations of elimination are applied by multiplying on the left side of A.

Proof that a left inverse equals a right inverse: p60.

Proof that invertibility is equivalent to n nonzero pivots: P64.

Transpose matrix:

Proof See P65

Symmetric matrices:

A = A^T; a symmetric matrix must be square.

The symmetric matrix is not necessarily reversible, and if reversible, its inverse must be symmetrical.

That is, R^T R is a symmetric matrix for any matrix R.

A + A^T is symmetric; A - A^T is skew-symmetric.

R^T R and R R^T are both symmetric matrices, but they are usually not equal.

Proof P66

For permutation matrices, one finds that P^T = P^-1.

[!!] That is, AB = AC does not imply B = C; such inequality is pervasive, not confined to a few special cases.

But note that A and B both being invertible does not mean that A + B is invertible.

If a matrix has a column of all zeros, it must be singular (seen from the column picture); similarly, if a pivot is 0 and cannot be repaired by an exchange, that column can be transformed into all zeros, so the matrix is singular.

[??] (a) and (c) should be correct.

Skew-symmetric:

In fact, this property of invertibility keeps emphasizing uniqueness: the inverse of a matrix must be unique, which is really another form of nonsingularity.

Inverting a 3×3 upper triangular matrix is also very simple:

If A is invertible, then A^T is also invertible:

[??] If A^T is invertible, is A invertible? (Yes: apply the statement above to A^T, since (A^T)^T = A.)

[!!] In MATLAB, \ is used to solve Ax = b: x = A\b. Here b acts as the numerator and A as the denominator, hence the left division \.

Transpose of a block matrix: transpose the layout of the blocks, and transpose each block.


The inverse of a lower triangular matrix is again lower triangular, and the product of two lower triangular matrices is lower triangular.

The inverse of a symmetric matrix is symmetric, but the product of two symmetric matrices is not necessarily symmetric.

The inverse of a diagonal matrix is again diagonal, and the product of two diagonal matrices is diagonal.

The transpose of a matrix cannot be achieved merely by exchanging rows or by exchanging columns.

1.7 Special matrices and applications

This section mainly shows that for triangular and band symmetric matrices the work of elimination is greatly simplified, using the example of a difference equation arising from a differential equation.
