Study Summary of Linear Algebra: "Linear Algebra and Its Applications"

Source: Internet
Author: User

This article is a personal learning record; it collects some mathematical concepts, definitions, and summaries of my own understanding, in the hope of sharing what I have learned.

Section I: Row Reduction and Echelon Forms
  1. Echelon form: the matrix obtained after row reduction (elimination).
  2. Reduced echelon form: an echelon form whose leading entries are all 1 (and are the only nonzero entries in their columns).
  3. The echelon form and the reduced echelon form are row equivalent to the original matrix.
  4. Span{v1, v2, ..., vp} is the collection of all vectors that can be written in the form c1*v1 + c2*v2 + ... + cp*vp with c1, ..., cp scalars.
  5. Ax = 0 has a nontrivial solution if and only if the equation has at least one free variable (i.e., A does not have full column rank).
  6. The solution set of Ax = b is a particular solution of Ax = b plus the general solution of Ax = 0.
  7. P54: the procedure for solving a system of linear equations (a small sketch follows this list).
  8. Linear independence means that no vector in the set can be written as a linear combination of the others.
  9. Ax = b: x1*a1 + x2*a2 + ... + xn*an = b, where a1, ..., an are the columns of A.
  10. Matrix transformations: T(x) = Ax is a linear transformation.
  11. The standard matrix of a linear transformation is built from the images of the standard basis vectors: A = [T(e1) T(e2) ... T(en)].
  12. A mapping T: R^n -> R^m is said to be onto R^m if each b in R^m is the image of at least one x in R^n (Ax = b always has a solution).
  13. A mapping T: R^n -> R^m is said to be one-to-one if each b in R^m is the image of at most one x in R^n.
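A minimal sketch (assuming Python with sympy; the matrix and right-hand side are made-up examples) of items 1-7: row reducing to reduced echelon form, spotting free variables, and combining a particular solution of Ax = b with the null space of A.

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 4, 0],
                [3, 6, 1]])          # rank 2, so one free variable
    b = Matrix([3, 2, 5])

    # Reduced echelon form of the augmented matrix [A | b] (items 1-2)
    rref_matrix, pivot_cols = A.row_join(b).rref()
    print(rref_matrix)               # leading entries are 1
    print(pivot_cols)                # columns without a pivot correspond to free variables

    # Item 5: Ax = 0 has a nontrivial solution iff there is a free variable
    print(A.nullspace())             # basis of Nul A (the "special solutions")

    # Item 6: general solution of Ax = b = particular solution + Nul A
    print(A.gauss_jordan_solve(b))   # (solution with free parameters, parameter symbols)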
Section II: Matrix Operations
    1. Each column of AB is a linear combination of the columns of A using weights from the corresponding column of B: AB = A[b1 b2 ... bp] = [Ab1 Ab2 ... Abp] (see the sketch after this list).
    2. Each row of AB is a linear combination of the rows of B using weights from the corresponding row of A.
    3. Warnings: in general AB != BA; AB = AC does not imply B = C; AB = 0 does not imply A = 0 or B = 0.
    4. Definition of the inverse matrix: A^-1 * A = A * A^-1 = I. From this one can deduce that A must be square; see Exercises 23-25 of Section 2.1. The necessary and sufficient condition for A to be invertible is that it has full rank (its determinant is nonzero).
    5. [I A^-1] can be obtained by row reducing [A I].
    6. All the equivalent characterizations of a full-rank (invertible) matrix: P129, P179.
    7. LU decomposition: A = LU, where L is a square lower triangular matrix with 1s on the diagonal and U is an m*n upper triangular matrix. L is the inverse of the product of the elementary row-operation matrices, and U is an echelon form of A. Computing L does not require forming each elementary matrix explicitly; see P146.
    8. Definitions of subspace, column space, and null space.
    9. For an m*n matrix A: rank(A) + dim(Nul A) = n.
    10. The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {0} is defined to be zero.
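A minimal sketch (assuming Python with numpy and scipy; the matrices are made up) of items 1, 7, and 9: the columns of a product, LU decomposition, and the rank-nullity relation.

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[2., 1.],
                  [1., 3.]])
    B = np.array([[1., 0., 2.],
                  [4., 1., 0.]])

    # Item 1: column j of AB is A times column j of B
    AB = A @ B
    print(np.allclose(AB[:, 1], A @ B[:, 1]))

    # Item 7: A = P L U with L unit lower triangular and U upper triangular
    # (scipy adds a permutation P because it pivots for numerical stability)
    P, L, U = lu(A)
    print(np.allclose(A, P @ L @ U))

    # Item 9: rank(A) + dim Nul(A) = number of columns n
    M = np.array([[1., 2., 1.],
                  [2., 4., 0.]])                 # 2x3 matrix of rank 2, nullity 1
    rank = np.linalg.matrix_rank(M)
    print(rank + (M.shape[1] - rank) == M.shape[1])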
Section III: Introduction to Determinants
    1. The definition and computation of the determinant.
    2. A row replacement operation does not change the determinant. Swapping two rows changes its sign. Multiplying a row by k multiplies the determinant by k.
    3. The determinant of a triangular matrix is the product of its diagonal entries.
    4. det(AB) = det(A) * det(B).
    5. Let A be an invertible n*n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by x_i = det A_i(b) / det(A), where A_i(b) denotes A with column i replaced by b (Cramer's rule; see the sketch after this list).
    6. From 5 one can deduce A^-1 = (1/det A) * adj A, where adj A = [(-1)^(i+j) * det(A_ji)].
    7. The relationship between determinants and volume: the area or volume of the parallelogram or parallelepiped determined by the columns of A equals |det(A)|, and det(AP) = det(A) * det(P).
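A minimal numpy sketch (the matrix and vector values are made up) of items 4 and 5: the product rule for determinants and Cramer's rule.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])
    B = np.array([[0., 1.],
                  [4., 2.]])
    b = np.array([5., 10.])

    # Item 4: det(AB) = det(A) * det(B)
    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

    # Item 5 (Cramer's rule): x_i = det(A_i(b)) / det(A), A_i(b) = A with column i replaced by b
    x = np.empty(2)
    for i in range(2):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / np.linalg.det(A)
    print(np.allclose(A @ x, b))     # the entries solve Ax = b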
Section IV: Vector Spaces
    1. An indexed set {v1, v2, ..., vp} of two or more vectors, with v1 != 0, is linearly dependent if and only if some vj (with j > 1) is a linear combination of the preceding vectors.
    2. Elementary row operations on a matrix do not affect the linear dependence relations among the columns of the matrix.
    3. Row operations can change the column space of a matrix.
    4. x = P_B [x]_B: we call P_B the change-of-coordinates matrix from B to the standard basis in R^n.
    5. Let B and C be bases of a vector space V. Then there is a unique n*n matrix P_{C<-B} such that [x]_C = P_{C<-B} [x]_B. The columns of P_{C<-B} are the C-coordinate vectors of the vectors in the basis B, that is, P_{C<-B} = [[b1]_C [b2]_C ... [bn]_C]. It can be computed by row reducing [C B] ~ [I P_{C<-B}] (see the sketch after this list).
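A minimal sympy sketch (the two bases are made-up examples) of item 5: building P_{C<-B} by row reducing [C B] and checking that it converts B-coordinates into C-coordinates.

    from sympy import Matrix

    B = Matrix([[1, 0],
                [1, 1]])            # columns form the basis B
    C = Matrix([[1, 1],
                [0, 2]])            # columns form the basis C

    # Row reduce [C B] ~ [I  P_{C<-B}] and take the right half
    P = C.row_join(B).rref()[0][:, 2:]

    x_B = Matrix([3, 2])            # coordinates of some x relative to B
    x = B * x_B                     # the vector x itself
    x_C = P * x_B                   # coordinates relative to C via the theorem
    print(C * x_C == x)             # True: both coordinate vectors describe the same x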
Section V: Eigenvectors and Eigenvalues
    1. \(Ax = \lambda x\)
    2. Eigenvectors corresponding to distinct eigenvalues are linearly independent.
    3. \(\det(A - \lambda I) = 0\), because \((A - \lambda I)x = 0\) must have a nonzero solution.
    4. A is similar to B if there is an invertible matrix P such that P^-1 A P = B. Similar matrices have the same eigenvalues.
    5. A matrix is diagonalizable if and only if it has n linearly independent eigenvectors (there are infinitely many eigenvectors, but the number of linearly independent ones is at most n).
    6. The dimension of each eigenspace is less than or equal to the algebraic multiplicity of its eigenvalue. The matrix is diagonalizable exactly when every eigenspace dimension equals the corresponding algebraic multiplicity.
    7. The matrix of a linear transformation between spaces of different dimensions relative to chosen bases: P328. The matrix of the same transformation in different coordinate systems: P329. In essence these are the same construction.
    8. Suppose A = PDP^-1, where D is a diagonal n*n matrix. If B is the basis for R^n formed from the columns of P, then D is the B-matrix of the transformation x -> Ax. After changing coordinates via P, the transformation is represented by a diagonal matrix.
    9. Complex eigenvalues.
    10. Iterating to find eigenvalues and eigenvectors (the power method; see the sketch after this list). First pick an initial vector \(x_0\) whose largest entry is 1 (together with a rough eigenvalue estimate), then iterate as described on P365. The reason the iteration finds the largest eigenvalue is as follows: because \((\lambda_1)^{-k} A^k x \rightarrow c_1 v_1\) for (almost) any \(x\), as k approaches infinity \(A^k x\) points in the same direction as the dominant eigenvector. Although \(\lambda_1\) and \(c_1 v_1\) are unknown, \(A x_k\) approaches \(\lambda_1 x_k\), so if we always rescale \(x_k\) so that its largest entry is 1, we can read off \(\lambda_1\).
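A minimal numpy sketch of the power method in item 10 (the matrix and iteration count are made-up examples): rescale x_k so that its largest entry is 1, and the scaling factor converges to the dominant eigenvalue.

    import numpy as np

    A = np.array([[4., 1.],
                  [2., 3.]])            # eigenvalues 5 and 2; dominant eigenvalue is 5
    x = np.array([1., 0.])              # initial vector whose largest entry is 1

    for _ in range(25):
        y = A @ x                        # one multiplication by A
        mu = y[np.argmax(np.abs(y))]     # entry of largest magnitude: estimate of lambda_1
        x = y / mu                       # rescale so the largest entry is 1

    print(mu)                            # ~5.0, the dominant eigenvalue
    print(x)                             # ~[1, 1], the corresponding eigenvector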

Section VI: Inner Product, Length, and Orthogonality
    1. \((\mathrm{Row}\ A)^{\bot} = \mathrm{Nul}\ A\) and \((\mathrm{Col}\ A)^{\bot} = \mathrm{Nul}\ A^{\top}\), where \(W^{\bot}\) denotes the space of all vectors orthogonal to the space W (its orthogonal complement).
    2. An orthogonal basis for a subspace W of \(R^n\) is a basis for W that is also an orthogonal set.
    3. The projection of a vector onto a line L spanned by u: \(\hat{y} = \mathrm{proj}_L y = \frac{y\cdot u}{u\cdot u}u\).
    4. A set is an orthonormal set if it is an orthogonal set of unit vectors.
    5. An m*n matrix U has orthonormal columns if and only if \(U^{\top} U = I\).
    6. The projection of a vector onto a subspace W with orthogonal basis \(\{u_1, \dots, u_p\}\): \(\hat{y} = \mathrm{proj}_W y = \frac{y\cdot u_1}{u_1\cdot u_1}u_1 + \frac{y\cdot u_2}{u_2\cdot u_2}u_2 + \dots + \frac{y\cdot u_p}{u_p\cdot u_p}u_p\).
    7. How to turn a set of vectors into orthogonal unit vectors (the Gram-Schmidt process): repeatedly subtract the projections from item 3, then normalize.
    8. QR decomposition: if A has linearly independent columns, then A = QR, where Q has orthonormal columns (obtained from Gram-Schmidt on the columns of A) and R is an upper triangular matrix (the coefficients of the original columns in the orthonormal basis). \(Q^{\top}A = Q^{\top}(QR) = IR = R\).
    9. The
    10. The least-squares solution LSE (the machine learning basis: the linear fitting problem in the non-Bayesian setting) is obtained from \(A^{\top}(b - A\hat{x}) = 0\), giving \(\hat{x} = (A^{\top} A)^{-1}A^{\top}b\). If A is invertible, this formula simplifies. If a QR decomposition is available, then \(\hat{x} = R^{-1}Q^{\top}b\) (see the sketch after this list).
    11. The concept of an inner product on a space of functions.
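A minimal numpy sketch (the data values are made up) of items 8 and 10: the normal equations and the QR route give the same least-squares solution.

    import numpy as np

    A = np.array([[1., 0.],
                  [1., 1.],
                  [1., 2.]])               # linearly independent columns
    b = np.array([1., 2., 4.])

    # Item 10: normal equations A^T (b - A x_hat) = 0  =>  x_hat = (A^T A)^-1 A^T b
    x_normal = np.linalg.solve(A.T @ A, A.T @ b)

    # Item 8: A = QR with orthonormal Q and upper triangular R, then x_hat = R^-1 Q^T b
    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)

    print(np.allclose(x_normal, x_qr))     # True: both routes give the same fit
    print(x_normal)                        # intercept and slope of the fitted line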
Section VII: Diagonalization of Symmetric Matrices
    1. If a matrix is symmetric, the eigenspaces corresponding to any two distinct eigenvalues are orthogonal.
    2. A matrix can be orthogonally diagonalized if and only if it is a symmetric matrix.
    3. \(A = PDP^{-1}\) leads to PCA (the machine learning algorithm principal component analysis: diagonalization of the symmetric covariance matrix).
    4. A quadratic form can be transformed into one without cross-product terms: substitute x = Py, with \(A = PDP^{-1}\).
    5. For a quadratic form \(x^{\top}Ax\) with |x| = 1, the maximum value is the largest eigenvalue and the minimum value is the smallest eigenvalue. If the direction of the largest eigenvalue is excluded (by additionally requiring \(x^{\top}u_1 = 0\)), the maximum becomes the second-largest eigenvalue.
    6. Intuitively, the orthogonal matrix P describes the coordinate system (the principal axes) in which the quadratic form has no cross terms, and D gives the scaling along each axis.
    7. SVD decomposition (the last topic of the book, which ties together much of the above) factors a matrix into a form similar to PDP^-1, but not every matrix can be written as PDP^-1 (that requires n linearly independent eigenvectors, and an orthogonal P additionally requires a symmetric matrix). Instead \(A = U{\Sigma}V^{\top}\), where \({\Sigma}\) holds the singular values (the square roots of the eigenvalues of \(A^{\top}A\)), V holds the corresponding eigenvectors of \(A^{\top}A\), and U is the normalization of \(AV\). The vectors in AV are mutually orthogonal, and \(U{\Sigma}\) is another expression of AV (see the sketch after this list).
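A minimal numpy sketch (the matrices are made-up examples) of items 2 and 7: orthogonal diagonalization of a symmetric matrix, and the relation between the SVD and the eigenvalues of \(A^{\top}A\).

    import numpy as np

    S = np.array([[3., 1.],
                  [1., 3.]])                       # symmetric matrix
    evals, P = np.linalg.eigh(S)                   # P orthogonal, S = P D P^T
    D = np.diag(evals)
    print(np.allclose(S, P @ D @ P.T))             # orthogonal diagonalization holds

    A = np.array([[1., 2.],
                  [0., 1.],
                  [1., 0.]])                       # a general (non-square) matrix
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    eig_AtA = np.linalg.eigvalsh(A.T @ A)          # eigenvalues of A^T A (ascending)
    print(np.allclose(np.sort(sigma**2), eig_AtA)) # singular values squared = eigenvalues of A^T A
    print(np.allclose(A @ Vt.T, U * sigma))        # A V = U Sigma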
