1. Eigenvalues and eigenvectors
When a matrix acts on certain vectors, the effect is the same as multiplying the vector by a constant: Ax = λx. The constant λ is called an eigenvalue of A, and the nonzero vector x is the corresponding eigenvector.
λ is an eigenvalue of A only when (A − λI)x = 0 has a nontrivial solution.
Eigenvectors corresponding to distinct eigenvalues form a linearly independent set.
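As a quick numeric check, here is a minimal sketch (using NumPy; the matrix and eigenpair are an invented example, not from the book) of both facts: λ is an eigenvalue exactly when det(A − λI) = 0, and Ax = λx holds for the eigenvector.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam = 3.0                                    # candidate eigenvalue
# (A - lam I) x = 0 has a nontrivial solution iff det(A - lam I) = 0
print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))  # True

x = np.array([1.0, 1.0])                     # corresponding eigenvector
print(np.allclose(A @ x, lam * x))           # A x = lam x: True
```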
Diagonalization
In many cases A can be factored as A = PDP⁻¹, where D is a diagonal matrix; this greatly simplifies computation (powers of A, for example).
A is diagonalizable exactly when it has n linearly independent eigenvectors: P has those n eigenvectors as its columns, and D carries the corresponding eigenvalues on its diagonal.
Even when A does not have n distinct eigenvalues, it is still diagonalizable if the dimensions of its eigenspaces sum to n.
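To make the factorization concrete, here is a minimal sketch (assuming NumPy; the matrix is an arbitrary example) that builds P from the eigenvectors, D from the eigenvalues, and verifies A = PDP⁻¹ along with the cheap computation of powers:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors of A
D = np.diag(eigvals)             # eigenvalues on the diagonal

# Verify the diagonalization A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True

# Diagonalization makes powers cheap: A^k = P D^k P^{-1}
k = 5
print(np.allclose(np.linalg.matrix_power(A, k),
                  P @ np.diag(eigvals**k) @ np.linalg.inv(P)))  # True
```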
2. Orthogonality and least squares
Orthogonal projection, Best Approximation Theorem
Method for constructing an orthonormal basis: Gram-Schmidt, and the corresponding QR decomposition.
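A hedged sketch of both ideas (the gram_schmidt function and the test matrix are illustrative, not from the book; NumPy supplies the reference QR):

```python
import numpy as np

def gram_schmidt(A):
    """Return Q whose orthonormal columns span the column space of A."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):                      # subtract projections onto
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]  # earlier orthonormal vectors
        Q[:, j] = v / np.linalg.norm(v)         # normalize
    return Q

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # columns are orthonormal: True

# QR decomposition packages the same process: A = Q R, R upper triangular
Q2, R = np.linalg.qr(A)
print(np.allclose(Q2 @ R, A))           # True
```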
Least Squares
The equation Ax = b may have no solution; we then look for an x̂ that brings Ax̂ as close to b as possible.
This x̂ is the least-squares solution: ‖b − Ax̂‖ ≤ ‖b − Ax‖ for every x in Rⁿ.
Finding it reduces to projecting b onto Col A, because the projection b̂ is the point of Col A closest to b, and the least-squares solutions are exactly the solutions of
\[
A \hat{x} = \hat{b}
\]
An example in the book shows that when the columns of A are not linearly independent, the normal equations AᵀAx̂ = Aᵀb have free variables, so the least-squares solution is not unique; the projection b̂, however, is unique.
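A minimal sketch (NumPy; the data are invented for illustration) of the least-squares machinery: solving the normal equations directly, cross-checking against np.linalg.lstsq, and computing the projection b̂ = Ax̂:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Normal equations: A^T A x_hat = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# Same answer from the library routine
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_ls))  # True

# The projection of b onto Col A: the point of Col A closest to b
b_hat = A @ x_hat
print(b_hat)                     # [ 5.  2. -1.]
```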
3. Symmetric matrices
The book moves from the general to the more specialized: the more special a matrix is, such as a symmetric matrix, the more and stronger properties it has.
For a symmetric matrix, the diagonalizing matrix P above can be chosen to be an orthogonal matrix; conversely, a matrix that is orthogonally diagonalizable must be symmetric.
Moreover, a symmetric matrix is guaranteed to be diagonalizable.
A symmetric matrix therefore has a spectral decomposition A = PDPᵀ, where the inverse of the orthogonal P is simply its transpose.
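A minimal sketch (NumPy; the symmetric matrix is an arbitrary example) of the spectral decomposition, including the check that the orthogonal P satisfies P⁻¹ = Pᵀ:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eigh(A)   # eigh: for symmetric matrices, P is orthogonal
print(np.allclose(P.T @ P, np.eye(2)))             # P^{-1} = P^T: True
print(np.allclose(A, P @ np.diag(eigvals) @ P.T))  # A = P D P^T: True

# Spectral decomposition as a sum of rank-one projections lam_i u_i u_i^T
S = sum(lam * np.outer(u, u) for lam, u in zip(eigvals, P.T))
print(np.allclose(A, S))  # True
```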
Quadratic Form
A quadratic form is Q(x) = xᵀAx, where A is a symmetric matrix.
If A is diagonal, Q contains no cross-product terms; otherwise the cross terms can be removed by the change of variable x = Py, using the orthogonal P from the diagonalization of the symmetric matrix above, since then xᵀAx = yᵀDy with D diagonal.
Classification of quadratic forms: positive definite, negative definite, or indefinite. After the coordinate change of the Principal Axes Theorem, the classification depends entirely on the signs of the eigenvalues, as the sketch below shows.
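A hedged sketch of that classification (the classify helper and the test matrices are illustrative, not from the book):

```python
import numpy as np

def classify(A):
    """Classify the quadratic form x^T A x by the signs of A's eigenvalues."""
    eigvals = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix
    if np.all(eigvals > 0):
        return "positive definite"
    if np.all(eigvals < 0):
        return "negative definite"
    if np.any(eigvals > 0) and np.any(eigvals < 0):
        return "indefinite"
    return "semidefinite"

print(classify(np.array([[3.0, 0.0], [0.0, 2.0]])))   # positive definite
print(classify(np.array([[3.0, 0.0], [0.0, -2.0]])))  # indefinite
```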
Constrained optimization (the extreme values of a quadratic form when x is restricted to unit vectors).
Here the problem is to optimize the quadratic form Q(x) over all unit vectors x.
The conclusion: the maximum and minimum of Q(x) are the largest and smallest eigenvalues of A, respectively, attained at the corresponding unit eigenvectors.
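A numeric check of this conclusion (NumPy; the matrix and sample size are illustrative): random unit vectors never push Q(x) outside the eigenvalue range, while the unit eigenvectors attain the extremes exactly:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigvals, P = np.linalg.eigh(A)    # ascending eigenvalues: 2, 4

# Sample random unit vectors and evaluate the quadratic form Q(x) = x^T A x
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Q = np.einsum('ij,jk,ik->i', X, A, X)
print(Q.min(), Q.max())           # close to 2 and 4, never outside

u_min, u_max = P[:, 0], P[:, 1]   # unit eigenvectors
print(u_min @ A @ u_min, u_max @ A @ u_max)  # 2.0 and 4.0 exactly
```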
LU decomposition (solving equations, built from elementary matrix operations), QR decomposition (orthonormal basis, orthogonal transformation), singular value decomposition (a diagonal form for arbitrary matrices).
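For reference, a quick sketch of all three factorizations (SciPy for LU, NumPy for QR and SVD; the matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

P, L, U = lu(A)                  # LU: permutation, lower and upper triangular
print(np.allclose(P @ L @ U, A))

Q, R = np.linalg.qr(A)           # QR: orthonormal columns, upper triangular
print(np.allclose(Q @ R, A))

U2, s, Vt = np.linalg.svd(A)     # SVD: orthogonal factors, singular values
print(np.allclose(U2 @ np.diag(s) @ Vt, A))
```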