A few months ago I happened across Meng's blog, and reading it gave me a much deeper understanding of matrices. Let me summarize it here and share it.
"According to current international standards, linear algebra is expressed by axiomatic, it is the second generation mathematical model, ..., which brings about teaching difficulties." "In fact, when we started to learn linear algebra, unknowingly entered the" second-generation Mathematical model "category, which means that the mathematical expression and abstraction has a comprehensive evolution, for the childhood has been in the" first generation of mathematical model ", that is, practical-oriented, specific mathematical models to learn from us, It would be strange to be so paradigm without and clearly informing the shift. The most familiar space of our people, there is no doubt that we live in it (according to Newton's absolute space-time view) of the three-dimensional space, mathematically speaking, this is a three-dimensional Euclidean space, we first no matter how much, first look at the familiar with such a space some of the most basic characteristics. Think about it and we'll know that this three-dimensional space: 1. Consists of many (actually infinitely multiple) positional points; 2. There is a relative relationship between these points; 3. You can define length and angle in space, 4. This space can accommodate motion, where we call movement from one point to another, rather than as a "continuous" movement in the sense of calculus, the most crucial of these properties is the 4th article. 1th, 2 can only be said to be the basis of space, not a special nature of space, all the discussion of mathematical problems, must have a set, most of the set to define some structure (relationship), not to say that with these even space. And the 3rd is too special, other space does not need to have, more is not the key nature. Only the 4th is the essence of space, that is to say, accommodating motion is the essential characteristic of space. Recognizing these, we can extend our understanding of three-dimensional space to other spaces. In fact, no matter what space it is, it must accommodate and support the rule-compliant movement (transformation) in which it occurs. You will find that in some space there is often a corresponding transformation, such as topological transformations in the topological space, linear transformations in the linear space, affine transformation in the affine space, in fact, these transformations are only the permissible form of motion in the corresponding space. So as long as it is known that "space" is a set of objects that hold motion, the transformation stipulates the motion of the corresponding space. The motion in a linear space is called a linear transformation. In other words, you move from one point in a linear space to any other point, and you can do it through a linear change. So how does a linear transformation represent it. Interestingly, in a linear space, when you select a group of bases, you can use a vector to describe any object in the space, and you can use a matrix to describe any movement (transformation) in that space. The way to make a corresponding motion of an object is to multiply the matrix that represents that motion by multiplying the vector representing that object. In short, after selecting a base in a linear space, the vector portrays the object,The matrix depicts the motion of the object and uses the multiplication of matrix and vector to exert motion. Yes, the essence of matrices is the description of motion. 
If someone asks you what a matrix is, you can tell them loudly: the essence of a matrix is a description of motion.
A matrix is a description of a linear transformation in a linear space. In a linear space, as long as we select a basis, any linear transformation can be described by a definite matrix. The key to understanding this is to distinguish between "a linear transformation" and "a description of a linear transformation". One is the object; the other is an expression of that object. It is just like object-oriented programming, which we are all familiar with: an object can have multiple references, each with a different name, yet they all refer to the same object.

If that is still not vivid enough, here is a rather crude analogy. Suppose you have a pig and you want to photograph it. Once you choose a position for the camera, you can take a picture of the pig. That picture can be seen as a description of the pig, but only a one-sided one, because placing the camera at a different position yields a different photograph, which is another one-sided description of the same pig. All these photographs are descriptions of the same pig, but none of them is the pig itself.

Similarly, for a linear transformation, once you select a basis you can find a matrix that describes it. Change the basis and you get a different matrix. All of these matrices are descriptions of the same linear transformation, but none of them is the linear transformation itself.

But here is the problem: if I give you two photographs of pigs, how do you know whether they show the same pig? Likewise, if you give me two matrices, how do I know whether they describe the same linear transformation? If two matrices describing the same linear transformation were like brothers of one clan who fail to recognize each other when they meet, that would be a joke. Fortunately, we can find a property shared by all the "matrix brothers" of the same linear transformation, namely: if matrices A and B are two different descriptions of the same linear transformation (different because different bases, that is, different coordinate systems, were chosen), then there must exist a nonsingular matrix P such that A and B satisfy the relationship A = P⁻¹BP.

Readers somewhat familiar with linear algebra will recognize this as the definition of similar matrices. Yes, so-called similar matrices are simply different description matrices of the same linear transformation. By this definition, photographs of the same pig taken from different angles could also be called "similar photographs". A bit vulgar, but it gets the point across. The matrix P in the equation above is precisely the transformation relationship between the basis underlying matrix A and the basis underlying matrix B. This conclusion can be proved in a very intuitive way (rather than with the formal proofs found in most textbooks); if I have time I will add it to this blog later.

This discovery is too important. So a family of similar matrices is just a set of descriptions of one and the same linear transformation; no wonder similarity matters so much. In graduate engineering courses such as matrix theory and matrix analysis, all sorts of similarity transformations are taught, such as similar canonical forms, diagonalization, and so on, and all of them require the transformed matrix to be similar to the original one. Why require that? Because only then can we guarantee that the matrices before and after the transformation describe the same linear transformation.
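Here is a small numerical check of this claim, a sketch of my own with made-up numbers: take B as a transformation's matrix in the standard basis, let the columns of P express a second basis in standard coordinates, and form A = P⁻¹BP, the same transformation's matrix in that second basis. Applying either matrix in its own coordinate system moves the same point to the same place.

```python
import numpy as np

# One linear transformation, two descriptions (numbers chosen only for illustration).
B = np.array([[2.0, 1.0],      # the transformation written in the standard basis
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],      # columns of P = the new basis, in standard coordinates
              [0.0, 1.0]])

A = np.linalg.inv(P) @ B @ P   # the same transformation in the new basis: A = P^{-1} B P

x_new = np.array([1.0, 2.0])   # coordinates of some point in the new basis
x_std = P @ x_new              # the same point in standard coordinates

# Apply the transformation in either coordinate system and compare the results:
y_std_via_B = B @ x_std        # transform directly in standard coordinates
y_std_via_A = P @ (A @ x_new)  # transform in new coordinates, then convert back

print(np.allclose(y_std_via_B, y_std_via_A))   # True: A and B describe the same motion
```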
Of course, the different matrix descriptions of the same linear transformation are not all equally good from the standpoint of practical computation. Some describing matrices have much better properties than others. This is easy to understand: photographs of the same pig can also be more or less flattering. So a similarity transformation can turn an ugly matrix into a prettier one, while guaranteeing that both matrices describe the same linear transformation.
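Diagonalization, mentioned above, is exactly such a "beautification". A sketch of my own (with made-up numbers): moving to the basis of eigenvectors turns the matrix into a diagonal one, and the similarity relation guarantees that both matrices describe the same transformation.

```python
import numpy as np

# A plain-looking matrix (numbers chosen only for illustration) that is diagonalizable.
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, V = np.linalg.eig(M)      # columns of V are eigenvectors of M
D = np.linalg.inv(V) @ M @ V       # similarity transform into the eigenvector basis

print(np.round(D, 10))             # (numerically) diagonal, eigenvalues 5 and 2 on the diagonal
print(np.allclose(V @ D @ np.linalg.inv(V), M))  # True: D and M describe the same transformation
```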
Having read this far, I want to jot down a few takeaways: 1. when learning mathematics, pay attention to the systematic, axiomatic way of learning; 2. understand matrices through the image of motion; 3. similar matrices are a family of descriptions of the same transformation with respect to different bases, that is, the motion being described is the same, only viewed from a different angle (don't think of the basis as "dimensions"; it can be understood as a point of view).
I also saw a reply from a friend that I found very insightful and got a lot out of; I reproduce it below.
"At the end of my opinion, I'll make a fuss:
I think there are two math courses in graduate school that must be taken (in addition to the required numerical analysis and probability and mathematical statistics): one is functional analysis, and the other is matrix theory.
The importance of matrix theory can be appreciated gradually once you start working, but most of us never really understand functional analysis, so it is hard to recognize its importance. In fact, although functional analysis is abstract and difficult to apply directly in one's work, it helps us reach a more essential understanding of many problems. Two examples. When sampling comes up, everyone's first reaction is surely "twice the bandwidth" (the sampling theorem). If you learned it solidly, you may be able to explain more clearly why it is twice. But my understanding of sampling is this: sampling is really an orthogonal decomposition, and the sample values are nothing but the coefficients of the decomposition under a particular set of orthogonal basis functions. If the original signal lies in the linear subspace spanned by that orthogonal basis, then the signal can be reconstructed without distortion (this is what satisfying the sampling theorem means). Friends who have studied signal processing will know which orthogonal basis this is. :) The second example concerns why the Fourier transform is so important in linear system theory. The answers can vary, but I think my understanding goes deeper: the reason is that the Fourier basis functions are eigenvectors of every linear time-invariant operator (which connects back to this article). This sentence takes some time to explain fully, but the mere fact that the Fourier transform can be linked to eigenvectors should already strike you as interesting. "
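To make that last point concrete, here is a small numerical sketch of my own, in the finite, periodic setting where the statement is exact: circular convolution with any impulse response h (an LTI operator) sends each discrete Fourier exponential to a scalar multiple of itself, so those exponentials are its eigenvectors, and the scalar is the frequency response at that frequency.

```python
import numpy as np

# An arbitrary LTI system in the periodic setting: circular convolution with h.
N = 64
rng = np.random.default_rng(0)
h = rng.standard_normal(N)                       # impulse response (arbitrary numbers)

k = 5                                            # pick one frequency
e_k = np.exp(2j * np.pi * k * np.arange(N) / N)  # the k-th Fourier basis vector

# Circular convolution of h with e_k, computed via the DFT.
y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(e_k))

H_k = np.fft.fft(h)[k]                           # the system's frequency response at k
print(np.allclose(y, H_k * e_k))                 # True: the output is e_k scaled by H_k
```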