Linear Algebra and Matrix Theory
Book recommendations:
Linear algebra: Among domestic books, I think Li Shangzhi's Linear Algebra and Lan Yizhong's Concise Course in Higher Algebra are very good; the concepts are explained in a very accessible way. For training in computational technique, I recommend Xu Yichao's Linear Algebra and Matrix Theory (2nd edition), which contains the legendary "hole-punching" (block elimination) tricks. Gong Sheng wrote a small book, Five Lectures on Linear Algebra, which takes a very high-level viewpoint; reading it requires some background in algebra.
The best foreign book, I think, is Strang's Linear Algebra and Its Applications; the latest is the third edition, and it may keep you too excited to fall asleep at night. There is a Chinese translation of the 2nd edition by Hou Zhiyu, titled Linear Algebra and Its Applications. The textbook Strang uses for his MIT lectures is Introduction to Linear Algebra; I could not find an electronic copy. In recent years domestic publishers have also introduced David C. Lay's Linear Algebra and Its Applications and Leon's Linear Algebra with Applications, both of which are good.
I recently read David Poole's Linear Algebra; its content is similar to Lay's book, but the explanations are even clearer. It is a rare find.
Foreign linear algebra books generally tie in some numerical analysis and also cover topics that domestic books rarely mention, such as the SVD and LMS, and sometimes a little about the pseudo-inverse, usually in combination with applications. They are very well explained and make you feel that linear algebra is beautiful.
Matrix theory:
Carl D. Meyer's Matrix Analysis and Applied Linear Algebra is easy to understand and can serve as a transitional book from linear algebra to matrix theory.
Zhang Xianda's Matrix Analysis and Applications and Horn & Johnson's Matrix Analysis are reference manuals that I flip through constantly.
Fang Baorong's matrix book has several good chapters, for example the chapter on the generalized inverse.
Cheng's Matrix Theory is now in its 3rd edition (not much different from the 2nd). It is the doctoral-exam reference book at many schools; I find it mediocre.
Matrix computations:
Watkins' Fundamentals of Matrix Computations: the most accessible and readable book on matrix computations; don't miss it!
Gene H. Golub's Matrix Computations: a classic among classics; reviews are easy to find online.
Lloyd N. Trefethen's Numerical Linear Algebra is another book good enough to cost you sleep; people have also recommended James W. Demmel's Applied Numerical Linear Algebra.
G. W. Stewart has a two-volume Matrix Algorithms; if you want to dig deep into a particular algorithm, it is worth a look. Stewart's Introduction to Matrix Computations, although a bit dated, is also a good introductory book.
Statistical matrix theory:
These are matrix books written with statistics in mind; if you want to study statistics seriously you probably cannot get around this material. They include the following:
David A. Harville, Matrix Algebra From a Statistician's Perspective
James E. Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics
George A. F. Seber, A Matrix Handbook for Statisticians (Wiley Series in Probability and Statistics)
C. R. Rao's Linear Statistical Inference and Its Applications: the first chapter covers much of the matrix knowledge needed in statistics. There is also a tutorial handout, Reference Materials on Linear Statistics and Linear Algebra, written by Nu and Chen Shiyin.
Other:
If you need to learn more about the generalized inverse, the relatively recent reference is Ben-Israel's Generalized Inverses (Springer), whose 2nd edition came out in 2003; the first edition has a Chinese translation that should be findable in a library. I recall that C. R. Rao also has a monograph on generalized inverses, but I have not seen it.
Bellman's older Introduction to Matrix Analysis has an electronic version available online.
There is also an online video course from Southeast University, "Engineering Matrix Theory". It is worth listening to, but it spends most of its time on linear spaces and other basics, with too little on matrices themselves.
Intuitive explanations of the core concepts
Recently, while reviewing matrix theory, facing a pile of theorems and proofs I suddenly realized that after studying matrix theory and linear algebra for so long, all I remembered was a bunch of inexplicable theorems; the essential things were still not clear to me.
For example, why do we need matrices at all? A matrix is just an arrangement of numbers, and a set is also a collection of numbers, so why can't sets replace matrices?
What do eigenvalues and eigenvectors mean? What exactly does the "characteristic" (feature) describe?
What is the meaning of matrix multiplication?
In what sense are similar transformations actually "similar"?
What does the determinant mean? Why does it have such "weird" computation rules?
The following three articles, found online, I find relatively clear and easy to understand.
1: Understanding the Matrix
Http://blog.csdn.NET/is01sjjj/archive/2008/09/03/2874132.aspx
2: Geometric meaning of eigenvectors
Http://blog.csdn.Net/lfkupc/archive/2009/09/17/4561564.aspx
By the way, let me add my own understanding.
The definition of an eigenvector: Ax = λx, where A is the matrix representation of a linear map under a chosen basis.
Left-hand side: Ax is the linear transformation A applied to x.
Right-hand side: λx can be understood as not changing the direction of x (up to a possible reversal), only stretching it, with stretch factor λ; it can also be seen as the simplest linear transformation, a scaling.
In summary: the eigenvectors are a set (not just one) of special vectors; the linear transformation A does not change their direction, only their length.
As for the word "characteristic", my understanding is:
Because eigenvectors have the nice property of not changing direction under the linear transformation, they can be used as a reference frame (you can also think of it as a coordinate system), provided there are enough of them to form a basis. With this frame we can characterize how any other vector changes under the transformation, since any other vector can be expressed as a linear combination of these eigenvectors.
Mathematically this is expressed by the spectral theorem:
Spectral theorem: a linear transformation (represented by a matrix) can be written as a combination built from its eigenvectors, with the corresponding eigenvalues as coefficients; the formula is shown below.
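(The formula itself is missing from this copy. As a sketch of what is presumably intended: for a real symmetric matrix A with eigenvalues λ_i and an orthonormal set of eigenvectors v_i, the spectral decomposition reads)

```latex
A = \sum_{i=1}^{n} \lambda_i \, v_i v_i^{\top},
\qquad
A x = \sum_{i=1}^{n} \lambda_i \,(v_i^{\top} x)\, v_i \quad \text{for any vector } x .
```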
Going further, a "transformation" can be understood as a motion: moving one point to another. And "motion is relative": it needs a frame of reference. The eigenvectors are that frame of reference.
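As a minimal numerical sketch of this "eigenvectors as a reference frame" idea (using NumPy; the matrix and vectors below are made up for illustration):

```python
import numpy as np

# A symmetric matrix, so it has a full orthonormal set of eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # columns of eigvecs are the eigenvectors

# Each eigenvector keeps its direction; A only stretches it by the factor lambda_i.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Any other vector x can be expressed in the eigenvector "reference frame";
# the action of A is then just a stretch along each eigen-direction.
x = np.array([1.0, -2.0])
coeffs = eigvecs.T @ x                       # coordinates of x in the eigenbasis
Ax_via_eigen = eigvecs @ (eigvals * coeffs)  # stretch each coordinate by lambda_i
assert np.allclose(A @ x, Ax_via_eigen)
print("A @ x =", A @ x)
```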
3: The relevant Wikipedia entries:
Http://zh.wikipedia.org/zh/%E7%89%B9%E5%BE%81%E5%90%91%E9%87%8F
Http://zh.wikipedia.org/zh-cn/%E8%A1%8C%E5%88%97%E5%BC%8F
-------------------------------------------------------------------------------
Understanding the Matrix, Parts One, Two and Three (reposted from Meng Yan's blog)
One
Not long ago Chensh, for purposes best left unsaid, decided to play teacher and teach someone linear algebra. So I got dragged in, and he discussed with me, several times, some of the more philosophical questions in linear algebra. Clearly, Chensh felt it would be rather difficult to lecture on linear algebra without his strong-willed student concluding he was out of his mind.
Poor Chensh, who told you to walk into this minefield?! Beauty clouded your judgment!
Linear algebra courses, whether they start from determinants or directly from matrices, are riddled with inexplicability from the very beginning. Take, for example, the Tongji linear algebra textbook most widely used in engineering departments across the country (now in its fourth edition): it opens by introducing the odd concept of the "inversion number", which appears out of nowhere and leads nowhere, then uses it to give a thoroughly unintuitive definition of the determinant, followed by some downright tedious properties and exercises about determinants: add this row multiplied by a factor to that row, then subtract that column, a great deal of bustle, yet you cannot see at all what the thing is for. Most mediocre students like me were already a little dizzy at this point: we were still fuzzy about what this thing was, and already we were being made to jump through flaming hoops with it; that is just too "unreasonable"! So some people started skipping class, and many more started copying homework. That was falling right into the trap, because what followed can only be described as a sharp turn: hot on the heels of this baffling determinant comes an equally baffling but far greater fellow: the matrix! Years later I realized that when the teacher blankly drew square brackets around a pile of silly numbers and slowly said, "this thing is called a matrix", what a tragic and bitter chapter of my mathematical career had just opened. From then on, in almost anything with the word "learning" even remotely attached to it, this fellow the matrix never failed to show up. For an idiot like me who never managed to handle linear algebra at the time, boss Matrix's uninvited appearances repeatedly left me humiliated and badly beaten. For a long time, I looked at matrices the way Ah Q looked at the Fake Foreign Devil: rubbing my forehead and taking a detour.
In fact, I am no exception. Engineering students in general find linear algebra hard at first, both at home and abroad. The Swedish mathematician Lars Garding says in his famous Encounter with Mathematics: "If you are not familiar with the concepts of linear algebra and want to study the natural sciences, you are now practically illiterate." However, "by current international standards, linear algebra is expressed through axiomatics; it is a second-generation mathematical model, ..., and this brings difficulties to teaching." Indeed, when we begin to learn linear algebra we have, without realizing it, entered the realm of the "second-generation mathematical model", meaning that the mode of mathematical expression and abstraction has changed wholesale. For those of us who since childhood had been studying within the "first-generation mathematical model", that is, models oriented toward practical, concrete problems, to undergo such a drastic paradigm shift without explicit notice is bound to feel strange.
Most engineering students only reach a reasonable understanding and fluent use of linear algebra after taking follow-up courses such as numerical analysis, mathematical programming, and matrix theory. Even so, many people who can wield linear algebra skilfully as a tool in research and applications remain unclear about many seemingly elementary questions that beginners of the course raise. For example:
* What exactly is a matrix? A vector can be regarded as the representation of an object with n independent properties (dimensions), but then what is a matrix? If we regard a matrix as an expanded, composite "vector" made up of a group of column (or row) vectors, why does this kind of expansion turn out to be so widely useful? In particular, why is precisely the two-dimensional expansion so useful? If we made every element of the matrix itself a vector and expanded once more into a three-dimensional cube of numbers, would it be even more useful?
* Why is matrix multiplication defined the way it is? Why does such a bizarre multiplication rule have such an enormous effect in practice? Many seemingly unrelated problems all end up reducing to matrix multiplication; isn't that remarkable? Is some essential law of the world hidden under this seemingly inexplicable rule? If so, what is it?
* What exactly is a determinant? Why does it have such strange computation rules? What is the essential relationship between a determinant and its corresponding square matrix? Why does only a square matrix have a determinant while a general matrix does not (don't think this question is silly: if necessary, defining a determinant for an m x n matrix is not impossible; the reason it is not done is that there is no need to, but why is there no need)? Moreover, the computation rule of the determinant seems to have no intuitive connection with any operation on the matrix, yet in so many ways it determines properties of the matrix. Is all of this mere coincidence?
* Why can matrices be computed in blocks? Block computation seems so arbitrary; why is it valid?
* For the matrix transpose A^T we have (AB)^T = B^T A^T, and for the matrix inverse A^{-1} we have (AB)^{-1} = B^{-1} A^{-1}. Why do two operations that seem to have nothing to do with each other share such a similar property? Is this just a coincidence?
* Why does P^{-1}AP yield a matrix that is "similar" to A? What does "similar" mean here?
* What is the essence of eigenvalues and eigenvectors? Their definition is surprising: in Ax = λx, the effect of a big matrix A turns out to be no more than that of a tiny number λ. But why is this definition regarded as "characteristic", even "intrinsic"? What exactly do they characterize?
Questions like these often stump even people who have been using linear algebra for years. It is like adults facing a child's endless "whys", finally forced to say "that's just how it is, end of discussion". Faced with such questions, many veterans can in the end only fall back on: "that is how it is defined; just accept it and remember it." But if such questions cannot be answered, then to us linear algebra is a crude, unreasonable, inexplicable bundle of rules; we feel we are not learning a subject but being "thrown", without any explanation, into a world we are forced to inhabit, driven only by the whip of exams, utterly unable to appreciate its beauty, harmony and unity. Many years later we discover that this knowledge is remarkably useful, yet we remain baffled: why does everything work out so conveniently?
This, I believe, is the consequence of the loss of intuition in our linear algebra teaching. Questions of the "what is it really" and "why is it so" kind cannot be answered by purely formal mathematical proof; such proofs do not satisfy the questioner. For instance, proving in the standard way that block operations on matrices are valid does not resolve the questioner's doubt. Their real puzzlement is: why is block computation valid at all? Is it mere coincidence, or is it necessarily determined by some essential property of the object we call a matrix? If the latter, then what is that essence? Give the questions above just a little thought and you will find that none of them can be settled by mathematical proof alone. Teaching the way our textbooks do, proving everything formally, ends up producing students who can use the tool fluently but lack a real, first-hand understanding of it.
Since the rise of the Bourbaki school in France in the 1930s, the axiomatic and systematized presentation of mathematics has been a great success, greatly raising the rigor of the mathematics education we receive. But a controversial side effect of axiomatization is the loss of intuition in general mathematics education. Mathematicians seem to feel that intuition and abstraction are in conflict, and so do not hesitate to sacrifice the former. Many people, however, myself included, are skeptical of this. We do not believe intuition and abstraction must contradict each other; especially in mathematics education and textbooks, helping students build intuition greatly helps them understand abstract concepts and hence the essence of mathematics. Conversely, if only formal rigor is attended to, students end up like mice forced to perform by jumping through flaming hoops, enslaved by dry rules.
As for linear algebra, I have turned the intuitive questions mentioned above over in my mind, on and off, for more than two years, and have read several books on linear algebra, numerical analysis, and algebra four or five times each. The Soviet masterpiece Mathematics: Its Content, Methods and Meaning, Professor Gong Sheng's Five Lectures on Linear Algebra, the Encounter with Mathematics mentioned above, and Thomas A. Garrity's All the Mathematics You Missed all gave me great inspiration. Even so, my understanding of the subject has gone through several rounds of self-denial. For instance, some conclusions from earlier thinking were written up in my blog, and they now look basically wrong. So I intend to record my present understanding in full, partly because I feel it is now more mature and can be brought out to discuss with others and to ask for advice, and partly because, if further understanding later overturns it, the snapshot written down now will still be meaningful.
Since I plan to write quite a lot, it will come in several installments. I don't know whether I will have the time to finish or whether it will be interrupted; we'll just have to see.
--------------------------------------------------------------------------
Today let us talk about how to understand several core concepts of linear spaces and matrices. Most of this is written from my own understanding and is essentially not transcribed from anywhere, so there may be mistakes; I hope they will be pointed out. My aim is to be intuitive, that is, to say what the real problem behind the mathematics is.
First let's talk about "space". This concept is one of the lifelines of modern mathematics: starting from topological spaces and adding definitions step by step, one obtains many kinds of space. A linear space is still rather elementary; define a norm on it and it becomes a normed linear space; a normed linear space that is complete is a Banach space; define angles (an inner product) and you get an inner product space; an inner product space that is complete is a Hilbert space.
In short, there are many kinds of space. If you look at the mathematical definition of any of them, it is roughly: "there is a set, some concept is defined on this set, and certain properties are satisfied"; such a thing can then be called a space. This is a bit odd: why use the word "space" for such sets? As you will see, it is in fact very reasonable.
The space most familiar to us is, without doubt, the three-dimensional space we live in (on Newton's view of absolute space and time); mathematically this is three-dimensional Euclidean space. Never mind the technicalities; let us look at some of the most basic characteristics of this familiar space. A little thought shows that this three-dimensional space: 1. consists of many (in fact infinitely many) points; 2. these points stand in relative relationships to one another; 3. lengths and angles can be defined in the space; 4. the space can accommodate motion. Here by "motion" we mean moving (transforming) from one point to another point, not "continuous" motion in the sense of calculus.
Of the properties above, the most crucial is the fourth. The first and second can only be said to be the basis of a space, not properties peculiar to it: every mathematical discussion involves a set, and most define some structure (relations) on that set, and that alone does not make it a space. The third is too special; other spaces need not have it, so it is not the key property. Only the fourth is the essence of a space; that is, accommodating motion is the essential characteristic of a space.
Once we recognize this, we can extend this understanding of three-dimensional space to other spaces. In fact, whatever the space, it must accommodate and support the kind of motion (transformation) appropriate to it. You will notice that for each kind of space there is a corresponding kind of transformation: topological spaces have topological transformations, linear spaces have linear transformations, affine spaces have affine transformations; these transformations are simply the forms of motion permitted in the corresponding space.
So, as long as you remember that a "space" is a collection of objects that accommodates motion, and that a transformation specifies the corresponding space's form of motion, that is enough.
Now let's look at linear spaces. The definition of a linear space can be found in any book, but since we recognize that a linear space is a space, two basic questions must be settled first:
1. A space is a collection of objects; a linear space is a space, so it too is a collection of objects. What objects, then, does a linear space collect? In other words, what do the objects in a linear space have in common?
2. How is motion in a linear space expressed? That is, how are linear transformations represented?
Let us answer the first question first, and answer it directly without beating around the bush. Any object in a linear space can, by choosing a basis and coordinates, be expressed as a vector. The usual vector spaces I will not dwell on; here are two less mundane examples:
L1. The set of all polynomials of degree at most n forms a linear space; that is, every object in this linear space is a polynomial. If we take x^0, x^1, ..., x^n as a basis, then every such polynomial can be expressed as an (n+1)-dimensional vector, in which the component a_i is just the coefficient of the x^(i-1) term of the polynomial. It is worth noting that there are many ways to choose a basis; any linearly independent set of this kind will do. This uses a concept mentioned later, so I will not elaborate here, just mention it.
L2. The set of all n-times continuously differentiable functions on a closed interval [a, b] forms a linear space; that is, every object in this linear space is a continuous function. For any such continuous function, by the Weierstrass approximation theorem one can find a polynomial of degree at most N whose difference from the continuous function is as small as you like. In this way the problem essentially reduces to L1; there is no need to repeat the argument.
Vectors, then, are very powerful: as long as you find the right basis, you can use a vector to represent any object in a linear space. There is a great deal hidden here, because although a vector looks like a mere column of numbers, in fact, because of its ordering, it carries information not only in the numbers themselves but also in the position of each number. Why is the array the simplest and yet most powerful structure in programming? This is the root cause. But that is another topic; let's not pursue it here.
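As a small sketch of example L1 above (using NumPy; the particular polynomials are made up for illustration), once the basis 1, x, x^2 is fixed, a polynomial of degree at most 2 is just a coefficient vector, and adding or scaling polynomials becomes ordinary vector arithmetic:

```python
import numpy as np

# Basis: 1, x, x^2  (polynomials of degree at most 2 form a 3-dimensional linear space)
p = np.array([3.0, 2.0, 1.0])   # represents 3 + 2x + x^2
q = np.array([1.0, 0.0, -1.0])  # represents 1 - x^2

def eval_poly(coeffs, x):
    """Evaluate the polynomial whose coefficient vector is given in the basis 1, x, x^2, ..."""
    return sum(c * x**i for i, c in enumerate(coeffs))

# Adding or scaling the polynomials corresponds exactly to adding or scaling their vectors.
x0 = 2.0
assert np.isclose(eval_poly(p + q, x0), eval_poly(p, x0) + eval_poly(q, x0))
assert np.isclose(eval_poly(5 * p, x0), 5 * eval_poly(p, x0))
print("(p + q)(2) =", eval_poly(p + q, x0))
```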
To answer the second question, we come to one of the most fundamental questions of linear algebra.
Motion in a linear space is called a linear transformation. That is, you get from one point of a linear space to any other point by means of a linear transformation. So how is a linear transformation represented? Very interestingly, in a linear space, once you select a basis, you can not only use a vector to describe any object in the space, but also use a matrix to describe any motion (transformation) in that space. And the way to apply a motion to an object is to multiply the vector representing the object by the matrix representing the motion.
In short, once a basis is selected in a linear space, vectors describe objects and matrices describe the motion of objects, and the motion is applied by multiplying the matrix with the vector.
Yes, the essence of a matrix is that it is a description of motion. If someone asks you later what a matrix is, you can tell them loudly: the essence of a matrix is a description of motion. (Chensh, I'm talking about you!)
But how interesting: a vector itself can also be viewed as an n x 1 matrix. Isn't it fascinating that objects and motions in a space can be expressed in such a similar way? Can you call it a coincidence? If it is a coincidence, it is a lucky one! It can be said that most of the wonderful properties of linear algebra are directly related to this coincidence.
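A tiny sketch of "multiplying the matrix with the vector applies the motion" (using NumPy; the rotation is just one convenient example of a transformation):

```python
import numpy as np

theta = np.pi / 2                        # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])                 # an object (a point) of the plane
y = R @ x                                # the motion applied to the object
print(y)                                 # approximately [0., 1.]: the point has "jumped" to a new position
```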
Two
Let us continue understanding the matrix.
In the previous installment I said "a matrix is a description of motion", and so far nobody seems to object. But I believe that sooner or later some netizen from a mathematics department will call me out, because in mathematics and physics the concept of motion is tied to calculus. When we learn calculus, someone will always tell you that elementary mathematics is the mathematics of constants, of statics, while higher mathematics is the mathematics of variables, of motion. Everyone repeats this, and almost everyone knows it, but people who really understand what the sentence means do not seem to be many. In short, in human experience motion is a continuous process: to get from point A to point B, even the fastest light needs time to pass point by point along the path between A and B, and this brings in the concept of continuity. And continuity cannot be explained at all unless the concept of the limit is defined. The ancient Greeks were very strong in mathematics, but they lacked the concept of the limit and therefore could not explain motion, and were famously tripped up by Zeno's paradoxes (the flying arrow, swift-footed Achilles who can never overtake the tortoise, and so on, four paradoxes). Since this article is not about calculus I will say no more; interested readers can look at Rediscovering Calculus, written by Professor Qi Minyou. I have only read the beginning of that book, yet it alone made me understand the point of the saying "higher mathematics is the mathematics of motion".
However, in my article "Understanding the Matrix", the concept of "motion" is not the continuous motion of calculus but a change that happens instantaneously. For example, at this moment we are at point A, and after one "motion" we suddenly "jump" to point B, without passing through any point between A and B. Such "motion", or rather "jumping", goes against our everyday experience. But anyone who knows a little quantum physics will immediately point out that a quantum (an electron, say) jumping between energy levels does exactly this: the transition happens instantaneously. So it is not that nature lacks such motion; we simply cannot observe it at the macroscopic scale. In any case, the word "motion" here easily causes ambiguity; more precisely, it should be "transition". So the sentence can be changed to:
"Matrices are the description of transitions in linear spaces."
But this is too physical, that is, too concrete, and not mathematical enough, that is, not abstract enough. So we finally use a genuinely mathematical term: transformation. With this word, we should understand that a so-called transformation is simply the transition from one point (element/object) of a space to another point (element/object). For example, a topological transformation is a transition from one point to another in a topological space; an affine transformation is a transition from one point to another in an affine space. Incidentally, affine space is a sibling of vector space. Friends who know computer graphics are aware that although describing a three-dimensional object only requires three-dimensional vectors, all computer graphics transformation matrices are 4 x 4. As for the reason, many books just write "for convenience of use", which in my view is simply an attempt to muddle through. The real reason is that the graphics transformations applied in computer graphics actually take place in affine space, not in vector space. Think about it: in a vector space, a vector translated parallel to itself is still the same vector, whereas in the real world two parallel line segments can of course not be regarded as the same thing, so the living space of computer graphics is really affine space. And the matrix representation of an affine transformation is inherently 4 x 4. Again, interested readers can look at Geometric Tools for Computer Graphics.
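A small sketch of why graphics uses 4 x 4 matrices (using NumPy; my own illustration, not from the original post): in homogeneous coordinates a translation, which is not a linear map of 3-vectors, becomes an ordinary matrix multiplication.

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous matrix that translates points by (tx, ty, tz)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

p = np.array([1.0, 2.0, 3.0, 1.0])   # the point (1, 2, 3) in homogeneous coordinates (w = 1)
T = translation(10.0, 0.0, -5.0)
print(T @ p)                          # [11.  2. -2.  1.]  ->  the translated point (11, 2, -2)

# No 3x3 matrix can do this, because a linear map always sends the origin to the origin,
# whereas a translation moves the origin.
```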
Once the concept of "transformation" is understood, the definition of the matrix becomes:
"A matrix is a description of a transformation in a linear space."
With this, we have at last arrived at a definition that looks like mathematics. But a few more words are needed. Textbooks usually say that a linear transformation T on a linear space V can, once a basis is selected, be represented by a matrix. So let us also make clear what a linear transformation is, what a basis is, and what "selecting a basis" means. The definition of a linear transformation is simple: a transformation T is called linear if, for any two objects x and y in a linear space V and any real numbers a and b, we have:
T(ax + by) = aT(x) + bT(y),
then T is called a linear transformation.
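A minimal numerical check of this definition (using NumPy; the matrix, vectors and scalars below are arbitrary): any map of the form T(x) = Ax satisfies it.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # some matrix; define T(x) = A x

def T(x):
    return A @ x

x, y = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.5, -1.3

# Linearity: T(a x + b y) == a T(x) + b T(y)
assert np.allclose(T(a * x + b * y), a * T(x) + b * T(y))
print("linearity holds")
```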
The definition is written that way, but the definition alone gives no intuitive feel for it. What kind of transformation is a linear transformation? As we just said, a transformation moves one point in space to another; a linear transformation moves a point of a linear space V to a point of another linear space W. Implicit in this sentence is that a point can be transformed not only to another point of the same linear space but also to a point of a different linear space. However you transform, as long as what is transformed are objects of a linear space, the transformation is a linear transformation and can certainly be described by a non-singular matrix; and a transformation described by a non-singular matrix is certainly a linear transformation. Some may ask: why emphasize non-singular matrices here? "Non-singular" only makes sense for square matrices, so what about non-square matrices? That would take longer to explain; ultimately one should treat a linear transformation as a mapping, discuss its mapping properties, and bring in the notions of the kernel and the image of a linear transformation, and then everything becomes completely clear. I don't think that is the main point here; if I have time later, I will write about it. Here we only discuss the most common and useful kind of transformation, namely linear transformations within one and the same linear space. In other words, unless otherwise stated, the matrices below are square, and in fact non-singular square matrices. When learning a subject, the most important thing is to grasp the trunk of the material and quickly build an overall picture of the body of knowledge, rather than getting bogged down from the start in all the details and special cases.
Next, what is a basis? This question will be discussed in detail later; for now it suffices to regard a basis as a coordinate system in a linear space. Note: a coordinate system, not coordinate values; the two are a "unity of opposites". Thus, "selecting a basis" means selecting a coordinate system in the linear space. That is all it means.
Well then, we finally refine the definition of the matrix as follows:
"Matrices are a description of linear transformations in linear spaces. In a linear space, as long as we select a set of bases, any linear transformation can be described with a definite matrix. ”
The key to understanding this sentence is to distinguish the "linear transformation" from "a description of the linear transformation". One is the object; the other is an expression of that object. It is just like in object-oriented programming: an object can have multiple references, each reference may go by a different name, but they all point to the same object. If that image doesn't help, here is a cruder analogy.
Suppose you have a pig and you want to photograph it. As long as you choose a camera position, you can take a picture of the pig. The picture can be regarded as a description of the pig, but only a partial one, because photographing the pig from a different camera position gives a different picture, which is another partial description of the pig. All these pictures describe the same pig, but none of them is the pig itself.
Likewise, for a linear transformation, as soon as you select a basis you can find a matrix that describes it; change the basis, and you get a different matrix. All these matrices are descriptions of the same linear transformation, but none of them is the linear transformation itself.
But then a problem arises. If you give me two pictures of pigs, how do I know they show the same pig? Likewise, if you give me two matrices, how do I know they describe the same linear transformation? If two matrices describing the same linear transformation were like brothers of one clan who fail to recognize each other when they meet, that would be a joke.
Fortunately, we can find a characteristic property shared by all the "matrix brothers" describing one and the same linear transformation, namely:
If matrices A and B are two different descriptions of the same linear transformation (different because different bases, i.e. different coordinate systems, were selected), then one can always find a non-singular matrix P such that A and B satisfy the relation:
A = P^{-1}BP
Readers a bit more familiar with linear algebra will recognize at once that this is the definition of similar matrices. Yes, so-called similar matrices are simply different descriptions of the same linear transformation. By this definition, photos of the same pig from different angles could also be called "similar photos". A bit vulgar, but it gets the point across.
In the equation above, the matrix P is in fact the change-of-basis relation between the basis underlying matrix A and the basis underlying matrix B. This conclusion can be proved in a very intuitive way (not the formal proof found in typical textbooks); if I have time, I will add this proof to my blog later.
This discovery is too important. So a family of similar matrices are all descriptions of one and the same linear transformation! No wonder similarity is so important! In graduate engineering courses such as matrix theory and matrix analysis, all kinds of similarity transformations are taught, such as similarity canonical forms and diagonalization, and they all require that the matrix obtained after the transformation be similar to the original one. Why this requirement? Because only this requirement guarantees that the matrices before and after the transformation describe the same linear transformation. Of course, among the different matrices describing the same linear transformation, not all are equally pleasant to work with; some description matrices have much nicer properties than others. That is easy to understand: among pictures of the same pig, some are prettier than others. So a similarity transformation can turn an ugly matrix into a prettier one, with the guarantee that both describe the same linear transformation.
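A small numerical sketch of this (using NumPy; the matrices are random and only for illustration): build B = P A P^{-1} from a change of basis P, check that A = P^{-1} B P, and check that quantities belonging to the transformation itself, such as the eigenvalues, agree.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))      # one description of a linear transformation
P = rng.standard_normal((3, 3))      # a change of basis (non-singular with probability 1)

B = P @ A @ np.linalg.inv(P)         # another description of the same transformation

assert np.allclose(A, np.linalg.inv(P) @ B @ P)   # A = P^{-1} B P: the two are similar

# The eigenvalues belong to the transformation, not to a particular description:
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))      # the same, up to floating-point error
```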
With this, one side of the matrix, namely the matrix as a description of a linear transformation, is basically clear. But things are not that simple; or rather, linear algebra has an even more wonderful property: a matrix can serve not only as a description of a linear transformation but also as a description of a basis. As a transformation, a matrix can not only move a point of a linear space to another point, it can also carry one coordinate system (basis) of the linear space over to another coordinate system (basis). Moreover, transforming a point and transforming a coordinate system have the same effect. The most interesting mysteries of linear algebra are contained here, and once these things are understood, many theorems and rules of linear algebra become much clearer and more intuitive.
I will leave this for the next installment.
Since I have other things to do, the next installment may have to wait a few days.
Three
At the end of Part Two, I said:
" A matrix can be described not only as a linear transformation, but as a group of bases. As a matrix of transformation, not only can one point in the linear space be transformed to another point, but also can transfer one coordinate system (base) table in linear space to another coordinate system (base). Moreover, the transformation point and the transformation coordinate system have the same effect. The most interesting mysteries of linear algebra are contained therein. In understanding these things, many theorems and rules in linear algebra become clearer and more intuitive.
Leave this in the next article and write it again.
because there are other things to do, the next one may be written in a few days. "
But this dragged on for a year and a half. Over that year and a half, these two rough and presumptuous articles were reposted everywhere, to the point that in Google's search suggestions my name and "matrix" became associated terms. For me, who has always been a poor student of mathematics, that is a frightening thing. What a brilliant and exquisite body of knowledge mathematics is! It represents the highest achievement of human wisdom, the language in which man converses with God. And I have not even entered the gate of mathematics; never mind deep understanding, I can rarely even solve a slightly difficult problem. What qualifications do I have to hold forth on an important mathematical concept like the matrix? Moreover, my ideas are intuitive ones, not necessarily correct; might they not do more harm than good? So, forget it, let it be, I thought.
It was the letters I kept receiving that gradually changed my mind.
Over that year and a half I received no fewer than a hundred direct letters asking me to write the remaining parts. Most of these letters were from netizens and students at home, and a few from friends studying abroad. Most offered encouragement, some made earnest requests, and a few sternly scolded me for not keeping my promise. Whatever the attitude, they showed support for my bit of thinking, in particular for my angle of thinking and my willingness to try. The letters let me know that although my level in mathematics is not high, my emphasis, from the perspective of an ordinary person (rather than a mathematician), on an intuitive understanding of mathematical concepts and rules had benefited many people. Perhaps this road is not the right road in mathematics and will not lead very far, but in any case, at a certain stage, for some people, compared with the approach generally taken by current mathematics textbooks, this approach may be easier to understand. Since it may help some people, I should not have too many scruples; I should keep thinking and summarizing.
So, here is the continuation you wrote in to ask for.
Let us first summarize the main conclusions of the previous two parts:
1. First there is space; a space can accommodate the motion of objects. One kind of space corresponds to one class of objects.
2. There is a kind of space called a linear space; a linear space accommodates the motion of vector objects.
3. This motion is instantaneous and is therefore also called a transformation.
4. A matrix is a description of motion (transformation) in a linear space.
5. Multiplying a matrix with a vector is the process of carrying out the motion (transformation).
6. The same transformation appears as different matrices in different coordinate systems, but their essence is the same, so their eigenvalues are the same.