1. Algebraic Systems
1.1 Operation Laws
We already know the concept of a function, which describes a mapping between sets. In most scenarios the domain and the codomain are the same set, and that is the case discussed here. A unary function \(f: A\mapsto A\) is also called a transformation on the set \(A\), and a bijective transformation is called a permutation. More generally, a function of the form below is called an \(n\)-ary operation on the set \(A\). A set \(S\) together with some operations \(f_1, f_2, \cdots, f_m\) on it is called an algebraic system; when no confusion arises, \(S\) alone may be used to denote the system. Algebraic systems allow us to set aside the concrete operands and focus only on the structures and properties they share.
\[f: A\times A\times\cdots\times A\mapsto A\tag{1}\]
Binary operations are the most common, for example addition and multiplication of various objects (numbers, vectors, polynomials, etc.) and composition of transformations. This text mainly studies algebraic systems with one binary operation, and the examples mainly come from number theory and permutations. The discussion below dwells on the ideas behind the analysis; these ideas are the essence of abstract algebra, while some of the proofs and intermediate results are less important. I hope that while learning you will regularly close the book, reconstruct these theories yourself, and experience the way of thinking of abstract algebra.
Let us first simplify the problem and study an algebraic system with only one binary operation. We need to study the formal properties of the operation itself, and also analyze the structural properties of the whole system. We use a dedicated notation \(a\circ b\) for the binary operation \(f(a,b)\) under study, sometimes abbreviated as \(ab\) and read as "multiplication"; the algebraic system is then written briefly as \(\langle S,\circ\rangle\). If there is another system \(\langle G,\star\rangle\) and a bijection \(f: S\mapsto G\) between them satisfying the formula below, the two systems are called isomorphic, written \(S\cong G\). Isomorphism is clearly an equivalence relation, and isomorphic algebraic systems can be regarded as exactly the same, with no essential difference.
\[f(a\circ b)=f(a)\star f(b)\tag{2}\]
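As a concrete illustration (a small sketch of my own, not part of the original argument), the following Python fragment checks formula (2) for the bijection \(f(k)=i^k\) between addition modulo \(4\) and the multiplicative group \(\{1,i,-1,-i\}\); since \(f\) is a bijection on four elements, the check confirms that the two systems are isomorphic.

```python
# A minimal sketch: verify f(a o b) == f(a) * f(b) for f(k) = i^k,
# where "o" is addition mod 4 and "*" is multiplication of complex units.
units = [1, 1j, -1, -1j]           # f(0), f(1), f(2), f(3)

for a in range(4):
    for b in range(4):
        assert units[(a + b) % 4] == units[a] * units[b]

print("f(k) = i^k is an isomorphism from <Z_4, +> onto {1, i, -1, -i}")
```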
Looking at the form of the operation, there are two important properties to study: one is how the operation "stacks" on itself, and the other is the position of the operands. "Stacking" means that an operand is itself the result of another operation, such as \((a\circ b)\circ(c\circ d)\). The operations of most objects we study satisfy the property in the formula below, called the associative law. The associative law is extremely common in mathematics and is a very basic law of operation, so we start here. It essentially says that the result depends only on the sequence of operands, not on the order in which the operations are carried out; intuitively, however parentheses are inserted into a string of operations, the result is the same. An algebraic system satisfying the associative law is called a semigroup. The structure of a semigroup alone is too simple to be very interesting; it must be combined with other properties.
\[(a\circ b)\circ c=a\circ(b\circ c)\tag{3}\]
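For a finite carrier set the associative law can simply be checked by brute force. The helper below is a minimal sketch (the function name and the example operations are my own choices), contrasting addition modulo \(5\) with subtraction modulo \(5\), which fails the law.

```python
# A small sketch: test the associative law (3) over a finite set,
# with operations given as Python callables (an assumption for illustration).
def is_associative(S, op):
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in S for b in S for c in S)

S = range(5)
print(is_associative(S, lambda a, b: (a + b) % 5))   # True: addition mod 5
print(is_associative(S, lambda a, b: (a - b) % 5))   # False: subtraction is not associative
```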
For many operations the result depends on the order of the operands: \(a\circ b\) and \(b\circ a\) are not necessarily equal, as with permutations and matrix multiplication. When the condition below does hold, it is called the commutative law of the operation. We have seen that the commutative law fails on many occasions, so in general we do not assume it holds; we must get used to this way of thinking. The commutative law makes the order of the operands irrelevant, and combined with the associative law it implies that the result depends only on which operands appear, which may then be arranged in any order.
\[a\circ b=b\circ a\tag{4}\]
1.2 Identity and Inverse Elements
The previous two paragraphs discussed the formal properties of the operation itself; these alone already give interesting algebraic systems, but now we impose some restrictions on the structure of the system. The structure is of course reflected in the relationship between operands and results. The first question is whether every element can appear as a result. The simplest requirement is that for an element \(a\) there exists an element \(e\) such that at least one of the equations below holds, and preferably this \(e\) works for all elements. Based on this requirement, an element satisfying the first (second) formula below for every \(a\) is called a left (right) identity. A left (right) identity need not exist, but if both exist then \(e_l=e_l\circ e_r=e_r\): they are equal! In that case the element is simply called the identity (and it is obviously unique), and a semigroup containing an identity is called a monoid (unital semigroup).
\[e_l\circ a=a,\quad a\circ e_r=a\tag{5}\]
The identity fulfils our simple goal: any element can appear as the result of an operation. Now we make a very natural further request: the linear equations in formula (6) should always have solutions. You have to admit this is not an excessive requirement, because a system in which such equations cannot be solved is very hard to work with. For \(a\circ x=b\) to have a solution, the most intuitive method is to be able to "divide" both sides by \(a\), that is, to multiply by an inverse \(a_l^{-1}\) of \(a\) and get \(x=a_l^{-1}\circ b\). In other words, we need inverses satisfying formula (7); the elements satisfying these conditions are called a left inverse and a right inverse of \(a\), respectively.
\[a\circ x=b,\quad y\circ a=b\tag{6}\]
\[a_l^{-1}\circ a=e,\quad a\circ a_r^{-1}=e\tag{7}\]
If a left inverse and a right inverse both exist, then \(a_l^{-1}=a_l^{-1}\circ(a\circ a_r^{-1})=(a_l^{-1}\circ a)\circ a_r^{-1}=a_r^{-1}\), so they are equal; the common element is then simply called the inverse of \(a\) (and is clearly unique). Formula (8) shows that \(a\) is in turn the inverse of \(a^{-1}\), and the two commute with each other. It is also easy to prove the property in formula (9), whose form should look familiar.
\[a\circ a^{-1}=a^{-1}\circ a=e,\quad (a^{-1})^{-1}=a\tag{8}\]
\[(a\circ b)^{-1}=b^{-1}\circ a^{-1}\tag{9}\]
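The reversal of order in formula (9) only matters when the operation is noncommutative. The following fragment is a sketch of my own, using \(2\times 2\) integer matrices of determinant \(1\); it checks both that formula (9) holds and that the un-reversed product generally differs.

```python
# A sketch: check (ab)^{-1} = b^{-1} a^{-1} in a noncommutative group,
# namely 2x2 integer matrices of determinant 1 under multiplication.
def mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def inv(X):                       # inverse of a determinant-1 matrix
    return [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]

a = [[1, 1], [0, 1]]
b = [[1, 0], [1, 1]]
assert inv(mul(a, b)) == mul(inv(b), inv(a))      # formula (9)
assert inv(mul(a, b)) != mul(inv(a), inv(b))      # the order of the factors matters
```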
The existence of inverses makes "division" possible and suddenly gives the system a much richer structure. The most typical property is that when \(x\) runs over the whole group, so does \(ax\) (or \(xa\)). Indeed, if \(ax=ay\), multiplying both sides by \(a^{-1}\) gives \(x=y\). This property is called the cancellation law; if the whole operation is written out as a table, every row and every column of the table contains every element of the group, with no repetitions. This property is very important and will reappear later.
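As a quick numerical illustration (my own sketch, not from the text), the fragment below builds the multiplication table of the nonzero residues modulo \(7\) and confirms that every row and column contains each element exactly once.

```python
# A small sketch: in a group table every row and every column is a
# permutation of the whole group, as the cancellation law predicts.
G = list(range(1, 7))                       # nonzero residues mod 7
table = [[a * b % 7 for b in G] for a in G]

rows_ok = all(sorted(row) == G for row in table)
cols_ok = all(sorted(col) == G for col in zip(*table))
print(rows_ok and cols_ok)                  # True: the table is a "Latin square"
```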
2. Groups
2.1 Groups and Subgroups
A monoid in which every element has an inverse is called a group; our protagonist has arrived. To sum up, an algebraic system with the three basic properties of the associative law, an identity element, and inverse elements is a group, usually denoted by letters such as \(G,H,K\). If the commutative law also holds, it is called a commutative group (or abelian group). The number of elements \(|G|\) of the underlying set is called the order of the group; obviously there are finite groups and infinite groups. With these three properties, especially the existence of inverses, a group has a very interesting structure, which will unfold slowly below.
It is worth mentioning that the conditions on the identity and the inverses are somewhat redundant: many textbooks only require a group to satisfy the associative law and to possess a left identity and left inverses (or a right identity and right inverses). We can prove that this is equivalent to the original definition. Suppose that for every \(a\) there exist \(e_l\) and \(a_l^{-1}\) with \(e_l\circ a=a\) and \(a_l^{-1}\circ a=e_l\); we verify the existence of \(e_r\) and \(a_r^{-1}\). First write \(a'=(a_l^{-1})_l^{-1}\); then \(a\circ a_l^{-1}=(a'\circ a_l^{-1})\circ(a\circ a_l^{-1})=a'\circ(a_l^{-1}\circ a)\circ a_l^{-1}=a'\circ e_l\circ a_l^{-1}=e_l\), and hence \(a\circ e_l=a\circ(a_l^{-1}\circ a)=(a\circ a_l^{-1})\circ a=e_l\circ a=a\). So \(e_l\) is also a right identity and, by the earlier discussion, is the identity \(e\). Then \(a\circ a_l^{-1}=e_l=e\) shows that \(a_l^{-1}\) is also a right inverse, so the inverse \(a^{-1}\) exists.
It is also worth noting that the solvability of the equations in (6) involves no notion of identity or inverse, so it is not automatically equivalent to their existence. In general the equivalence does not hold, but in some cases it does; consider the following exercises.
• A semigroup in which the equations (6) are always solvable is a group. (Hint: prove that an identity and inverses exist.)
• A finite semigroup satisfying the left and right cancellation laws is a group. (Hint: use the conclusion of the previous exercise.)
Examples of groups are everywhere: obviously any number system under addition, the positive real numbers under multiplication, matrices under addition, and invertible matrices under multiplication are all groups. So are the transformations mentioned above and the multiplication of the reduced residue systems seen in elementary number theory; it is easy to verify that these are groups. There are also some well-known groups with very few elements whose structure is nevertheless not simple and whose applications are very wide. One example is the famous quaternion group \(\{\pm 1,\pm i,\pm j,\pm k\}\), whose multiplication obeys the table below; its elements are the quaternion units, which generalize the complex number system (quaternions may be introduced later).
|        | \(1\)  | \(i\)  | \(j\)  | \(k\)  |
| ------ | ------ | ------ | ------ | ------ |
| \(1\)  | \(1\)  | \(i\)  | \(j\)  | \(k\)  |
| \(i\)  | \(i\)  | \(-1\) | \(k\)  | \(-j\) |
| \(j\)  | \(j\)  | \(-k\) | \(-1\) | \(i\)  |
| \(k\)  | \(k\)  | \(j\)  | \(-i\) | \(-1\) |
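To cross-check the table, here is a small Python sketch of my own; the encoding of elements as sign/unit pairs is just an illustrative choice, not a representation used by the text.

```python
# A sketch of the quaternion group Q8: elements are pairs (sign, unit) with
# unit in {'1','i','j','k'}; the dict encodes the unit products from the table.
UNIT = {('i','i'): (-1,'1'), ('i','j'): ( 1,'k'), ('i','k'): (-1,'j'),
        ('j','i'): (-1,'k'), ('j','j'): (-1,'1'), ('j','k'): ( 1,'i'),
        ('k','i'): ( 1,'j'), ('k','j'): (-1,'i'), ('k','k'): (-1,'1')}

def q_mul(a, b):
    (sa, ua), (sb, ub) = a, b
    if ua == '1': s, u = 1, ub
    elif ub == '1': s, u = 1, ua
    else: s, u = UNIT[(ua, ub)]
    return (sa * sb * s, u)

Q8 = [(s, u) for s in (1, -1) for u in '1ijk']
# the operation is closed on the eight elements, and a few table entries check out:
assert all(q_mul(x, y) in Q8 for x in Q8 for y in Q8)
assert q_mul((1,'i'), (1,'j')) == (1,'k') and q_mul((1,'j'), (1,'i')) == (-1,'k')
```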
Another example is the Klein four-group \(K_4=\{1,i,j,k\}\) with the table below. All the groups presented in this post are typical examples for the later discussion; you should savour their features and bring them back to mind as the discussion proceeds.
|        | \(1\) | \(i\) | \(j\) | \(k\) |
| ------ | ----- | ----- | ----- | ----- |
| \(1\)  | \(1\) | \(i\) | \(j\) | \(k\) |
| \(i\)  | \(i\) | \(1\) | \(k\) | \(j\) |
| \(j\)  | \(j\) | \(k\) | \(1\) | \(i\) |
| \(k\)  | \(k\) | \(j\) | \(i\) | \(1\) |
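A handy way to see this group concretely (my own identification, not mentioned in the text) is bitwise XOR on \(\{0,1,2,3\}\), which reproduces the table above.

```python
# A sketch: the Klein four-group realized as {0, 1, 2, 3} under bitwise XOR,
# with 1, i, j, k corresponding to 0, 1, 2, 3.
K4 = [0, 1, 2, 3]
labels = {0: '1', 1: 'i', 2: 'j', 3: 'k'}

for a in K4:
    print([labels[a ^ b] for b in K4])   # prints the four rows of the table
assert all(a ^ a == 0 for a in K4)       # every element is its own inverse
```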
Having said all this, we have only defined groups; the task ahead is to study their structure so as to obtain useful properties. The most common method of structural analysis is of course decomposition: breaking a large, complex object into simple small ones makes the structure naturally clear. In the same spirit, we hope to dismantle a group into smaller groups with simpler structure, and this goal permeates the whole of group theory. It is natural to first define such a "small group": it must be a subset of the group and must itself form a group under the same operation; such subsets are called subgroups.
If \(H\) is a subgroup of \(G\), one writes \(H\leqslant G\); obviously \(\{e\}\) and \(G\) itself are subgroups of \(G\), called the trivial subgroups. If \(H\neq G\), then \(H\) is called a proper subgroup of \(G\), written \(H<G\). Since a subgroup fully inherits the operation of the parent group, the associative law automatically holds, and the identity and inverses do not change. The only requirements are that the subgroup is not missing anything: the necessary elements (the identity and the inverses) must be present, and the operation must be closed within the subgroup. Writing these conditions as formulas gives a strict definition: a non-empty subset \(H\) of \(G\) is a subgroup of \(G\) if the conditions in formula (10) hold. It is easy to prove that these three conditions are in fact equivalent to the single condition in formula (11), which is generally used as the subgroup criterion.
\[H\leqslant G\quad\leftrightarrow\quad e\in H\:\wedge\:(\forall a\in H\rightarrow a^{-1}\in H)\:\wedge\:(\forall a,b\in H\rightarrow ab\in H)\tag{10}\]
\[H\leqslant G\quad\leftrightarrow\quad\forall a,b\in H\rightarrow ab^{-1}\in H\tag{11}\]
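Criterion (11) translates directly into a finite check. The helper below is a minimal sketch under the assumption that the ambient group is the additive group of residues modulo \(12\), where \(ab^{-1}\) becomes \(a-b\).

```python
# A sketch of the subgroup criterion (11): a non-empty subset H of a finite
# group is a subgroup iff a * b^{-1} lies in H for all a, b in H.
# Here the group is Z_12 under addition, so "b^{-1}" is -b mod 12.
def is_subgroup(H, n=12):
    H = set(H)
    return bool(H) and all((a - b) % n in H for a in H for b in H)

print(is_subgroup({0, 3, 6, 9}))   # True: the multiples of 3 form a subgroup
print(is_subgroup({0, 3, 5}))      # False: the criterion fails, e.g. 3 - 5 = 10 is missing
```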
What if a subset \(M\) does not satisfy the subgroup conditions? We can of course add the necessary elements until the conditions are met; the smallest subgroup obtained this way is called the subgroup generated by \(M\), written \(\langle M\rangle\). An exact definition can be given: \(\langle M\rangle\) is the intersection of all subgroups of \(G\) containing \(M\). A subgroup generated by a single element \(a\) is called a cyclic group \(\langle a\rangle\), and \(a\) is called its generator. Obviously the additive group of integers and a reduced residue system possessing a primitive root are cyclic groups, and a cyclic group is obviously commutative.
2.2 Cyclic Groups
Although subgroups are now defined, the task of decomposing groups remains heavy, so let us take a break and study the simplest case, the cyclic group. A cyclic group consists of nothing but elements like \(\cdots,a^{-1}a^{-1},a^{-1},e,a,aa,\cdots\). We can introduce the exponential notation \(a^n\) for the elements of a cyclic group, and you can prove that it satisfies all the usual laws of exponents (formulas (12) and (13)).
\[a^0=e,\quad a^n=a^{n-1}a,\quad a^{-n}=(a^{-1})^n=(a^n)^{-1}\tag{12}\]
\[a^{m+n}=a^ma^n,\quad a^{mn}=(a^m)^n\tag{13}\]
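In computations the exponent notation is usually evaluated by repeated squaring, which relies on nothing but the associative law. The helper below is a sketch of my own; `op` and `e` stand for an arbitrary associative operation and its identity.

```python
# A sketch: compute a^n (n >= 0) in a monoid using only the associative law,
# by repeated squaring.
def power(a, n, op, e):
    result = a if n & 1 else e
    while n > 1:
        a, n = op(a, a), n >> 1            # square the base, halve the exponent
        if n & 1:
            result = op(result, a)
    return result

# usage: 3^10 in the multiplicative monoid of integers modulo 1000
print(power(3, 10, lambda x, y: x * y % 1000, 1))   # 49, since 3^10 = 59049
```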
In any group, if there is a smallest \(n>0\) such that \(a^n=e\), then \(n\) is called the order of \(a\), written \(|a|\). If no such \(n\) exists, the order of \(a\) is said to be infinite, written \(|a|=\infty\). The properties of the order are exactly the same as those of the multiplicative order discussed in elementary number theory; they are not repeated here, so look back if needed.
In a cyclic group \(\langle a\rangle\), if \(|a|=n\), the group \(a,a^2,\cdots,a^n\) is obviously isomorphic to the additive group of residues modulo \(n\), and it has \(\varphi(n)\) generators (the reduced residue systems with a primitive root seen in number theory are cyclic groups of exactly this kind). When the order of \(a\) is infinite, \(\cdots,a^{-2},a^{-1},e,a,a^2,\cdots\) is isomorphic to the additive group of integers, and only \(a\) and \(a^{-1}\) are generators. The following exercises on orders and subgroups are somewhat difficult but quite thought-provoking; a small computational sketch of a concrete cyclic group follows the list:
• A non-empty finite subset \(H\) of a group is a subgroup if and only if \(ab\in H\) for all \(a,b\in H\);
• Prove: \(|a|=|a^{-1}|=|cac^{-1}|\), \(|ab|=|ba|\), \(|abc|=|bca|=|cab|\);
• Prove: the number of elements of order greater than \(2\) in a finite group is even;
• If \(H<G\), prove that \(\langle G-H\rangle=G\).
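Here is the promised sketch of a concrete cyclic group (my own example, not from the text): the subgroup generated by \(3\) in the reduced residue system modulo \(7\), together with its order and its \(\varphi(6)=2\) generators.

```python
# A sketch: the cyclic group generated by 3 in the reduced residue system mod 7.
from math import gcd

p, a = 7, 3
cyclic, x = [], 1
while True:                       # list the powers a, a^2, ..., a^n = e
    x = x * a % p
    cyclic.append(x)
    if x == 1:
        break

n = len(cyclic)                   # the order |a|
generators = [a**k % p for k in range(1, n + 1) if gcd(k, n) == 1]
print(n, generators)              # 6 and the phi(6) = 2 generators [3, 5]
```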
2.3 Permutation Groups
Having discussed the simplest groups, we now look at the most "complete" ones. We saw earlier that any element \(a\) of a group \(G\) makes \(aG\) run over the whole group, which corresponds to a bijective transformation of \(G\). It is easy to prove that all bijective transformations of a set \(M\) form a group under composition, called the symmetric group on \(M\) and written \(S(M)\), and that \(G\) is isomorphic to a subgroup of \(S(G)\). When \(|M|=n\), the symmetric group is also written \(S_n\) and called the symmetric group of degree \(n\). Thus every group of order \(n\) is isomorphic to a subgroup of \(S_n\), and every infinite group \(G\) is isomorphic to a subgroup of \(S(G)\) (Cayley's theorem).
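The construction behind Cayley's theorem is easy to carry out explicitly. The fragment below is a minimal sketch (using the additive group of residues modulo \(4\) as an assumed example) that maps each element \(a\) to the transformation \(x\mapsto a\circ x\) and checks that every image really is a permutation of the group.

```python
# A sketch of the regular representation: each element a of a finite group G
# is sent to the permutation x -> a o x, here with G = Z_4 under addition.
G = list(range(4))
op = lambda a, x: (a + x) % 4

regular = {a: tuple(op(a, x) for x in G) for a in G}
print(regular)
# every image is indeed a permutation of G (no repeats in any row):
assert all(sorted(perm) == G for perm in regular.values())
```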
In this way we can study general groups by discussing the subgroups of symmetric groups. Subgroups of symmetric groups are called permutation groups (because their elements are permutations), and subgroups of \(S_n\) are called permutation groups of degree \(n\); here we only discuss the latter. Number the elements of the set \(1,2,\cdots,n\); then each permutation \(\sigma(x)\) can be written as below, and changing the order of the columns does not change the permutation.
\[\begin{pmatrix}1&2&\cdots&n\\\sigma(1)&\sigma(2)&\cdots&\sigma(n)\end{pmatrix}\tag{14}\]
Examine the orbit of a point under a permutation: \(1,\sigma(1),\sigma(\sigma(1)),\cdots\). It is easy to prove that this sequence eventually returns to \(1\), forming a cycle. Obviously any permutation is composed of several disjoint cycles, which deserve further study. Each cycle can itself be regarded as a permutation, one that maps every value outside the cycle to itself. If the cycle contains \(k\) elements, such a permutation is called a \(k\)-cycle (cyclic permutation); in particular a \(2\)-cycle is also called a transposition. A cyclic permutation can be written as below, where \(\sigma(a_k)=a_1\) and \(\sigma(a_i)=a_{i+1}\); its order is obviously \(k\).
\[\sigma=(a_1a_2\cdots a_k)=(a_2a_3\cdots a_1)=\cdots=(a_ka_1\cdots a_{k-1})\tag{15}\]
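Following the orbit \(1,\sigma(1),\sigma(\sigma(1)),\cdots\) mechanically yields the cycle decomposition. The helper below is a sketch of my own; the permutation is assumed to be given as a Python dict on \(\{1,\cdots,n\}\).

```python
# A sketch: decompose a permutation into disjoint cycles by following orbits.
def disjoint_cycles(sigma):
    seen, cycles = set(), []
    for start in sigma:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:          # follow the orbit until it closes
            seen.add(x)
            cycle.append(x)
            x = sigma[x]
        if len(cycle) > 1:            # 1-cycles (fixed points) are omitted
            cycles.append(tuple(cycle))
    return cycles

sigma = {1: 3, 2: 2, 3: 5, 4: 1, 5: 4}
print(disjoint_cycles(sigma))         # [(1, 3, 5, 4)]; 2 is a fixed point
```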
Thus any permutation can be decomposed uniquely into a product of disjoint cycles. Moreover, disjoint cycles obviously commute, so the order of the cycles in the decomposition is arbitrary. In addition, the formula below is easy to verify: a cycle can be decomposed into a product of transpositions (which do not commute), so any permutation can be decomposed into a product of transpositions. Here you need to be clear that a permutation is a mapping and a transposition acts on the images, not directly on the pair of numbers; otherwise you will find the formula below confusing (it reads opposite to what you might expect).
\[(a_1a_2\cdots a_k)=(a_1a_k)(a_1a_{k-1})\cdots(a_1a_2)\tag{16}\]
Having decomposed as far as possible, we cannot help asking: if a permutation has different decompositions into transpositions, what is the relationship between the numbers of transpositions? We need an invariant to connect them, and this value can only come from the permutation \(\sigma\) itself. For a pair \(i<j\), if \(\sigma(i)>\sigma(j)\), the pair \((i,j)\) is called an inversion. The total number of inversions is fixed by \(\sigma\); a permutation with an odd number of inversions is called an odd permutation, otherwise an even permutation. You can prove that multiplying a permutation by any transposition changes its parity. Since, by the decomposition above, every permutation is obtained from the identity by applying a sequence of transpositions, the numbers of transpositions in different decompositions must have the same parity.
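As a small consistency check (my own sketch), the fragment below computes the inversion count of the \(k\)-cycle \((12\cdots k)\) and compares its parity with the \(k-1\) transpositions produced by formula (16).

```python
# A sketch: the parity of the inversion count agrees with the parity of the
# number of transpositions in decomposition (16).
def inversions(perm):                  # perm[i] = sigma(i+1), a list of 1..n
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

# the k-cycle (1 2 ... k) decomposes into k-1 transpositions by formula (16);
# as a list on {1, ..., k} it reads [2, 3, ..., k, 1]
for k in range(2, 8):
    cycle = list(range(2, k + 1)) + [1]
    assert inversions(cycle) % 2 == (k - 1) % 2
print("parity of the inversion count matches the transposition count")
```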
Parity is a sign-like property of permutations: under multiplication, parities combine exactly like the signs of positive and negative numbers. By multiplying by a fixed odd permutation, you can pair the even permutations with the odd permutations one to one, so each kind makes up half of \(S_n\). It is also easy to see that the even permutations are closed under the operation, so they form a group, called the alternating group of degree \(n\) and written \(A_n\). Consider the following exercises:
• Prove \(\sigma\tau\sigma^{-1}=\begin{pmatrix}\sigma(1)&\sigma(2)&\cdots&\sigma(n)\\\sigma(\tau(1))&\sigma(\tau(2))&\cdots&\sigma(\tau(n))\end{pmatrix}\);
• Prove that \(\{(12),(13),\cdots,(1n)\}\) and \(\{(12),(12\cdots n)\}\) are both generating sets of \(S_n\).
"Abstract algebra" 02-Algebra and Group