The combinatorics of Young tableaux is one of the most beautiful parts of algebraic combinatorics, with rich and profound connections to representation theory, statistical mechanics, and probability theory. This article starts from several interesting problems and leads the reader into this beautiful field. Very little background is required: having learned linear algebra is enough to truly appreciate the beauty of the subject.
One thing should be said up front: none of the theorems proved in this article is original; they are all famous results in algebraic combinatorics.
Let's take a look at several questions:
There are $m$ candidates in a presidential election, with $\lambda_1, \ldots, \lambda_m$ supporters respectively (assume every voter votes for the candidate he supports), where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$. The ballots are now counted one by one. Question: in how many different counting orders is it true that, for every $i < j$, the running tally of candidate $i$ never falls behind that of candidate $j$?
During a military parade, a group of soldiers stands in an $m \times n$ rectangular array, their heights pairwise distinct. Question: how many arrangements are there in which heights increase from left to right along each row and from top to bottom down each column?
Stack boxes into the corner of a room of size $a \times b \times c$. The stacking must satisfy a monotonicity condition: starting from the stack in the corner, the stack heights weakly decrease from left to right along each row and weakly decrease down each column. Question: how many stackings satisfy the requirement? (Note the difference from the previous problem: there the number of people is fixed and the heights are distinct; here the number of boxes is not fixed and the heights need only weakly decrease.)
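To make the first problem concrete, here is a minimal brute-force count (the function name `ballot_orders` and the tiny examples are mine, not from the original article); only usable for very small supporter counts:

```python
from itertools import permutations

def ballot_orders(supporters):
    """Count orders of counting the ballots such that after every ballot,
    candidate i's running tally is at least candidate j's whenever i < j.
    Pure brute force over distinct orderings; only for tiny inputs."""
    ballots = []
    for cand, count in enumerate(supporters):
        ballots.extend([cand] * count)
    m = len(supporters)
    valid = 0
    for order in set(permutations(ballots)):
        tally = [0] * m
        ok = True
        for vote in order:
            tally[vote] += 1
            # tallies weakly decreasing in adjacent pairs <=> for all i < j
            if any(tally[i] < tally[i + 1] for i in range(m - 1)):
                ok = False
                break
        if ok:
            valid += 1
    return valid

print(ballot_orders((2, 1)))   # 2 (the orders 1,1,2 and 1,2,1)
print(ballot_orders((3, 2)))   # 5
```

The answers produced here, 2 and 5, will reappear later as numbers of standard Young tableaux.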
Although these problems look elementary, the answers are anything but simple. In fact, all three problems have many solutions, but none of them is truly "elementary": I do not believe that a non-mathematician (even a highly educated one) could quickly understand and accept their proofs.
The best way to treat them in a unified manner is the theory of Schur polynomials, which we now introduce.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~
Combinatorial definition of Schur polynomials
First we introduce some common concepts from combinatorics.
Let $\lambda = (\lambda_1, \lambda_2, \cdots)$ be a sequence of non-negative integers, only finitely many of which are nonzero. If $\lambda_1 \geq \lambda_2 \geq \cdots$ holds, then $\lambda$ is called an integer partition. Write $|\lambda| = \sum_{i=1}^\infty \lambda_i$; $|\lambda|$ is always finite, and if $|\lambda| = n$ we call $\lambda$ a partition of the integer $n$, written $\lambda \vdash n$.
For each partition $\lambda$ we can represent it by a diagram $F_\lambda$:
In the figure, $\lambda = (5, 4, 3, 2, 1)$. Note the arrangement rule: the first row has $\lambda_1$ squares, the second row has $\lambda_2$ squares, and so on, with each row left-aligned. $F_\lambda$ is called the Ferrers diagram of the partition $\lambda$.
If we fill each square of $F_\lambda$ with a number (or a variable), the resulting figure is called a Young tableau:
This definition of a Young tableau is too broad: any figure obtained by writing numbers into a Ferrers diagram qualifies. What we are really interested in are Young tableaux satisfying certain constraints. This is the key definition below:
Definition: Let $T$ be a Young tableau of shape $\lambda$, where $\lambda \vdash n$.
If each row of $T$ is weakly increasing from left to right, and each column is strictly increasing from top to bottom, then $T$ is a semistandard Young tableau.
If $T$ is a semistandard Young tableau and its entries are exactly the set $\{1, 2, \ldots, n\}$ (each used once), then $T$ is a standard Young tableau.
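As a sanity check on these definitions, here is a brute-force enumeration of standard Young tableaux of a small shape (a sketch; the function name is mine, and this only works for tiny shapes):

```python
from itertools import permutations

def standard_tableaux(shape):
    """All standard Young tableaux of a given shape, by brute force:
    place 1..n row by row and keep fillings whose rows and columns
    both strictly increase."""
    n = sum(shape)
    result = []
    for perm in permutations(range(1, n + 1)):
        it = iter(perm)
        rows = [[next(it) for _ in range(length)] for length in shape]
        rows_ok = all(row[j] < row[j + 1]
                      for row in rows for j in range(len(row) - 1))
        cols_ok = all(rows[i][j] < rows[i + 1][j]
                      for i in range(len(rows) - 1)
                      for j in range(len(rows[i + 1])))
        if rows_ok and cols_ok:
            result.append(rows)
    return result

print(len(standard_tableaux((2, 1))))  # 2
print(len(standard_tableaux((3, 2))))  # 5
```

These match the counts from the ballot problem, which is no accident: reading off, for each $k$, the row in which $k$ is placed turns a standard tableau into a valid counting order and vice versa.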
You have probably realized that the three problems at the beginning of this article are essentially asking for the number of semistandard/standard Young tableaux. What should we do? The method is the familiar generating-function trick: assign a weight to each semistandard/standard Young tableau $T$, sum the weights to obtain a weight function, and then look for a functional equation / recurrence / ... that this weight function satisfies. We have known this trick for a long time — we used it when deriving the closed form of the Fibonacci sequence, didn't we?
Let us put this idea into practice. For a Young tableau $T$, write $w(T) = (w_1(T), w_2(T), \cdots)$, where $w_k(T)$ is the number of $k$'s in $T$. The vector $w(T)$ is called the weight of $T$. We adopt the following convention: if $\alpha = (\alpha_1, \alpha_2, \ldots)$ is a sequence of non-negative integers (only finitely many nonzero), write
\[ x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots. \]
Definition [combinatorial definition of Schur polynomials]: Let $\lambda$ be a partition. Define the polynomial in infinitely many variables $x_1, \ldots, x_n, \ldots$
\[ s_\lambda(x_1, \ldots, x_n, \ldots) = \sum_{T} x^{w(T)}. \]
Here $T$ runs over all semistandard Young tableaux obtained by filling $F_\lambda$ with entries from $1, 2, \ldots, n, \ldots$.
Note that the Schur polynomial defined here contains infinitely many monomials; each monomial has degree $|\lambda|$, and the coefficient of each monomial is finite.
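Truncating to finitely many variables, the definition can be computed directly by enumerating semistandard tableaux. A minimal sketch (function names `ssyt` and `schur` are mine); it returns the polynomial as a map from weight vectors to coefficients:

```python
from itertools import product

def ssyt(shape, nvars):
    """All semistandard Young tableaux of the given shape with entries in
    {1,...,nvars}: rows weakly increase, columns strictly increase."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    for filling in product(range(1, nvars + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        if all(T[i, j] <= T[i, j + 1] for (i, j) in cells if (i, j + 1) in T) and \
           all(T[i, j] < T[i + 1, j] for (i, j) in cells if (i + 1, j) in T):
            yield T

def schur(shape, nvars):
    """Schur polynomial in nvars variables as {weight tuple: coefficient}."""
    poly = {}
    for T in ssyt(shape, nvars):
        w = [0] * nvars
        for v in T.values():
            w[v - 1] += 1
        poly[tuple(w)] = poly.get(tuple(w), 0) + 1
    return poly

# s_(2,1)(x1, x2) = x1^2 x2 + x1 x2^2
print(schur((2, 1), 2))
```

Note how the output is already symmetric in the variables — the fact we prove next.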
The combinatorial definition of Schur polynomials is the easiest to understand and accept, but by itself it tells us very little. For this reason we need to find other manifestations of Schur polynomials. First we prove that Schur polynomials are always symmetric.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
The Bender–Knuth involution
In this section we prove that Schur polynomials are symmetric. It suffices to prove that for any $i$, $s_\lambda$ is unchanged when $x_i$ and $x_{i+1}$ are exchanged, and for this it suffices to exhibit a bijection between semistandard Young tableaux of weight $(w_1, \ldots, w_i, w_{i+1}, \ldots)$ and those of weight $(w_1, \ldots, w_{i+1}, w_i, \ldots)$. Couldn't we just replace every $i$ in a tableau of shape $\lambda$ by $i+1$ and every $i+1$ by $i$?
The problem is that after this swap we may no longer have a semistandard Young tableau. Call an $i$ matched if the square directly below it contains an $i+1$ (and call that $i+1$ matched as well); an $i$ with no $i+1$ below it, or an $i+1$ with no $i$ above it, is unmatched. The following fact is not hard to verify:
In any row of $T$, no matched entry can lie between two unmatched entries. That is, within a row, the unmatched entries always form a contiguous block.
Suppose that in some row the unmatched block consists of $r$ copies of $i$ followed by $s$ copies of $i+1$. Replace this block by $s$ copies of $i$ followed by $r$ copies of $i+1$, leaving the rest of $T$ unchanged. This transformation turns $T$ into a tableau $T^\ast$ that is still semistandard, and if $T$ has weight $(\ldots, w_i, w_{i+1}, \ldots)$ then $T^\ast$ has weight $(\ldots, w_{i+1}, w_i, \ldots)$. The transformation is clearly an involution: $(T^\ast)^\ast = T$. This proves that Schur polynomials are symmetric.
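The involution just described can be written out directly. A sketch in Python (the function name is mine); `bender_knuth(T, i)` swaps the multiplicities of $i$ and $i+1$ while preserving semistandardness:

```python
def bender_knuth(T, i):
    """Bender-Knuth involution on a semistandard tableau T (list of rows):
    swaps the multiplicities of i and i+1, keeping T semistandard."""
    T = [row[:] for row in T]

    def entry(r, c):
        return T[r][c] if 0 <= r < len(T) and 0 <= c < len(T[r]) else None

    for r, row in enumerate(T):
        # an i is matched if an i+1 sits directly below it;
        # an i+1 is matched if an i sits directly above it
        free = [c for c, v in enumerate(row)
                if (v == i and entry(r + 1, c) != i + 1)
                or (v == i + 1 and entry(r - 1, c) != i)]
        ni = sum(1 for c in free if row[c] == i)   # "r" unmatched i's ...
        nj = len(free) - ni                        # ... then "s" unmatched (i+1)'s
        # rewrite the contiguous free block as s copies of i, then r of i+1
        for k, c in enumerate(free):
            row[c] = i if k < nj else i + 1
    return T

T = [[1, 1, 1, 2], [2, 3]]
U = bender_knuth(T, 1)
print(U)                        # [[1, 1, 2, 2], [2, 3]]: weights of 1, 2 swapped
print(bender_knuth(U, 1) == T)  # True: it is an involution
```

Here $T$ has two 2's and three 1's; the image $U$ has three 2's and two 1's, and applying the map again recovers $T$.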
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~
Determinantal form of Schur polynomials
Let $\lambda = (\lambda_1, \ldots, \lambda_n) \in \mathbb{Z}^n_{\geq 0}$, and write
\[ a_\lambda = \det(x_i^{\lambda_j}) = \sum_{\sigma \in S_n} \text{sgn}(\sigma)\, x^{\sigma(\lambda)}. \]
Define $\delta = (n-1, n-2, \ldots, 1, 0)$.
For partitions $\nu \subseteq \mu$, the skew shape $\mu/\nu$ is the diagram obtained by removing $F_\nu$ from $F_\mu$, and the skew Schur polynomial $s_{\mu/\nu}$ is defined, just as before, as $\sum_T x^{w(T)}$ over semistandard tableaux of shape $\mu/\nu$. Let $T$ be a Young tableau of shape $\mu/\nu$, and define $T_{\geq j}$ to be the part of $T$ consisting of columns $j, j+1, \ldots$; the meanings of $T_{<j}$ and $T_{>j}$ are analogous. Call $T$ a "good guy" (relative to the partition $\lambda$) if for every $j$, $\lambda + w(T_{\geq j})$ is weakly decreasing, i.e. a partition; otherwise call it a "bad guy".
Theorem: \[ a_{\lambda+\delta}\, s_{\mu/\nu} = \sum_{T} a_{\lambda + w(T) + \delta}. \]
Here the sum runs over the "good guys".
At first glance this theorem is unintuitive, both in statement and in proof, but its conclusion can fairly be called the most important one in this article: it is in essence the Littlewood–Richardson rule. The proof below has a deep origin (crystal graphs in Lie algebra theory).
Proof: From the Bender–Knuth proof of the symmetry of Schur polynomials, we know that for any permutation $\sigma \in S_n$,
\[ s_{\mu/\nu} = \sum_{T \in \mathcal{T}_{\mu/\nu}} x^{w(T)} = \sum_{T \in \mathcal{T}_{\mu/\nu}} x^{\sigma \cdot w(T)}. \]
Therefore
\[ a_{\lambda+\delta}\, s_{\mu/\nu} = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \sum_{T \in \mathcal{T}_{\mu/\nu}} x^{\sigma \cdot (\lambda + \delta) + w(T)} = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \sum_{T \in \mathcal{T}_{\mu/\nu}} x^{\sigma \cdot (\lambda + \delta + w(T))} = \sum_{T \in \mathcal{T}_{\mu/\nu}} a_{\lambda + w(T) + \delta}. \]
Here the last sum runs over all $T$. We need to show that the "bad guys" can be paired up so that the corresponding determinants have opposite signs and cancel; the theorem then follows.
If $T$ is a bad guy, there is some $j$ for which $\lambda + w(T_{\geq j})$ is not a partition; choose the largest such $j$. Having chosen $j$, since $\lambda + w(T_{\geq j})$ is not a partition, there is some $k$ with
\[ \lambda_k + w_k(T_{\geq j}) < \lambda_{k+1} + w_{k+1}(T_{\geq j}). \]
Choose the smallest such $k$.
What can we infer from this information? The situation looks complicated but is actually very simple. To see what is going on, consider a scenario in which two people $A$ and $B$ play a series of rounds; in each round, each player wins either 1 yuan or nothing. Initially $A$ has $\lambda_k$ yuan and $B$ has $\lambda_{k+1}$ yuan, where $\lambda_k \geq \lambda_{k+1}$. Suppose that after round $m$, $B$'s money exceeds $A$'s for the first time. What can we conclude?
Obviously, in round $m$ itself $B$ won 1 yuan while $A$ won nothing, and at the end of round $m-1$ their funds were exactly tied. Simple reasoning, right? But it is all we need.
Now back to the "bad guy" $T$. Each column of a semistandard Young tableau contains at most one $k$ and at most one $k+1$, so we may view the occurrences of $k$ and $k+1$ in each column as the outcome of one round: if $T$ has $m$ columns, the rightmost column records the first round ($A$ wins 1 yuan if the column contains a $k$, and $B$ wins 1 yuan if it contains a $k+1$), the next column records the second round, and so on from right to left. By the choice of $j$, down to column $j+1$ player $A$ has never fallen behind: \[ \lambda_k + w_k(T_{>j}) \geq \lambda_{k+1} + w_{k+1}(T_{>j}), \] but at column $j$ player $B$ overtakes $A$: \[ \lambda_k + w_k(T_{\geq j}) < \lambda_{k+1} + w_{k+1}(T_{\geq j}). \] By the analysis above, this means that column $j$ of $T$ contains a $k+1$ but no $k$, and
\[ \lambda_k + w_k(T_{>j}) = \lambda_{k+1} + w_{k+1}(T_{>j}), \]
so that
\[ \lambda_k + w_k(T_{\geq j}) + 1 = \lambda_{k+1} + w_{k+1}(T_{\geq j}). \]
Now transform $T$ as follows: keep the part $T_{\geq j}$ unchanged, and apply the Bender–Knuth transformation (with respect to $k$ and $k+1$) to the part $T_{<j}$, obtaining a Young tableau $T^\ast$ (possibly $T^\ast = T$). Since $T_{\geq j} = T^\ast_{\geq j}$ while the multiplicities of $k$ and $k+1$ in $T_{<j}$ and $T^\ast_{<j}$ are exchanged, the two vectors $\lambda + w(T) + \delta$ and $\lambda + w(T^\ast) + \delta$ differ exactly by swapping their $k$-th and $(k+1)$-th entries (check this!). Therefore the two determinants $a_{\lambda + w(T) + \delta}$ and $a_{\lambda + w(T^\ast) + \delta}$ have opposite signs (swapping two columns of a matrix negates the determinant). (In the case $T = T^\ast$, two columns of the matrix coincide and the determinant is 0.)
Are we done? Not quite. One must still check that applying this transformation to $T^\ast$ leads back to $T$ rather than to some other $T'$, so that the pairing genuinely cancels. This verification is left to the reader.
Corollary [bialternant formula]: \[ s_\mu = \frac{a_{\mu+\delta}}{a_\delta}. \]
Proof: In the theorem take $\lambda = \nu = 0$. The only "good guy" $T$ of shape $\mu$ is the one whose first row is filled with 1's, second row with 2's, and so on; it has $w(T) = \mu$.
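The bialternant formula is easy to check numerically against the combinatorial definition. A sketch (function names mine; the evaluation point $(2,3,5)$ is an arbitrary choice), using exact rational arithmetic for the quotient of determinants:

```python
from fractions import Fraction
from itertools import permutations, product

def det(M):
    """Determinant by permutation expansion (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def schur_bialternant(mu, xs):
    """Evaluate s_mu at the point xs via a_{mu+delta} / a_delta."""
    n = len(xs)
    mu = list(mu) + [0] * (n - len(mu))
    num = det([[x ** (mu[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = det([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return Fraction(num, den)

def schur_combinatorial(mu, xs):
    """Evaluate s_mu at xs by summing x^{w(T)} over semistandard tableaux."""
    n = len(xs)
    cells = [(i, j) for i, r in enumerate(mu) for j in range(r)]
    total = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        if all(T[i, j] <= T[i, j + 1] for (i, j) in cells if (i, j + 1) in T) and \
           all(T[i, j] < T[i + 1, j] for (i, j) in cells if (i + 1, j) in T):
            term = 1
            for v in T.values():
                term *= xs[v - 1]
            total += term
    return total

pt = (2, 3, 5)
print(schur_bialternant((2, 1), pt), schur_combinatorial((2, 1), pt))  # 280 280
```

Note that the quotient of the two alternants is always an exact integer polynomial value, even though neither determinant alone is.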
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~
The Jacobi–Trudi identity
In this section we present a third form of the Schur polynomial.
Define $h_k(x_1, \ldots, x_n)$ as the sum of all monomials of degree $k$:
\[ h_k(x_1, \ldots, x_n) = \sum_{\substack{(\alpha_1, \cdots, \alpha_n) \in \mathbb{Z}^n_{\geq 0} \\ \alpha_1 + \cdots + \alpha_n = k}} x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}. \]
By convention $h_0 = 1$ and $h_k = 0$ for $k < 0$. Clearly each $h_k$ is a homogeneous symmetric polynomial.
Theorem:
\[ s_\lambda(x_1, \ldots, x_n) = \det\left(h_{\lambda_i - i + j}\right)_{1 \leq i, j \leq n}. \]
The proof uses the Gessel–Viennot method of non-intersecting lattice path families; the idea is transparent, and the argument is simple and direct.
The key is to map each semistandard Young tableau to a non-intersecting family of lattice paths. If you are familiar with the lattice path method, this construction is natural.
Let $T$ be a semistandard Young tableau of shape $\lambda$. Since the first row of $T$ is weakly increasing, it naturally corresponds to a lattice path from $(0, 1)$ to $(\lambda_1, n)$:
Think of the first row of $T$ as a "height profile": each entry records the height at which the path takes the corresponding horizontal step. In the figure, the heights of the horizontal steps are exactly the entries of the first row of $T$.
Similarly, the second row of $T$ corresponds to a lattice path from $(0, 1)$ to $(\lambda_2, n)$, and the strict increase along columns of a semistandard tableau is equivalent to the second row's path lying entirely above the first row's path:
For example, in the figure the entries of the second row are $(3, 4, 4)$. Note that the two paths may share vertical edges, but never horizontal edges.
Now the key step: if we shift the second path one unit to the left, so that it runs from $(-1, 1)$ to $(\lambda_2 - 1, n)$, then the new path and the first row's path have no common point at all! (Check this!) Likewise shift the third row's path two units to the left, ..., and the $n$-th row's path $n-1$ units to the left. We obtain a non-intersecting path family whose two sets of endpoints are $\{A_i = (1-i, 1),\ 1 \leq i \leq n\}$ and $\{B_j = (\lambda_j - j + 1, n),\ 1 \leq j \leq n\}$.
(Note that rows $\ell(\lambda)+1, \ldots, n$ of $T$ are empty; their corresponding paths are purely vertical.)
It is not hard to verify that this is a bijection: every non-intersecting path family between these two sets of endpoints (such a family necessarily connects $A_i \to B_i$) determines, via the "height profiles" of its paths, the rows of $T$, and hence $T$ itself.
Assign weight $x_i$ to each horizontal edge at height $i$ and weight 1 to each vertical edge; the weighted Gessel–Viennot theorem then immediately gives
\[ s_\lambda(x_1, \ldots, x_n) = \det\left(h_{\lambda_i - i + j}\right)_{1 \leq i, j \leq n}. \]
This proves the Jacobi–Trudi identity.
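The identity is easy to test numerically; a sketch (function names mine), comparing the determinant against known values — at the point $(1,1)$ the Schur polynomial simply counts semistandard tableaux in two variables:

```python
from itertools import combinations_with_replacement, permutations

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    if k < 0:
        return 0
    total = 0
    for combo in combinations_with_replacement(xs, k):
        term = 1
        for x in combo:
            term *= x
        total += term
    return total

def det(M):
    """Determinant by permutation expansion (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def schur_jacobi_trudi(lam, xs):
    """Evaluate s_lambda at xs via det(h_{lambda_i - i + j})."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))
    return det([[h(lam[i] - i + j, xs) for j in range(n)] for i in range(n)])

print(schur_jacobi_trudi((2, 1), (1, 1)))      # 2 semistandard tableaux
print(schur_jacobi_trudi((2, 1), (2, 3, 5)))   # 280, matching the bialternant
```

The padding of $\lambda$ with zeros matters: the extra rows contribute rows $(\ldots, 0, 0, 1)$ to the matrix, as dictated by $h_0 = 1$ and $h_{k} = 0$ for $k < 0$.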
Note that in this proof I have mainly described the ideas rather than the details. I recommend reading GTM 238, Aigner's A Course in Enumeration.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
Hook length formula
This section states and proves the famous hook length formula. Its expression is simple and elegant, its conclusion unexpected; one is attracted to it at first sight. One of my original motivations for studying the theory of symmetric groups was to understand the hook length formula.
First, the hook length. Let $\lambda \vdash n$ and let $F_\lambda$ be the corresponding Ferrers diagram; write $v = (i, j)$ for the square in row $i$ and column $j$ of $F_\lambda$ (only positions containing a square are considered). Count the squares in the same row as $v$ but to its right, plus the squares in the same column as $v$ but below it, plus $v$ itself (counted once). This number is called the hook length of $v$, written $H_v$.
In the figure, the hook length of the marked square $v$ is 6, and the squares being counted are highlighted.
Theorem [hook length formula]: The number of standard Young tableaux of shape $\lambda$ is
\[ f_\lambda = \frac{n!}{\prod_{v \in F_\lambda} H_v}. \]
Here $n = |\lambda|$.
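Before proving the formula, it is worth checking on small shapes (a sketch; function names mine). The hook length of $(i,j)$ is its arm (squares to the right) plus its leg (squares below) plus 1:

```python
from math import factorial

def f_hook(shape):
    """Number of standard Young tableaux of the given shape,
    computed from the hook length formula."""
    col_len = lambda j: sum(1 for row in shape if row > j)
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1           # squares to the right of (i, j)
            leg = col_len(j) - i - 1    # squares below (i, j)
            hooks *= arm + leg + 1
    return factorial(sum(shape)) // hooks

print(f_hook((2, 1)), f_hook((3, 2)), f_hook((2, 2, 1)))  # 2 5 5
```

The first two values agree with the brute-force tableau counts from earlier, as they must.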
Although the expression of the hook length formula is beautiful, it is not easy to work with directly. We need an equivalent expression that looks less intuitive but is easier to use. This is the key lemma below:
[Technical lemma]: Let $\mu_i = \lambda_i + r - i$, where $r = \ell(\lambda)$ and $1 \leq i \leq r$. Then
\[ \prod_{v \in F_\lambda} H_v = \frac{\prod_{i=1}^r \mu_i!}{\prod_{i < j} (\mu_i - \mu_j)}. \]
Proof of the lemma: Take any positive integers $m \geq \lambda_1$ and $n \geq \lambda'_1$; then the Ferrers diagram of $\lambda$ fits inside an $m \times n$ rectangle. Its boundary inside the rectangle is a lattice path crossing the rectangle; starting from the lower-left corner, label the edges along this path $0, 1, \ldots, m+n-1$. It is easy to see that the vertical edges (the vertical boundary segments at the end of each row) receive the labels $\{\lambda_i + n - i,\ 1 \leq i \leq n\}$, and the horizontal edges receive the labels $\{n - 1 + j - \lambda'_j,\ 1 \leq j \leq m\}$. So we obtain the following conclusion:
The sets $\{\lambda_i + n - i,\ 1 \leq i \leq n\}$ and $\{n - 1 + j - \lambda'_j,\ 1 \leq j \leq m\}$ together form a disjoint partition of $\{0, 1, \ldots, m+n-1\}$.
Note that when $n = \lambda'_1$, the set $\{\lambda_i + n - i,\ 1 \leq i \leq n\}$ is exactly the set of hook lengths of the first column of $F_\lambda$. To get the hook lengths of the first row instead, apply this conclusion to the conjugate diagram $F_{\lambda'}$, i.e. exchange the roles of $\lambda$ and $\lambda'$:
The sets $\{\lambda'_j + m - j,\ 1 \leq j \leq m\}$ and $\{m - 1 + i - \lambda_i,\ 1 \leq i \leq n\}$ together form a disjoint partition of $\{0, 1, \ldots, m+n-1\}$.
This is what we want. Take $m = \lambda_1$ and $n = \lambda'_1 = r$, the number of rows of $F_\lambda$. Then the set $\{\lambda'_j + m - j,\ 1 \leq j \leq m\}$ is exactly the set of hook lengths of the squares in the first row of $F_\lambda$, while the set $\{m - 1 + i - \lambda_i,\ 1 \leq i \leq n\}$ is exactly $\{\mu_1 - \mu_i,\ 1 \leq i \leq r\}$. Together they partition $\{0, 1, \ldots, \mu_1\}$ (note $m + n - 1 = \mu_1$), and therefore the product of the hook lengths of all squares in the first row of $F_\lambda$ is exactly \[ \frac{\mu_1!}{\prod_{i=2}^r (\mu_1 - \mu_i)}. \]
Now erase the first row of $F_\lambda$ and apply the same argument to the second row: the product of the hook lengths of all squares in the second row is \[ \frac{\mu_2!}{\prod_{i=3}^r (\mu_2 - \mu_i)}. \]
Continuing in this way, we obtain the lemma.
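The lemma itself can be verified directly on small shapes (a sketch; function names mine), comparing the product of hook lengths with the right-hand side:

```python
from math import factorial

def hook_product(shape):
    """Product of all hook lengths of the Ferrers diagram of shape."""
    col_len = lambda j: sum(1 for row in shape if row > j)
    total = 1
    for i, row in enumerate(shape):
        for j in range(row):
            total *= (row - j - 1) + (col_len(j) - i - 1) + 1
    return total

def lemma_rhs(shape):
    """Right-hand side of the lemma: prod mu_i! / prod_{i<j} (mu_i - mu_j),
    where mu_i = lambda_i + r - i and r is the number of rows."""
    r = len(shape)
    mu = [shape[i] + r - (i + 1) for i in range(r)]
    num = 1
    for m in mu:
        num *= factorial(m)
    den = 1
    for i in range(r):
        for j in range(i + 1, r):
            den *= mu[i] - mu[j]
    return num // den

for shape in [(2, 1), (3, 2), (4, 2, 1), (5, 3, 3, 1)]:
    print(shape, hook_product(shape), lemma_rhs(shape))
```

For each shape the two printed numbers coincide, e.g. both equal 144 for the shape $(4,2,1)$.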
Back to the proof of the hook length formula. The trick here can be summarized as "thinking outside the box": consider infinitely many variables $x_1, \ldots, x_n, \ldots$. The ring of symmetric functions has a homomorphism $\theta$ into the ring of power series in a single variable $t$: \[ \theta(f) = \sum_{k=0}^\infty f_k \frac{t^k}{k!}, \]
where $f_k$ is the coefficient of the monomial $x_1 x_2 \cdots x_k$ in $f$ (it is left to you to verify that this is indeed a homomorphism). In particular, $\theta(h_k) = \frac{t^k}{k!}$. Applying $\theta$ to both sides of the Jacobi–Trudi identity (which also holds, with the same proof, for infinitely many variables) gives \[ f_\lambda \frac{t^n}{n!} = \det\left( \frac{t^{\lambda_i - i + j}}{(\lambda_i - i + j)!} \right). \]
The left side arises because the coefficient of $x_1 \cdots x_n$ in $s_\lambda$, with $n = |\lambda|$, counts the semistandard Young tableaux in which each of $1, \ldots, n$ appears exactly once — that is, the standard Young tableaux.
Now set $t = 1$ on both sides:
\[ f_\lambda = n! \cdot \det\left( \frac{1}{(\lambda_i - i + j)!} \right). \]
It is left to you to verify that the right-hand side equals the expression given by the technical lemma. This completes the proof of the hook length formula.
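The determinant of reciprocal factorials can be evaluated exactly with rational arithmetic (a sketch; function names mine), and it reproduces the hook-formula counts:

```python
from fractions import Fraction
from math import factorial
from itertools import permutations

def det(M):
    """Determinant by permutation expansion (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def f_det(shape):
    """n! * det(1 / (lambda_i - i + j)!), the determinant form just derived.
    Entries with lambda_i - i + j < 0 are 0, matching h_k = 0 for k < 0."""
    r = len(shape)
    M = [[Fraction(1, factorial(shape[i] - i + j)) if shape[i] - i + j >= 0
          else Fraction(0)
          for j in range(r)] for i in range(r)]
    return factorial(sum(shape)) * det(M)

print(f_det((2, 1)), f_det((3, 2)), f_det((2, 2, 1)))  # 2 5 5
```

These agree with the values $n!/\prod_v H_v$ computed earlier, as the lemma guarantees.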
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~
The MacMahon formula for plane partitions
The box-stacking problem can be restated as follows:
How many $a \times b$ matrices satisfy the following conditions?
All entries of the matrix are non-negative integers, each row is weakly decreasing from left to right, and each column is weakly decreasing from top to bottom;
The largest entry of the matrix (i.e. the one in position $(1,1)$) is at most $c$.
Such an object is also called a restricted plane partition; the counting problem was first solved by MacMahon in 1903. The answer is hard to guess:
\[ \prod_{i=1}^a \prod_{j=1}^b \prod_{k=1}^c \frac{i+j+k-1}{i+j+k-2}. \]
This is the famous MacMahon formula. Today the problem has many known solutions (the RSK correspondence, Schur polynomials, non-intersecting lattice path families, etc.). The solution below uses Schur polynomials.
Let $a_n$ be the number of such plane partitions using exactly $n$ boxes. We want the generating function
\[ F(q) = \sum_{n=0}^{abc} a_n q^n. \]
In particular, $F(1)$ is the total number of plane partitions satisfying the restrictions.
Let $\lambda = (a, a, \cdots, a)$ ($b$ copies of $a$), and consider the specialization of the Schur polynomial in $b+c$ variables, $s_\lambda(q^{b+c}, q^{b+c-1}, \ldots, q)$. This is precisely the generating function for $b \times a$ matrices (transposes of our $a \times b$ matrices, which does not change the count) with strictly decreasing columns, weakly decreasing rows, largest entry at most $b+c$, and all entries positive integers. If we subtract $b$ from every entry of the first row, $b-1$ from the second row, ..., and $1$ from row $b$, we get exactly the matrices with weakly decreasing rows and columns, largest entry at most $c$, and all entries non-negative integers; that is,
\[ s_\lambda(q^{b+c}, \cdots, q) = q^{ab(b+1)/2} F(q). \]
To evaluate the left-hand side, use the bialternant formula for Schur polynomials:
\[ s_\lambda(q^{b+c}, \cdots, q) = \frac{\det\left(q^{(b+c+1-i)(b+c-j+\lambda_j)}\right)}{\det\left(q^{(b+c+1-i)(b+c-j)}\right)}. \]
Here $\lambda_j = 0$ by convention for $j = b+1, \ldots, b+c$. In the end it is not hard to compute
\[ F(q) = \prod_{i=1}^a \prod_{j=1}^b \prod_{k=1}^c \frac{1-q^{i+j+k-1}}{1-q^{i+j+k-2}}. \]
This is the generating function for restricted plane partitions. In particular, letting $q \to 1$ recovers the MacMahon formula above.
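For small boxes the product formula can be checked against a direct count (a sketch; function names mine, exact rationals used so the product is evaluated without rounding):

```python
from fractions import Fraction
from itertools import product

def plane_partitions(a, b, c):
    """Brute-force count of a x b matrices with entries in 0..c whose rows
    and columns weakly decrease: plane partitions in an a x b x c box."""
    count = 0
    for flat in product(range(c + 1), repeat=a * b):
        M = [flat[i * b:(i + 1) * b] for i in range(a)]
        rows_ok = all(M[i][j] >= M[i][j + 1]
                      for i in range(a) for j in range(b - 1))
        cols_ok = all(M[i][j] >= M[i + 1][j]
                      for i in range(a - 1) for j in range(b))
        if rows_ok and cols_ok:
            count += 1
    return count

def macmahon(a, b, c):
    """MacMahon's product formula for the same count."""
    total = Fraction(1)
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            for k in range(1, c + 1):
                total *= Fraction(i + j + k - 1, i + j + k - 2)
    return int(total)

print(plane_partitions(2, 2, 2), macmahon(2, 2, 2))  # 20 20
```

The value 20 for the $2 \times 2 \times 2$ box is a classic small case of the formula.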
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
References
1. Macdonald. Symmetric Functions and Hall Polynomials.
2. Sagan. The Symmetric Group.
3. Aigner. A Course in Enumeration.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~
This article is written rather casually, with many details left out — mainly out of laziness. But the proofs (mainly the ideas) of the key theorems are written carefully and are not easy to find in the literature, so I believe the article has its value.
A roaming tour of combinatorial wonderland: Schur polynomials, the hook length formula, and MacMahon's plane partition formula