Beta distribution and Dirichlet distribution

In the previous article on how the Gamma function was discovered, it was proved that \begin{align*} B(m, n) = \int_0^1 x^{m-1} (1-x)^{n-1} \text{d}x = \frac{\Gamma(m) \Gamma(n)}{\Gamma(m+n)} \end{align*} Define \begin{align*} f_{m,n}(x) = \begin{cases} \frac{x^{m-1} (1-x)^{n-1}}{B(m, n)} = \frac{\Gamma(m+n)}{\Gamma(m) \Gamma(n)} x^{m-1} (1-x)^{n-1} & 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases} \end{align*} By the identity above, the integral of $f_{m,n}(x)$ over $[0, 1]$ is $1$, so $f_{m,n}(x)$ corresponds to a probability distribution. Because the normalizing constant in its denominator is the Beta function $B(m, n)$, we call the corresponding distribution the Beta distribution with parameters $m$ and $n$.
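As a quick sanity check, here is a minimal numerical sketch (assuming `scipy` is available; the values $m = 2.5$, $n = 4$ are arbitrary) comparing the integral definition of $B(m, n)$ with the Gamma-function formula, and confirming that $f_{m,n}$ integrates to $1$:

```python
# Compare the integral definition of B(m, n) with the Gamma formula,
# then check that the Beta density integrates to 1.
from scipy.special import gamma
from scipy.integrate import quad

m, n = 2.5, 4.0  # arbitrary positive parameters

B_int, _ = quad(lambda x: x**(m - 1) * (1 - x)**(n - 1), 0, 1)
B_gamma = gamma(m) * gamma(n) / gamma(m + n)
print(B_int, B_gamma)  # both ~0.0277

total, _ = quad(lambda x: x**(m - 1) * (1 - x)**(n - 1) / B_gamma, 0, 1)
print(total)  # ~1.0
```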

We now describe the numerical characteristics of this distribution. It is easy to see that the $k$-th moment is \begin{align*} E[X^k] = \int_0^1 x^k f_{m,n}(x) \text{d}x = \int_0^1 \frac{x^{m+k-1} (1-x)^{n-1}}{B(m+k, n)} \frac{B(m+k, n)}{B(m, n)} \text{d}x = \frac{\Gamma(m+k) \Gamma(m+n)}{\Gamma(m) \Gamma(m+k+n)} \end{align*} so \begin{align*} E[X] = \frac{\Gamma(m+1) \Gamma(m+n)}{\Gamma(m) \Gamma(m+1+n)} = \frac{m}{m+n}, \quad E[X^2] = \frac{\Gamma(m+2) \Gamma(m+n)}{\Gamma(m) \Gamma(m+2+n)} = \frac{(m+1) m}{(m+n+1)(m+n)} \end{align*} The mean and variance are therefore \begin{align*} E[X] = \frac{m}{m+n}, \quad D[X] = \frac{(m+1) m}{(m+n+1)(m+n)} - \left( \frac{m}{m+n} \right)^2 = \frac{mn}{(m+n+1)(m+n)^2} \end{align*}
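These formulas are easy to check against a library implementation. A minimal sketch assuming `scipy` (whose `beta` distribution calls the two parameters $a$ and $b$), with the same arbitrary parameters as above:

```python
# Compare the derived mean/variance formulas with scipy's Beta distribution.
from scipy.stats import beta

m, n = 2.5, 4.0
mean, var = beta.stats(m, n, moments='mv')
print(mean, m / (m + n))                        # both ~0.3846
print(var, m * n / ((m + n + 1) * (m + n)**2))  # both ~0.0316
```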

The Beta function is a function of two variables; it can be extended to the following $(k+1)$-ary form ($k \geq 2$): \begin{align} \label{eq:multivariate beta function} B(m_1, \cdots, m_{k+1}) = \int_0^1 x_1^{m_1-1} \int_0^{1-x_1} x_2^{m_2-1} \cdots \int_0^{1-x_1-\cdots-x_{k-1}} x_k^{m_k-1} (1-x_1-\cdots-x_k)^{m_{k+1}-1} \text{d}x_k \cdots \text{d}x_2 \text{d}x_1 \end{align} Note that (\ref{eq:multivariate beta function}) is a $k$-fold integral. We first evaluate the innermost integral, the one over $x_k$, namely \begin{align*} E_k(m_k, m_{k+1}) = \int_0^{1-x_1-\cdots-x_{k-1}} x_k^{m_k-1} (1-x_1-\cdots-x_k)^{m_{k+1}-1} \text{d}x_k = \int_0^t x_k^{m_k-1} (t-x_k)^{m_{k+1}-1} \text{d}x_k \end{align*} where $t = 1-x_1-\cdots-x_{k-1}$. Integrating by parts, \begin{align*} E_k(m_k, m_{k+1}) &= \int_0^t (t-x_k)^{m_{k+1}-1} \text{d} \frac{x_k^{m_k}}{m_k} \\ &= (t-x_k)^{m_{k+1}-1} \frac{x_k^{m_k}}{m_k} \Big|_0^t - \int_0^t \frac{x_k^{m_k}}{m_k} (m_{k+1}-1) (t-x_k)^{m_{k+1}-2} (-1) \text{d}x_k \\ &= \frac{m_{k+1}-1}{m_k} E_k(m_k+1, m_{k+1}-1) \end{align*} Recursing downward gives \begin{align*} E_k(m_k, m_{k+1}) &= \frac{m_{k+1}-1}{m_k} E_k(m_k+1, m_{k+1}-1) \\ &= \frac{m_{k+1}-1}{m_k} \frac{m_{k+1}-2}{m_k+1} E_k(m_k+2, m_{k+1}-2) \\ &= \cdots \\ &= \frac{m_{k+1}-1}{m_k} \cdots \frac{1}{m_k+m_{k+1}-2} E_k(m_k+m_{k+1}-1, 1) \end{align*} and \begin{align*} E_k(m_k+m_{k+1}-1, 1) = \int_0^t x_k^{m_k+m_{k+1}-2} \text{d}x_k = \frac{x_k^{m_k+m_{k+1}-1}}{m_k+m_{k+1}-1} \Big|_0^t = \frac{t^{m_k+m_{k+1}-1}}{m_k+m_{k+1}-1} \end{align*} so \begin{align*} E_k(m_k, m_{k+1}) = \frac{\Gamma(m_{k+1}) \Gamma(m_k)}{\Gamma(m_{k+1}+m_k)} (1-x_1-\cdots-x_{k-1})^{m_k+m_{k+1}-1} \end{align*} Substituting this back into (\ref{eq:multivariate beta function}), we next examine the integral over $x_{k-1}$: \begin{align*} E_{k-1}(m_{k-1}, m_k+m_{k+1}) &= \int_0^{1-x_1-\cdots-x_{k-2}} x_{k-1}^{m_{k-1}-1} \frac{\Gamma(m_{k+1}) \Gamma(m_k)}{\Gamma(m_{k+1}+m_k)} (1-x_1-\cdots-x_{k-1})^{m_k+m_{k+1}-1} \text{d}x_{k-1} \\ &= \frac{\Gamma(m_{k+1}) \Gamma(m_k)}{\Gamma(m_{k+1}+m_k)} \int_0^t x_{k-1}^{m_{k-1}-1} (t-x_{k-1})^{m_k+m_{k+1}-1} \text{d}x_{k-1} \end{align*} where $t = 1-x_1-\cdots-x_{k-2}$.
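The closed form just obtained for the innermost integral can be verified numerically. A minimal sketch, assuming `scipy`, with arbitrary stand-in values $a = m_k$, $b = m_{k+1}$, and $t$:

```python
# Check that the integral of x^(a-1) (t-x)^(b-1) over [0, t] equals
# Gamma(a) Gamma(b) / Gamma(a+b) * t^(a+b-1).
from scipy.special import gamma
from scipy.integrate import quad

a, b, t = 3.0, 2.0, 0.6  # a = m_k, b = m_{k+1}, t = 1 - x_1 - ... - x_{k-1}

lhs, _ = quad(lambda x: x**(a - 1) * (t - x)**(b - 1), 0, t)
rhs = gamma(a) * gamma(b) / gamma(a + b) * t**(a + b - 1)
print(lhs, rhs)  # both ~0.0108
```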
Continuing with the previous method (integrate by parts, then recurse) gives \begin{align*} E_{k-1}(m_{k-1}, m_k+m_{k+1}) &= \frac{\Gamma(m_{k+1}) \Gamma(m_k)}{\Gamma(m_{k+1}+m_k)} \frac{\Gamma(m_{k+1}+m_k) \Gamma(m_{k-1})}{\Gamma(m_{k+1}+m_k+m_{k-1})} (1-x_1-\cdots-x_{k-2})^{m_{k+1}+m_k+m_{k-1}-1} \\ &= \frac{\Gamma(m_{k+1}) \Gamma(m_k) \Gamma(m_{k-1})}{\Gamma(m_{k+1}+m_k+m_{k-1})} (1-x_1-\cdots-x_{k-2})^{m_{k+1}+m_k+m_{k-1}-1} \end{align*} Repeating this process, we arrive at \begin{align} \label{eq:E2} E_2(m_2, m_{k+1}+m_k+\cdots+m_3) = \frac{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_2)}{\Gamma(m_{k+1}+m_k+\cdots+m_2)} (1-x_1)^{m_{k+1}+m_k+\cdots+m_2-1} \end{align} The final integral, over $x_1$, is \begin{align*} B(m_1, \cdots, m_{k+1}) &= \int_0^1 x_1^{m_1-1} \frac{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_2)}{\Gamma(m_{k+1}+m_k+\cdots+m_2)} (1-x_1)^{m_{k+1}+m_k+\cdots+m_2-1} \text{d}x_1 \\ &= \frac{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_2)}{\Gamma(m_{k+1}+m_k+\cdots+m_2)} \frac{\Gamma(m_{k+1}+m_k+\cdots+m_2) \Gamma(m_1)}{\Gamma(m_{k+1}+m_k+\cdots+m_1)} 1^{m_{k+1}+m_k+\cdots+m_1-1} \\ &= \frac{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_1)}{\Gamma(m_{k+1}+m_k+\cdots+m_1)} \end{align*} Let $\boldsymbol{m} = [m_1, \cdots, m_{k+1}]$, $\boldsymbol{x} = [x_1, \cdots, x_{k+1}]$ and define \begin{align*} f_{\boldsymbol{m}}(\boldsymbol{x}) = \begin{cases} \frac{\Gamma(m_{k+1}+m_k+\cdots+m_1)}{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_1)} \prod_{i=1}^{k+1} x_i^{m_i-1} & \sum_{i=1}^{k+1} x_i = 1 \\ 0 & \text{otherwise} \end{cases} \end{align*} Note that this is effectively a function of $k$ variables, since the constraint $\sum_i x_i = 1$ removes one degree of freedom. From the derivation above, the $k$-fold integral of $f_{\boldsymbol{m}}(\boldsymbol{x})$ is $1$, so $f_{\boldsymbol{m}}(\boldsymbol{x})$ also corresponds to a probability distribution, which we call the Dirichlet distribution with parameter $\boldsymbol{m}$.
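For $k = 2$, the $(k+1)$-ary Beta function is a double integral over the triangle $x_1 \geq 0$, $x_2 \geq 0$, $x_1 + x_2 \leq 1$, so the closed form is easy to verify directly. A sketch with arbitrary parameters, assuming `scipy`:

```python
# Verify B(m1, m2, m3) = Gamma(m1) Gamma(m2) Gamma(m3) / Gamma(m1+m2+m3)
# by direct double integration over the simplex.
from scipy.special import gamma
from scipy.integrate import dblquad

m1, m2, m3 = 2.0, 3.0, 4.0  # arbitrary positive parameters

val, _ = dblquad(
    lambda x2, x1: x1**(m1 - 1) * x2**(m2 - 1) * (1 - x1 - x2)**(m3 - 1),
    0, 1,                   # outer variable: x1 in [0, 1]
    0, lambda x1: 1 - x1,   # inner variable: x2 in [0, 1 - x1]
)
print(val)                                              # ~0.000298
print(gamma(m1) * gamma(m2) * gamma(m3) / gamma(m1 + m2 + m3))
```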

We now briefly describe the numerical characteristics of this distribution. It is easy to see that \begin{align*} x_j^n f_{\boldsymbol{m}}(\boldsymbol{x}) &= \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}) \cdots \Gamma(m_1)} x_j^n \prod_{i=1}^{k+1} x_i^{m_i-1} \\ &= \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}+\cdots+m_j+n+\cdots+m_1)} \frac{\Gamma(m_j+n)}{\Gamma(m_j)} \frac{\Gamma(m_{k+1}+\cdots+m_j+n+\cdots+m_1)}{\Gamma(m_{k+1}) \cdots \Gamma(m_j+n) \cdots \Gamma(m_1)} x_j^n \prod_{i=1}^{k+1} x_i^{m_i-1} \end{align*} where the product of the last fraction with $x_j^n \prod_i x_i^{m_i-1}$ is exactly the Dirichlet density whose $j$-th parameter is $m_j+n$, and therefore integrates to $1$. Then
\begin{align*} E[x_j] &= \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}+\cdots+m_j+1+\cdots+m_1)} \frac{\Gamma(m_j+1)}{\Gamma(m_j)} = \frac{m_j}{m_{k+1}+\cdots+m_1} \\ E[x_j^2] &= \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}+\cdots+m_j+2+\cdots+m_1)} \frac{\Gamma(m_j+2)}{\Gamma(m_j)} = \frac{(m_j+1) m_j}{(m_{k+1}+\cdots+m_1+1)(m_{k+1}+\cdots+m_1)} \end{align*} The mean and variance are therefore \begin{align*} E[x_j] &= \frac{m_j}{m_{k+1}+\cdots+m_1} \\ D[x_j] &= \frac{(m_j+1) m_j}{(m_{k+1}+\cdots+m_1+1)(m_{k+1}+\cdots+m_1)} - \left( \frac{m_j}{m_{k+1}+\cdots+m_1} \right)^2 = \frac{m_j (m_{k+1}+\cdots+m_1-m_j)}{(m_{k+1}+\cdots+m_1+1)(m_{k+1}+\cdots+m_1)^2} \end{align*} Similarly, for $p \neq q$, \begin{align*} x_p x_q f_{\boldsymbol{m}}(\boldsymbol{x}) &= \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}) \cdots \Gamma(m_1)} x_p x_q \prod_{i=1}^{k+1} x_i^{m_i-1} \\ &= \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}+\cdots+m_1+2)} \frac{\Gamma(m_p+1)}{\Gamma(m_p)} \frac{\Gamma(m_q+1)}{\Gamma(m_q)} \frac{\Gamma(m_{k+1}+\cdots+m_1+2)}{\Gamma(m_{k+1}) \cdots \Gamma(m_p+1) \cdots \Gamma(m_q+1) \cdots \Gamma(m_1)} x_p x_q \prod_{i=1}^{k+1} x_i^{m_i-1} \end{align*} so \begin{align*} E[x_p x_q] = \frac{\Gamma(m_{k+1}+\cdots+m_1)}{\Gamma(m_{k+1}+\cdots+m_1+2)} \frac{\Gamma(m_p+1)}{\Gamma(m_p)} \frac{\Gamma(m_q+1)}{\Gamma(m_q)} = \frac{m_p m_q}{(m_{k+1}+\cdots+m_1+1)(m_{k+1}+\cdots+m_1)} \end{align*} and the covariance is \begin{align*} \mathrm{Cov}(x_p, x_q) &= E[x_p x_q] - E[x_p] E[x_q] \\ &= \frac{m_p m_q}{(m_{k+1}+\cdots+m_1+1)(m_{k+1}+\cdots+m_1)} - \frac{m_p}{m_{k+1}+\cdots+m_1} \frac{m_q}{m_{k+1}+\cdots+m_1} \\ &= \frac{-m_p m_q}{(m_{k+1}+\cdots+m_1+1)(m_{k+1}+\cdots+m_1)^2} \end{align*}
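These moment formulas lend themselves to a Monte Carlo check. The sketch below, assuming `numpy` and an arbitrary parameter vector, compares sample statistics of Dirichlet draws against the formulas (writing $s = m_{k+1}+\cdots+m_1$):

```python
# Monte Carlo check of the Dirichlet mean, variance, and covariance formulas.
import numpy as np

rng = np.random.default_rng(0)
m = np.array([2.0, 3.0, 4.0])  # [m_1, m_2, m_3], i.e. k = 2
s = m.sum()
X = rng.dirichlet(m, size=100_000)  # each row sums to 1

print(X.mean(axis=0), m / s)                          # means m_j / s
print(X.var(axis=0), m * (s - m) / ((s + 1) * s**2))  # variances
# Covariance of x_1 and x_2 against -m_p m_q / ((s + 1) s^2)
print(np.cov(X[:, 0], X[:, 1])[0, 1], -m[0] * m[1] / ((s + 1) * s**2))
```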

From formula (\ref{eq:E2}), integrating $x_2, \cdots, x_k$ out of $f_{\boldsymbol{m}}(\boldsymbol{x})$ gives \begin{align*} p(x_1 = t) &= \frac{\Gamma(m_{k+1}+m_k+\cdots+m_1)}{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_1)} t^{m_1-1} \frac{\Gamma(m_{k+1}) \Gamma(m_k) \cdots \Gamma(m_2)}{\Gamma(m_{k+1}+m_k+\cdots+m_2)} (1-t)^{m_{k+1}+m_k+\cdots+m_2-1} \\ &= \frac{\Gamma(m_{k+1}+m_k+\cdots+m_1)}{\Gamma(m_1) \Gamma(m_{k+1}+m_k+\cdots+m_1-m_1)} t^{m_1-1} (1-t)^{m_{k+1}+m_k+\cdots+m_1-m_1-1} \end{align*} By symmetry, we see that
\begin{align*} p(x_i = t) = \frac{\Gamma(m_{k+1}+m_k+\cdots+m_1)}{\Gamma(m_i) \Gamma(m_{k+1}+m_k+\cdots+m_1-m_i)} t^{m_i-1} (1-t)^{m_{k+1}+m_k+\cdots+m_1-m_i-1} \end{align*} This means that the marginal distribution of the Dirichlet variable $x_i$ is the Beta distribution with parameters $m_i$ and $m_{k+1}+m_k+\cdots+m_1-m_i$.
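The marginal property can also be checked empirically. A sketch assuming `numpy` and `scipy`, with an arbitrary parameter vector: draw Dirichlet samples and compare one coordinate against the claimed Beta marginal with a Kolmogorov-Smirnov test.

```python
# Empirical check that a Dirichlet coordinate is Beta-distributed.
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(0)
m = np.array([2.0, 3.0, 4.0])
s = m.sum()
X = rng.dirichlet(m, size=10_000)

i = 0  # any coordinate works, by symmetry
stat, pvalue = kstest(X[:, i], beta(m[i], s - m[i]).cdf)
print(stat, pvalue)  # a large p-value is consistent with Beta(m_i, s - m_i)
```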
