Section 1: Papoulis's Formula
Lemma 1: If the random variables $y_1,\ldots,y_n$ are linear combinations of the random variables $x_1,\ldots,x_n$, say ${\left[y_1,\ldots,y_n\right]^T} = A{\left[x_1,\ldots,x_n\right]^T}$, then
\begin{equation} \nonumber
h\left(y_1,\ldots,y_n\right) = h\left(x_1,\ldots,x_n\right) + \log\left|\det A\right|
\end{equation}
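As a quick numerical sanity check of Lemma 1 (a minimal sketch of my own, not part of the derivation): for a Gaussian vector the differential entropy is $\frac{1}{2}\log\left(2\pi e\right)^n\left|K\right|$, so the identity can be verified in closed form.
\begin{verbatim}
# Sanity check of Lemma 1 for Gaussian vectors (sketch).
# For X ~ N(0, K), h(X) = 0.5*log((2*pi*e)^n * det K), and
# Y = A X has covariance A K A^T, so h(Y) - h(X) = log|det A|.
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
K = M @ M.T                      # a valid covariance matrix
A = rng.standard_normal((n, n))  # an invertible map (a.s.)

def gaussian_entropy(cov):
    """Differential entropy (in nats) of N(0, cov)."""
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (cov.shape[0] * np.log(2 * np.pi * np.e) + logdet)

lhs = gaussian_entropy(A @ K @ A.T) - gaussian_entropy(K)
rhs = np.log(abs(np.linalg.det(A)))
print(lhs, rhs)  # agree up to floating-point error
\end{verbatim}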
Lemma 2: If a function $H\left(z\right) = \sum\nolimits_{n = 0}^\infty h_n z^{-n}$ is minimum-phase and $h_0 \ne 0$, then
\begin{equation} \nonumber
\log h_0^2 = \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right|^2 d\omega
\end{equation}
Remark: If $h_0 = 0$, $h_1 \ne 0$ and $H\left(z\right)$ is minimum-phase, then $zH\left(z\right) = \sum\nolimits_{n = 1}^\infty h_n z^{-n+1} = h_1 + h_2 z^{-1} + \cdots$ is also minimum-phase. Thus, from Lemma 2, we have
\begin{equation} \nonumber
\begin{aligned}
\log h_1^2 &= \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|e^{j\omega}H\left(e^{j\omega}\right)\right|^2 d\omega \\
&= \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right|^2 d\omega.
\end{aligned}
\end{equation} Similarly, if $H\left(z\right)$ is minimum-phase and $h_0 = \dots = h_{k-1} = 0,~h_k \ne 0$ for some positive integer $k$ (i.e., $k$ is the relative degree of the system), then $\log h_k^2 = \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right|^2 d\omega$.
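The lemma and its remark can be checked numerically; the following minimal sketch (my own example, not from the original) uses the minimum-phase filter $H\left(z\right) = z^{-k}\left(2 - 0.5z^{-1}\right)$, whose only zero $z = 0.25$ lies inside the unit circle and whose first nonzero coefficient is $h_k = 2$.
\begin{verbatim}
# Check of Lemma 2 and its remark (sketch): for the minimum-phase
# H(z) = z^{-k} (2 - 0.5 z^{-1}) with relative degree k and first
# nonzero coefficient h_k = 2, compare log(h_k^2) with the integral.
import numpy as np
from scipy.integrate import quad

h_k = 2.0

def log_H_sq(w):
    # log|H(e^{jw})|^2; the factor e^{-jkw} has unit modulus,
    # so the value does not depend on k.
    return np.log(abs(2.0 - 0.5 * np.exp(-1j * w)) ** 2)

integral, _ = quad(log_H_sq, -np.pi, np.pi)
print(np.log(h_k ** 2), integral / (2 * np.pi))  # both ~ 1.3863
\end{verbatim}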
Lemma 3: Let the random vector $X \in \mathbb{R}^n$ have zero mean and covariance $K = E\left\{XX^T\right\}$ (i.e., $K_{ij} = E\left\{x_i x_j\right\},~1 \le i,j \le n$). Then $h\left(X\right) \le \frac{1}{2}\log\left(2\pi e\right)^n\left|K\right|$, with equality iff $X \sim N\left(0,K\right)$.
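For a scalar illustration of Lemma 3 (my own example): a uniform variable on $\left[-a,a\right]$ has variance $a^2/3$ and entropy $\log 2a$, which stays strictly below the Gaussian bound.
\begin{verbatim}
# Scalar illustration of Lemma 3 (sketch). X ~ Uniform[-a, a]
# has variance a^2/3 and differential entropy log(2a) in nats;
# the Gaussian bound 0.5*log(2*pi*e*var) must dominate it.
import numpy as np

a = 1.7
var = a ** 2 / 3.0
h_uniform = np.log(2 * a)                       # exact uniform entropy
h_bound = 0.5 * np.log(2 * np.pi * np.e * var)  # bound from Lemma 3
print(h_uniform, h_bound, h_uniform <= h_bound)  # bound holds strictly
\end{verbatim}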
Theorem 1: If $H\left(z\right)$ is minimum-phase, then the entropy rate $\bar h\left(y\right)$ of the output $y_n$ is
\begin{equation} \nonumber
\bar h\left(y\right) = \bar h\left(r\right) + \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right| d\omega
\end{equation} where $\bar h\left(r\right) = \mathop{\lim}\limits_{n \to \infty}\frac{1}{n}h\left(r_0,\cdots,r_{n-1}\right)$ is the entropy rate of the input $r_n$.
Proof: Suppose that $H\left(z\right) = \sum\nolimits_{i = 0}^\infty h_i z^{-i}$ and $h_0 \ne 0$; then
\[\left[{\begin{array}{*{20}{c}}
{y_0}\\
{y_1}\\
\vdots\\
{y_{n-1}}
\end{array}}\right] = \left[{\begin{array}{*{20}{c}}
{h_0}&0&\cdots&0\\
{h_1}&{h_0}&\cdots&0\\
\vdots&\vdots&{}&\vdots\\
{h_{n-1}}&{h_{n-2}}&\cdots&{h_0}
\end{array}}\right]\left[{\begin{array}{*{20}{c}}
{r_0}\\
{r_1}\\
\vdots\\
{r_{n-1}}
\end{array}}\right] \buildrel \Delta \over = A_n\left[{\begin{array}{*{20}{c}}
{r_0}\\
{r_1}\\
\vdots\\
{r_{n-1}}
\end{array}}\right]\] and $\det A_n = h_0^n$. From Lemma 1 we have
\begin{equation} \nonumber
\begin{aligned}
h\left(y_0,\ldots,y_{n-1}\right) &= h\left(r_0,\ldots,r_{n-1}\right) + \log\left|\det A_n\right| \\
&= h\left(r_0,\ldots,r_{n-1}\right) + \log\left|h_0^n\right| \\
&= h\left(r_0,\ldots,r_{n-1}\right) + n\log\left|h_0\right|
\end{aligned}
\end{equation} and
\begin{equation} \label{hy_hr_logh0}
\bar h\left(y\right) = \bar h\left(r\right) + \log\left|h_0\right|
\end{equation} which, combined with Lemma 2, completes the proof.
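The proof can be mirrored numerically at finite $n$; the sketch below (my own, with a hypothetical FIR filter and i.i.d. Gaussian input of variance $\sigma^2$) builds the Toeplitz matrix $A_n$ from the impulse response, so that the output block is Gaussian with covariance $\sigma^2 A_n A_n^T$, and compares $h\left(y_0,\ldots,y_{n-1}\right)/n$ with $\bar h\left(r\right) + \frac{1}{2\pi}\int\log\left|H\right|$.
\begin{verbatim}
# Finite-n check of Theorem 1 (sketch): i.i.d. Gaussian input r
# with variance sigma2 through the minimum-phase FIR filter
# H(z) = 2 - 0.5 z^{-1}; the output block y = A_n r is Gaussian
# with covariance sigma2 * A_n A_n^T.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import toeplitz

sigma2, n = 1.5, 50
h = np.zeros(n)
h[0], h[1] = 2.0, -0.5                 # impulse response of H(z)
A_n = toeplitz(h, np.zeros(n))         # lower-triangular Toeplitz

_, logdet = np.linalg.slogdet(sigma2 * A_n @ A_n.T)
h_y_rate = 0.5 * (np.log(2 * np.pi * np.e) + logdet / n)

h_r_rate = 0.5 * np.log(2 * np.pi * np.e * sigma2)
integral, _ = quad(lambda w: np.log(abs(2 - 0.5 * np.exp(-1j * w))),
                   -np.pi, np.pi)
print(h_y_rate, h_r_rate + integral / (2 * np.pi))  # agree: + log 2
\end{verbatim}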
Remark: If $h_0 = 0$ but $h_1 \ne 0$, we have
\[\left[{\begin{array}{*{20}{c}}
{y_1}\\
{y_2}\\
\vdots\\
{y_n}
\end{array}}\right] = \left[{\begin{array}{*{20}{c}}
{h_1}&0&\cdots&0\\
{h_2}&{h_1}&\cdots&0\\
\vdots&\vdots&{}&\vdots\\
{h_n}&{h_{n-1}}&\cdots&{h_1}
\end{array}}\right]\left[{\begin{array}{*{20}{c}}
{r_0}\\
{r_1}\\
\vdots\\
{r_{n-1}}
\end{array}}\right] \buildrel \Delta \over = \bar A_n\left[{\begin{array}{*{20}{c}}
{r_0}\\
{r_1}\\
\vdots\\
{r_{n-1}}
\end{array}}\right]\] and
\begin{equation} \label{hy_hr_h1}
\begin{aligned}
h\left(y_1,\ldots,y_n\right) &= h\left(r_0,\ldots,r_{n-1}\right) + \log\left|\det\bar A_n\right| \\
&= h\left(r_0,\ldots,r_{n-1}\right) + n\log\left|h_1\right|
\end{aligned}
\end{equation} However, since $h\left(y_0,\ldots,y_n\right) = h\left(y_1,\ldots,y_n\right) + h\left(y_0\left|y_1,\ldots,y_n\right.\right)$, we get
\begin{equation} \nonumber
\begin{aligned}
\frac{h\left(y_1,\ldots,y_n\right)}{n} &= \frac{h\left(y_0,\ldots,y_n\right) - h\left(y_0\left|y_1,\ldots,y_n\right.\right)}{n} \\
&= \frac{h\left(y_0,\ldots,y_n\right)}{n+1} \cdot \frac{n+1}{n} - \frac{h\left(y_0\left|y_1,\ldots,y_n\right.\right)}{n}
\end{aligned}
\end{equation} Since conditioning reduces entropy, $h\left(y_0\left|y_1,\ldots,y_n\right.\right) \le h\left(y_0\right)$, so the last term vanishes as $n \to \infty$ provided $h\left(y_0\right)$ is finite, which implies that \[\mathop{\lim}\limits_{n \to \infty}\frac{h\left(y_1,\ldots,y_n\right)}{n} = \bar h\left(y\right).\] Therefore, dividing (\ref{hy_hr_h1}) by $n$ and letting $n$ tend to infinity, we get
\begin{equation} \nonumber
\bar h\left(y\right) = \bar h\left(r\right) + \log\left|h_1\right|
\end{equation} which, combined with the remark after Lemma 2, implies that Theorem 1 holds for all minimum-phase systems, not only those with $h_0 \ne 0$.
Section 2: Papoulis's Formula for Non-Minimum-Phase Systems
Suppose that a function $H\left(z\right) = \sum\nolimits_{n = 0}^\infty h_n z^{-n}$ is stable (minimum-phase is not required), and let $k$ be the relative degree of the system, so that $h_0 = \ldots = h_{k-1} = 0$ and $h_k \ne 0$. By Cauchy's residue theorem (in fact, the inverse $z$-transform), we have
\[\frac{1}{2\pi j}\oint_{\left|z\right| = 1} H\left(z\right) z^{k-1} dz = h_k\] or
\begin{equation} \nonumber
\frac{1}{2\pi}\int_{-\pi}^\pi H\left(e^{j\omega}\right) e^{jk\omega} d\omega = h_k.
\end{equation} Then, for minimum-phase $H\left(z\right)$, Theorem 1 can be rewritten as \[\bar h\left(y\right) = \bar h\left(r\right) + \log\left|h_k\right| = \bar h\left(r\right) + \log\left|\frac{1}{2\pi}\int_{-\pi}^\pi H\left(e^{j\omega}\right) e^{jk\omega} d\omega\right|.\] On the other hand, every (stable) transfer function can be written as \[H\left(z\right) = H_{\rm mp}\left(z\right) H_{\rm ap}\left(z\right) = z_{u_1}\cdots z_{u_l}\bar H\left(z\right) H_{\rm ap}\left(z\right)\] where $z_{u_1},\ldots,z_{u_l}$ are the unstable zeros of $H\left(z\right)$, $H_{\rm ap}\left(z\right)$ is all-pass (so $\left|H_{\rm ap}\left(e^{j\omega}\right)\right| = 1$), and $\bar H\left(z\right)$ is minimum-phase with the same first Laurent series coefficient as $H\left(z\right)$. That is, $\bar H\left(z\right) = h_0 + \bar h_1 z^{-1} + \cdots$ iff $H\left(z\right) = h_0 + h_1 z^{-1} + \cdots$. Therefore, we have
\begin{equation} \nonumber
\begin{aligned}
&\frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right|^2 d\omega \\
&= \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|z_{u_1}\cdots z_{u_l}\bar H\left(e^{j\omega}\right)H_{\rm ap}\left(e^{j\omega}\right)\right|^2 d\omega \\
&= \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|z_{u_1}\cdots z_{u_l}\bar H\left(e^{j\omega}\right)\right|^2 d\omega \\
&= \sum\limits_{i = 1}^l \log\left|z_{u_i}\right|^2 + \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|\bar H\left(e^{j\omega}\right)\right|^2 d\omega \\
&= \sum\limits_{i = 1}^l \log\left|z_{u_i}\right|^2 + \log h_0^2
\end{aligned}
\end{equation} where the second equality uses $\left|H_{\rm ap}\left(e^{j\omega}\right)\right| = 1$ and the last uses Lemma 2; or simply, \[\log\left|h_0\right| = \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right| d\omega - \sum\limits_{i = 1}^l \log\left|z_{u_i}\right|.\] Finally, by equation (\ref{hy_hr_logh0}), we get \[\bar h\left(y\right) = \bar h\left(r\right) + \frac{1}{2\pi}\int_{-\pi}^\pi \log\left|H\left(e^{j\omega}\right)\right| d\omega - \sum\limits_{i = 1}^l \log\left|z_{u_i}\right|\] if the system is only stable (not necessarily minimum-phase), where the $z_{u_i}$'s are the unstable zeros.
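The correction term can be verified numerically; in the sketch below (my own example) the stable but non-minimum-phase filter $H\left(z\right) = 1 - 2z^{-1}$ has a single zero $z_u = 2$ outside the unit circle, so the frequency integral should equal $\log\left|h_0\right| + \log\left|z_u\right| = \log 2$.
\begin{verbatim}
# Check of the unstable-zero correction (sketch): for the stable,
# non-minimum-phase H(z) = 1 - 2 z^{-1} with zero z_u = 2,
# (1/2pi) int log|H(e^{jw})| dw = log|h_0| + log|z_u|.
import numpy as np
from scipy.integrate import quad

z_u, h0 = 2.0, 1.0
integral, _ = quad(lambda w: np.log(abs(1 - z_u * np.exp(-1j * w))),
                   -np.pi, np.pi)
print(integral / (2 * np.pi),
      np.log(abs(h0)) + np.log(abs(z_u)))  # both equal log 2
\end{verbatim}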
Section 3: Extended Case for the 1st-order System
Consider the following stable discrete-time linear system with state equation
\begin{equation} \nonumber
\begin{aligned}
x_{k+1} &= Ax_k + bu_k\\
y_k &= cx_k.
\end{aligned}
\end{equation} The initial state $x_0 \sim N\left(0,\sigma_0^2 I\right)$. By simple computation, we have $x_{k+1} = A^{k+1}x_0 + \sum\nolimits_{i = 0}^k A^{k-i}bu_i$ and $y_k = cA^k x_0 + \sum\nolimits_{i = 0}^{k-1} cA^{k-1-i}bu_i$.
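The closed-form expression for $y_k$ can be confirmed by direct simulation; the following sketch (my own, with arbitrary example matrices) compares the recursion with the formula.
\begin{verbatim}
# Sketch: verify y_k = c A^k x_0 + sum_{i=0}^{k-1} c A^{k-1-i} b u_i
# against direct simulation of the state recursion.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1], [0.0, 0.3]])  # a stable example matrix
b = np.array([1.0, 0.5])
c = np.array([1.0, -1.0])
x0 = rng.standard_normal(2)
u = rng.standard_normal(10)

x = x0
for k in range(1, 6):
    x = A @ x + b * u[k - 1]            # state recursion
    closed = c @ np.linalg.matrix_power(A, k) @ x0 + sum(
        c @ np.linalg.matrix_power(A, k - 1 - i) @ b * u[i]
        for i in range(k))
    print(np.isclose(c @ x, closed))    # True at every step
\end{verbatim}
Then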
\begin{equation} \label{hy_inequal}
\begin{aligned}
&h\left(y_0,\ldots,y_{n-1}\left|u_0,\ldots,u_{n-1}\right.\right) \\
&= h\left(cx_0,\ldots,cA^{n-1}x_0 + \sum\limits_{i = 0}^{n-2} cA^{n-2-i}bu_i \left|u_0,\ldots,u_{n-1}\right.\right) \\
&= h\left(cx_0,\ldots,cA^{n-1}x_0\left|u_0,\ldots,u_{n-1}\right.\right) \\
&\le h\left(cx_0,\ldots,cA^{n-1}x_0\right) \\
&\le \sum\limits_{i = 0}^{n-1} h\left(cA^i x_0\right).
\end{aligned}
\end{equation} where the second equality holds because shifting by a quantity known from the conditioning variables does not change differential entropy, the first inequality because conditioning reduces entropy, and the second because joint entropy is at most the sum of the marginals. Since $x_0 \sim N\left(0,\sigma_0^2 I\right)$, the variance of the scalar $cA^i x_0$ is
\begin{equation} \nonumber
\begin{aligned}
K_i &= E\left\{cA^i x_0\left(cA^i x_0\right)^T\right\} \\
&= cA^i E\left\{x_0 x_0^T\right\}\left(A^i\right)^T c^T \\
&= \sigma_0^2 cA^i\left(A^i\right)^T c^T \\
&= \sigma_0^2\left\|cA^i\right\|^2.
\end{aligned}
\end{equation} Therefore, by Lemma 3, we have
\begin{equation} \label{hcax0_inequal}
\begin{aligned}
h\left(cA^i x_0\right) &\le \frac{1}{2}\log\left(2\pi e\sigma_0^2\left\|cA^i\right\|^2\right) \\
&\le \frac{1}{2}\log\left(2\pi e\sigma_0^2\left\|c\right\|^2\left\|A\right\|^{2i}\right) \\
&= \frac{1}{2}\log\left(2\pi e\sigma_0^2\left\|c\right\|^2\right) + i\log\left\|A\right\|
\end{aligned}
\end{equation} where $\left\|c\right\|$ and $\left\|A\right\|$ denote, respectively, the $l_2$-norm of the vector $c$ and the induced $l_2$-norm of the matrix $A$ (i.e., $\left\|c\right\| = \left(\sum\nolimits_{i = 1}^n\left|c_i\right|^2\right)^{1/2}$ and $\left\|A\right\| = \sigma_{\max}\left(A\right)$, the largest singular value), and the second inequality in (\ref{hcax0_inequal}) follows from $\left\|cA\right\| \le \left\|c\right\|\left\|A\right\|$ applied $i$ times.
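The submultiplicativity step $\left\|cA^i\right\| \le \left\|c\right\|\left\|A\right\|^i$ behind the second inequality can also be spot-checked (a small sketch of my own):
\begin{verbatim}
# Sketch: ||c A^i|| <= ||c|| * ||A||^i, with ||A|| the induced
# l2 (largest-singular-value) norm, as used in the entropy bound.
import numpy as np

rng = np.random.default_rng(2)
A = 0.5 * rng.standard_normal((3, 3))
c = rng.standard_normal(3)
norm_A = np.linalg.norm(A, 2)           # largest singular value

for i in range(6):
    lhs = np.linalg.norm(c @ np.linalg.matrix_power(A, i))
    print(lhs <= np.linalg.norm(c) * norm_A ** i + 1e-12)  # True
\end{verbatim}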
Now substituting (\ref{hcax0_inequal}) into (\ref{hy_inequal}), we get
\begin{equation} \nonumber
\begin{aligned}
&h\left(y_0,\ldots,y_{n-1}\left|u_0,\ldots,u_{n-1}\right.\right) \\
&\le \sum\limits_{i = 0}^{n-1} h\left(cA^i x_0\right) \\
&\le \sum\limits_{i = 0}^{n-1}\left(\frac{1}{2}\log\left(2\pi e\sigma_0^2\left\|c\right\|^2\right) + i\log\left\|A\right\|\right) \\
&= \frac{n}{2}\log\left(2\pi e\sigma_0^2\left\|c\right\|^2\right) + \frac{n\left(n-1\right)}{2}\log\left\|A\right\|
\end{aligned}
\end{equation} and
\begin{equation} \label{iyu_inequal}
\begin{aligned}
&\frac{I\left(y_0,\ldots,y_{n-1};u_0,\ldots,u_{n-1}\right)}{n} \\
&= \frac{h\left(y_0,\ldots,y_{n-1}\right) - h\left(y_0,\ldots,y_{n-1}\left|u_0,\ldots,u_{n-1}\right.\right)}{n} \\
&\ge \frac{h\left(y_0,\ldots,y_{n-1}\right)}{n} - \frac{1}{2}\log\left(2\pi e\sigma_0^2\left\|c\right\|^2\right) + \frac{n-1}{2}\log\frac{1}{\left\|A\right\|}
\end{aligned}
\end{equation} However, since the system we consider is stable, the spectral radius of $A$ is less than one; assuming further that $\left\|A\right\| < 1$ (which always holds in the 1st-order, i.e., scalar, case), we have $\log\frac{1}{\left\|A\right\|} > 0$. Taking $n \to \infty$ in (\ref{iyu_inequal}), we get $\bar I\left(y;u\right) = \infty$, which coincides with the 1st-order case.
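The divergence of the lower bound can be seen numerically; the sketch below (my own scalar example with $a = 0.8$, $b = c = \sigma_0^2 = \sigma_u^2 = 1$ and i.i.d. Gaussian input) evaluates the right-hand side of (\ref{iyu_inequal}) exactly via the Gaussian covariance of $\left(y_0,\ldots,y_{n-1}\right)$ and shows it growing roughly linearly in $n$.
\begin{verbatim}
# Sketch (scalar case): the lower bound in (iyu_inequal) grows
# without bound in n, so the mutual information rate is infinite.
import numpy as np

a, sigma0_sq, sigmau_sq = 0.8, 1.0, 1.0  # stable: |a| < 1
for n in (5, 20, 50):
    O = a ** np.arange(n)                # y_k = a^k x_0 + ...
    T = np.zeros((n, n))
    for k in range(1, n):
        # coefficient of u_i in y_k is c a^{k-1-i} b with b = c = 1
        T[k, :k] = a ** (k - 1 - np.arange(k))
    cov_y = sigma0_sq * np.outer(O, O) + sigmau_sq * T @ T.T
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * cov_y)
    h_y_over_n = 0.5 * logdet / n        # h(y_0..y_{n-1})/n, exact
    bound = (h_y_over_n
             - 0.5 * np.log(2 * np.pi * np.e * sigma0_sq)
             + 0.5 * (n - 1) * np.log(1 / abs(a)))
    print(n, bound)                      # increases roughly linearly
\end{verbatim}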
Report, 2015-04-02, Formulas on Entropy, Part I