
Stabilizability of minimum-phase systems with colored multiplicative uncertainty

  • Research Article
  • Published in Control Theory and Technology

Abstract

This work addresses the mean-square stability and stabilizability problem for minimum-phase multi-input multi-output (MIMO) plants subject to a novel colored multiplicative feedback uncertainty. The proposed uncertainty generalizes i.i.d. multiplicative noise and is modeled as a stochastic system with a random finite impulse response (FIR), which is well suited to modeling a class of network phenomena such as random transmission delays. A coefficient of frequency variation is introduced to characterize the proposed uncertainty. A mean-square stability condition for the system is then derived, generalizing the well-known mean-square small gain theorem. Based on this, a mean-square stabilizability condition is established, which reveals the inherent connection between stabilizability, the plant's unstable poles, and the coefficient of frequency variation of the uncertainty. The result is verified by a numerical example on the stabilizability of a networked system with a random transmission delay as well as an analog erasure channel.


References

  1. Willems, J., & Blankenship, G. (1971). Frequency domain stability criteria for stochastic systems. IEEE Transactions on Automatic Control, 16(4), 292–299.


  2. Lu, J., & Skelton, R. E. (2002). Mean-square small gain theorem for stochastic control: discrete-time case. IEEE Transactions on Automatic Control, 47(3), 490–494.


  3. Elia, N. (2005). Remote stabilization over fading channels. Systems & Control Letters, 54(3), 237–249.


  4. Qi, T., Chen, J., Su, W., & Fu, M. (2017). Control under stochastic multiplicative uncertainties: part I, fundamental conditions of stabilizability. IEEE Transactions on Automatic Control, 62(3), 1269–1284.


  5. Martin, D. N., & Johnson, T. L. (1975). Stability criteria for discrete-time systems with colored multiplicative noise. In: Proceedings of IEEE Conference on Decision and Control Including the 14th Symposium on Adaptive Processes, pp. 169–175. Clearwater, FL, USA.

  6. Willems, J. L. (1975). Stability criteria for stochastic systems with colored multiplicative noise. Acta Mechanica, 23(3–4), 171–178.


  7. Li, H., Xu, J., & Zhang, H. (2019). Optimal control problem for discrete-time systems with colored multiplicative noise. In: Proceedings of IEEE 15th International Conference on Control and Automation, pp. 231–235. Edinburgh, Scotland.

  8. Li, H., Xu, J., & Zhang, H. (2020). Linear quadratic regulation for discrete-time systems with input delay and colored multiplicative noise. Systems & Control Letters, 143, 104740.


  9. Su, W., Lu, J., & Li, J. (2017). Mean-square stabilizability of a SISO linear feedback system over a communication channel with random delay. In: Proceedings of Chinese Automation Congress, pp. 6965–6970. Jinan, China.

  10. Li, J., Lu, J., & Su, W. (2018). Stability and stabilizability of a class of discrete-time systems with random delay. In: Proceedings of 37th Chinese Control Conference, pp. 1331–1336. Wuhan, China. https://doi.org/10.23919/ChiCC.2018.8483945.

  11. Stewart, W. J. (2009). Probability, Markov Chains, Queues, and Simulation: the Mathematical Basis of Performance Modeling. Princeton: Princeton University Press.


  12. Chen, T., & Francis, B. A. (1998). Optimal sampled-data control systems. Communications & Control Engineering, 86(6), 1293–1294.


  13. Xiao, N., Xie, L., & Qiu, L. (2012). Feedback stabilization of discrete-time networked systems over fading channels. IEEE Transactions on Automatic Control, 57(9), 2176–2189.


  14. Li, L., & Zhang, H. (2016). Stabilization of discrete-time systems with multiplicative noise and multiple delays in the control variable. SIAM Journal on Control and Optimization, 54(2), 894–917.


  15. Papoulis, A., & Pillai, S. U. (1984). Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill.


  16. Zhou, K., Doyle, J. C., & Glover, K. (1995). Robust and Optimal Control. Pearson.

  17. Horn, R. A., & Johnson, C. R. (1986). Matrix Analysis. New York: Cambridge University Press.


  18. Chen, J. (2000). Integral constraints and performance limits on complementary sensitivity: discrete-time systems. Systems & Control Letters, 39(1), 45–53.



Author information

Corresponding author

Correspondence to Jieying Lu.

Additional information

This work was supported by the National Natural Science Foundation of China (Nos. 61933006 and 61673183).

Appendices

Appendix A Proof of Lemma 1

It follows from Assumption 1 that \(\tilde{\varvec{\omega }}(k, l) \equiv 0\) for \(k<l\) and \(k> l+{\bar{\tau }}\). Therefore, \(r_{l}(\lambda )=0\) for \(\lambda > {\bar{\tau }}\). For the case \(0 \le \lambda \le {{\bar{\tau }}}\), it holds that

$$\begin{aligned} r_{l}(\lambda )&= \textstyle \sum \limits _{j=0}^{{\bar{\tau }}-\lambda } {\mathrm {E}}\{\tilde{\varvec{\omega }}({l+j+\lambda ,l}) \tilde{\varvec{\omega }}^*({l+j,l})\}\\&= \textstyle \sum \limits _{j=0}^{{\bar{\tau }}-\lambda }\varvec{\beta }_{(j+\lambda )j}, \end{aligned}$$

which is independent of l. On the other hand, a change of variables shows that \(r_{l}(\lambda ) = r_{l}(-\lambda )\) for \(\lambda < 0\).
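As an illustration of Lemma 1 (not part of the original proof), the following sketch evaluates the autocorrelation \(r(\lambda )=\sum _{j=0}^{{\bar{\tau }}-\lambda }\varvec{\beta }_{(j+\lambda )j}\) numerically for a hypothetical tap-covariance matrix \(\varvec{\beta }\); the matrix entries and the helper name are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical covariance matrix beta of the random FIR taps,
# beta[p, q] = E{omega(l+p, l) * conj(omega(l+q, l))}, 0 <= p, q <= tau_bar.
# The numbers below are placeholders.
tau_bar = 2
beta = np.array([[0.50, 0.10, 0.05],
                 [0.10, 0.30, 0.08],
                 [0.05, 0.08, 0.20]])

def autocorrelation(beta, lam):
    """r(lambda) = sum_{j=0}^{tau_bar - lambda} beta[j + lambda, j], as in Lemma 1."""
    tau_bar = beta.shape[0] - 1
    lam = abs(lam)                      # even symmetry: r(-lambda) = r(lambda)
    if lam > tau_bar:
        return 0.0                      # vanishes outside the FIR support
    return sum(beta[j + lam, j] for j in range(tau_bar - lam + 1))

r = [autocorrelation(beta, lam) for lam in range(-tau_bar - 1, tau_bar + 2)]
print(r)   # zero for |lambda| > tau_bar and independent of l, as the lemma states
```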

Appendix B Proof of Theorem 1

To establish the boundedness of the variances of u(k) and \(\varepsilon (k)\), it suffices to show that \(\Vert u_i\Vert _{\varvec{v}}\) and \(\Vert \varepsilon _i\Vert _{\varvec{v}}\) are finite for \(i=1,\ldots ,m\), where \(u_i\) and \(\varepsilon _i\) are the ith elements of u and \(\varepsilon \), respectively, and \(\Vert u_i\Vert _{\varvec{v}}^2 = \lim _{k\rightarrow \infty }\mathrm {E}\{u_i^2(k)\}\).

Let \(G_i\) and \(g_i\) be the ith rows of G(z) and g(k), respectively. Since the input sequence \(\left\{ v(k)\right\} \) is a white noise vector process, it follows that

$$\begin{aligned}&\mathrm {E}\{ {u_i^2(k)}\} \nonumber \\&\quad = \mathrm {E}\Big \{ \textstyle \sum \limits _{{l_1} = 0}^{\infty } {{g_i}( {{l_1}} )( {v( {{k - l_1}} ) + d( {{k - l_1}} )} )} \nonumber \\&\qquad \times {{\Big [ {\textstyle \sum \limits _{{l_2} = 0}^{\infty } {{g_i}( {{l_2}} )( {v( {{k - l_2}} ) + d( {{k - l_2}} )} )} } \Big ]}^*} \Big \} \nonumber \\&\quad ={\textstyle \sum \limits _{{l_1} = 0}^{\infty } {{g_i}( {{l_1}} )\varSigma _v g_i^*( {{l_1}} )} } \nonumber \\&\qquad +\textstyle \sum \limits _{{l_1},l_2 = 0}^{\infty } {{g_i}( {{l_1}} )\mathrm {E}\{ {d( {{k - l_1}} )d^*{{( {{k - l_2}} )}}} \}g_i^*( {{l_2}} )}. \end{aligned}$$
(27)

On the other hand, since \(\varDelta _{i_1}\) and \(\varDelta _{i_2}\) are uncorrelated for \({i_1}\ne {i_2}\), we have

$$\begin{aligned}&\mathrm {E}\{d(k_1)d^*(k_2)\} \nonumber \\&\quad =\mathrm {diag}\{\mathrm {E}\{d_{1}(k_1)d_{1}^*(k_2)\},\cdots ,\mathrm {E}\{d_m(k_1)d_{m}^*(k_2)\} \}, \end{aligned}$$
(28)

where \(d_i\) is the ith element of d. Each summand in the second term on the right-hand side of (27) can then be written as

$$\begin{aligned}&{{g_i}( {{l_1}} )\mathrm {E}\{ {d( {{k - l_1}} )d^*{{( {{k - l_2}} )}}} \}g_i^*( {{l_2}} )} \nonumber \\&\quad =\textstyle \sum \limits _{i_1=1}^{m}{{g_{i i_1}}( {{l_1}} )g_{i i_1}( {{l_2}} )\mathrm {E}\{ {d_{i_1}( {{k - l_1}} )d_{i_1}{{( {{k - l_2}} )}}} \}}, \end{aligned}$$
(29)

where \(g_{i_1 i_2}(k)\) denotes the impulse response of the transfer function \(G_{{i_1 i_2}} := [G(z)]_{i_1 i_2}\). From (8), it holds that

$$\begin{aligned}&\mathrm {E}\{d_{i_1}(k-l_1)d_{i_2}(k-l_2)\} \nonumber \\&\quad = \textstyle \sum \limits _{ j_1=0 }^{{\bar{\tau }}_{i_1}} \textstyle \sum \limits _{ j_2=0 }^{{\bar{\tau }}_{i_1}}\delta ((k-l_1- j_1)-(k - l_2 -j_2)) \beta _{i_1,j_1j_2} \nonumber \\&\qquad \times \mathrm {E}\{u_{i_1}(k-l_1-j_1)u_{i_1}(k-l_2-j_2)\} \nonumber \\&\quad = \textstyle \sum \limits _{j=0}^{{\bar{\tau }}_{i_1}} \beta _{i_1,j(l_1+j-l_2)} \mathrm {E}\{u_{i_1}^2(k-l_1-j)\}. \end{aligned}$$
(30)

Thus, \(\Vert d_{i}\Vert _{\varvec{v}}^2 = \textstyle \sum \limits _{j=0}^{{\bar{\tau }}_{i}} \beta _{i,jj} \Vert u_{i}\Vert _{\varvec{v}}^2\), and, from (27),

$$\begin{aligned} \Vert u_i\Vert _{\varvec{v}}^2 =&\lim _{k \rightarrow \infty } \textstyle \sum \limits _{l_1,l_2=0}^{\infty }\textstyle \sum \limits _{i_1=1}^{m} g_{i i_1}(l_1)g_{i i_1}(l_2) \\&\times \textstyle \sum \limits _{j=0}^{{\bar{\tau }}_{i_1}} \beta _{i_1,j(l_1-l_2+j)} \mathrm {E}\{u_{i_1}^2(k-l_1-j)\}\\ =&\!\!\textstyle \sum \limits _{i_1=1}^{m}\!\!\Big [ \textstyle \sum \limits _{l_1=0}^{\infty }\textstyle \sum \limits _{l_2=0}^{\infty } g_{i i_1}(l_1)g_{i i_1}(l_2)\textstyle \sum \limits _{j=0}^{{\bar{\tau }}_{i_1}} \beta _{i_1,j(l_1\!-\!l_2\!+\!j)} \Big ]\Vert u_{i_1}\Vert _{\varvec{v}}^2\\ =&\!\!\textstyle \sum \limits _{i_1=1}^{m}\!\!\Big [ \textstyle \sum \limits _{l=0}^{\infty }\textstyle \sum \limits _{\lambda =-\infty }^{\infty } g_{i i_1}(l)g_{i i_1}(l-\lambda ) r_{i_1}(\lambda ) \Big ] \Vert u_{i_1}\Vert _{\varvec{v}}^2. \end{aligned}$$

By the definition of the \({\mathcal {H}}_2\) norm, it follows that

$$\begin{aligned}&\textstyle \sum \limits _{l=0}^{\infty }\textstyle \sum \limits _{\lambda =-\infty }^{\infty } g_{i i_1}(l)g_{i i_1}(l-\lambda ) r_{i_1}(\lambda )\\&\quad = \Vert G_{i i_1}(z)\varPhi _{i_1}(z)\Vert _2^2, \end{aligned}$$

provided that \(\varPhi _{i_1}^\sim (z)\varPhi _{i_1}(z) = S_{i_1}(z)\), where \(\varPhi _i\) and \(S_i\) are the ith diagonal elements of \(\varPhi \) and S, respectively. Using the fact that

$$\begin{aligned} {\textstyle \sum \limits _{{l} = 0}^{\infty } {{g_i}( {{l}} )\varSigma _v g_i^*( {{l}} )} } = \textstyle \sum \limits _{i_1=1}^{m}\Vert G_{i i_1}(z)\Vert _2^2 \sigma _{v_{i_1}}^2, \end{aligned}$$

we have

$$\begin{aligned} \Vert u_i\Vert _{\varvec{v}}^2 = \textstyle \sum \limits _{i_1=1}^{m}\Vert G_{i i_1}\Vert _2^2 \sigma _{v_{i_1}}^2 + \textstyle \sum \limits _{i_1=1}^{m}\Vert G_{i i_1}\varPhi _{i_1}\Vert _2^2 \Vert u_{i_1}\Vert _{\varvec{v}}^2. \end{aligned}$$

As a result,

$$\begin{aligned} \begin{bmatrix} \Vert u_1\Vert _{\varvec{v}}^2\\ \vdots \\ \Vert u_m\Vert _{\varvec{v}}^2 \end{bmatrix} =&\begin{bmatrix} {\left\| {{G_{11}}} \right\| _2^2}&{} \cdots &{}{\left\| {{G_{1m}}} \right\| _2^2}\\ \vdots &{}{}&{} \vdots \\ {\left\| {{G_{m1}}} \right\| _2^2}&{} \cdots &{}{\left\| {{G_{mm}}} \right\| _2^2} \end{bmatrix}\begin{bmatrix} \sigma _{v_1}^2 \\ \vdots \\ \sigma _{v_m}^2 \end{bmatrix} \nonumber \\&+\begin{bmatrix} {\left\| {{G_{11}}{\varPhi _1}} \right\| _2^2}&{} \cdots &{}{\left\| {{G_{1m}}{\varPhi _m}} \right\| _2^2}\\ \vdots &{}{}&{} \vdots \\ {\left\| {{G_{m1}}{\varPhi _1}} \right\| _2^2}&{} \cdots &{}{\left\| {{G_{mm}}{\varPhi _m}} \right\| _2^2} \end{bmatrix} \begin{bmatrix} \Vert u_1\Vert _{\varvec{v}}^2\\ \vdots \\ \Vert u_m\Vert _{\varvec{v}}^2 \end{bmatrix}. \end{aligned}$$
(31)

A necessary and sufficient condition for the existence of a finite and unique \(\Vert u_i\Vert _{\varvec{v}}^2\), \(i=1,\ldots ,m\), is therefore that condition (13) holds, noting that \(G_{i_1 i_2}\varPhi _{i_2} = T_{i_1 i_2}W_{i_2}\). Meanwhile, since

$$\begin{aligned} \Vert \varepsilon _i\Vert _{\varvec{v}}^2 = \sigma _{v_i}^2 + \Vert d_i\Vert _{\varvec{v}}^2, \end{aligned}$$

the proof is complete.
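To make the fixed-point structure of (31) concrete, the following numerical sketch (not taken from the paper; all matrices are hypothetical placeholders) forms the two matrices of squared \({\mathcal {H}}_2\) norms, checks the spectral-radius condition under which the linear fixed-point equation has a unique nonnegative solution (this is the role played by condition (13) in the theorem), and, when it holds, solves for the steady-state variances \(\Vert u_i\Vert _{\varvec{v}}^2\).

```python
import numpy as np

# Hypothetical matrices of squared H2 norms from (31); in the paper these are
#   A2[i, j] = ||G_ij||_2^2   and   B2[i, j] = ||G_ij * Phi_j||_2^2.
# The numerical values below are placeholders for illustration.
A2 = np.array([[1.2, 0.3],
               [0.4, 0.9]])
B2 = np.array([[0.35, 0.10],
               [0.15, 0.25]])
sigma_v2 = np.array([1.0, 0.5])          # input noise variances sigma_{v_i}^2

# Mean-square boundedness requires the spectral radius of B2 to be < 1.
rho = max(abs(np.linalg.eigvals(B2)))
print("spectral radius:", rho)

if rho < 1.0:
    # Unique nonnegative fixed point of x = A2 @ sigma_v2 + B2 @ x.
    u_var = np.linalg.solve(np.eye(2) - B2, A2 @ sigma_v2)
    print("steady-state variances ||u_i||^2:", u_var)
else:
    print("not mean-square stable: variances diverge")
```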

Appendix C Proof of Corollary 2

It follows from [18] that the inner part of \(\varGamma ^{\frac{1}{2}}M_{\mathrm{in}}\varGamma ^{-\frac{1}{2}}\) can be constructed as

$$\begin{aligned} M_{\varGamma \mathrm{in}}=\left[ \begin{array}{l|l}{ \dfrac{1}{\lambda ^*}}&{}{{ \dfrac{\sqrt{|\lambda |^2-1}}{\lambda ^*}}\eta _{\varGamma }^*}\\ \hline {{ \dfrac{\sqrt{|\lambda |^2-1}}{\lambda ^*}}\eta _{\varGamma }}&{}{I-\left( 1+ \dfrac{1}{\lambda ^*}\right) \eta _{\varGamma }\eta _{\varGamma }^*}\end{array}\right] , \end{aligned}$$

where \(\eta _{\varGamma } = \frac{\varGamma ^{-\frac{1}{2}}\eta }{\Vert \varGamma ^{-\frac{1}{2}}\eta \Vert _{2}}\). Thus, (21) directly yields

$$\begin{aligned} \pi _i = |W_i(\lambda )|^2 (\lambda ^2 - 1) \end{aligned}$$

which completes the proof.
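As a sanity check (illustrative only, assuming a single real unstable pole \(\lambda >1\), a unit direction vector, and the usual reading of the partitioned array as a state-space realization \(M(z)=D+C(zI-A)^{-1}B\); the numbers are placeholders), the construction above can be verified numerically to be all-pass on the unit circle:

```python
import numpy as np

# Illustrative check that the displayed realization is inner, assuming a single
# real unstable pole lambda > 1 and a unit direction vector eta_Gamma.
lam = 2.0
eta = np.array([[0.6], [0.8]])            # unit column vector (m = 2)

A = np.array([[1.0 / lam]])               # scalar "A" block
B = (np.sqrt(lam**2 - 1) / lam) * eta.T   # 1 x m
C = (np.sqrt(lam**2 - 1) / lam) * eta     # m x 1
D = np.eye(2) - (1.0 + 1.0 / lam) * (eta @ eta.T)

def M(z):
    """Transfer matrix M(z) = D + C (zI - A)^{-1} B of the realization above."""
    return D + C @ np.linalg.inv(z * np.eye(1) - A) @ B

# All-pass check at a few points of the unit circle: M(z)^* M(z) should equal I.
for theta in np.linspace(0.1, np.pi, 5):
    z = np.exp(1j * theta)
    Mz = M(z)
    print(np.allclose(Mz.conj().T @ Mz, np.eye(2), atol=1e-10))
```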


Cite this article

Li, J., Lu, J. & Su, W. Stabilizability of minimum-phase systems with colored multiplicative uncertainty. Control Theory Technol. 20, 382–391 (2022). https://doi.org/10.1007/s11768-022-00108-9
