
Article

Title: Discrete-time Markov control processes with discounted unbounded costs: Optimality criteria (English)
Author: Hernández-Lerma, Onésimo
Author: Muñoz de Ozak, Myriam
Language: English
Journal: Kybernetika
ISSN: 0023-5954
Volume: 28
Issue: 3
Year: 1992
Pages: 191-212
.
Category: math
.
MSC: 49J45
MSC: 60J99
MSC: 93B55
MSC: 93C55
MSC: 93E03
MSC: 93E20
idZBL: Zbl 0771.93054
idMR: MR1174656
.
Date available: 2009-09-24T18:31:44Z
Last updated: 2012-06-05
Stable URL: http://hdl.handle.net/10338.dmlcz/124587
.
Reference: [1] A. Bensoussan: Stochastic control in discrete time and applications to the theory of production. Math. Programm. Study 18 (1982), 43-60. MR 0656937
Reference: [2] D. P. Bertsekas: Dynamic Programming: Deterministic and Stochastic Models. Prentice-Hall, Englewood Cliffs, N.J. 1987. Zbl 0649.93001, MR 0896902
Reference: [3] D. P. Bertsekas, S. E. Shreve: Stochastic Optimal Control: The Discrete Time Case. Academic Press, New York 1978. Zbl 0471.93002, MR 0511544
Reference: [4] R. N. Bhattacharya, M. Majumdar: Controlled semi-Markov models - the discounted case. J. Statist. Plann. Inference 21 (1989), 365-381. Zbl 0673.93089, MR 0995606
Reference: [5] D. Blackwell: Discounted dynamic programming. Ann. Math. Statist. 36 (1965), 226-235. Zbl 0133.42805, MR 0173536
Reference: [6] R. S. Bucy: Stability and positive supermartingales. J. Diff. Eq. 1 (1965), 151-155. Zbl 0203.17505, MR 0191005
Reference: [7] R. Cavazos-Cadena: Finite-state approximations for denumerable state discounted Markov decision processes. Appl. Math. Optim. 11 (1986), 1-26. Zbl 0606.90132, MR 0826849
Reference: [8] M. H. A. Davis: Martingale methods in stochastic control. Lecture Notes in Control and Inform. Sci. 16 (1979), 85-117. Zbl 0409.93052, MR 0547467
Reference: [9] E. B. Dynkin, A. A. Yushkevich: Controlled Markov Processes. Springer-Verlag, New York 1979. MR 0554083
Reference: [10] O. Hernández-Lerma: Lyapunov criteria for stability of differential equations with Markov parameters. Bol. Soc. Mat. Mexicana 24 (1979), 27-48. MR 0579667
Reference: [11] O. Hernández-Lerma: Adaptive Markov Control Processes. Springer-Verlag, New York 1989. MR 0995463
Reference: [12] O. Hernández-Lerma, R. Cavazos-Cadena: Density estimation and adaptive control of Markov processes: average and discounted criteria. Acta Appl. Math. 20 (1990), 285-307. MR 1081591
Reference: [13] O. Hernández-Lerma, J. B. Lasserre: Average cost optimal policies for Markov control processes with Borel state space and unbounded costs. Syst. Control Lett. 15 (1990), 349-356. MR 1078813
Reference: [14] O. Hernández-Lerma, J. B. Lasserre: Value iteration and rolling plans for Markov control processes with unbounded rewards. J. Math. Anal. Appl. (to appear). MR 1224804
Reference: [15] O. Hernández-Lerma, J. B. Lasserre: Error bounds for rolling horizon policies in discrete-time Markov control processes. IEEE Trans. Automat. Control 35 (1990), 1118-1124. MR 1073256
Reference: [16] O. Hernández-Lerma, R. Montes de Oca, R. Cavazos-Cadena: Recurrence conditions for Markov decision processes with Borel state space: a survey. Ann. Oper. Res. 28 (1991), 29-46. MR 1105165
Reference: [17] O. Hernández-Lerma, W. Runggaldier: Monotone approximations for convex stochastic control problems (submitted for publication).
Reference: [18] K. Hinderer: Foundations of Non-Stationary Dynamic Programming with Discrete Time Parameter. Springer-Verlag, Berlin - Heidelberg - New York 1970. Zbl 0202.18401, MR 0267890
Reference: [19] A. Hordijk, H. C. Tijms: A counterexample in discounted dynamic programming. J. Math. Anal. Appl. 39 (1972), 455-457. Zbl 0238.49017, MR 0307666
Reference: [20] H. J. Kushner: Optimal discounted stochastic control for diffusion processes. SIAM J. Control 5 (1967), 520-531. Zbl 0178.20003, MR 0224388
Reference: [21] S. A. Lippman: On the set of optimal policies in discrete dynamic programming. J. Math. Anal. Appl. 24 (1968), 2, 440-445. Zbl 0194.20602, MR 0231615
Reference: [22] S. A. Lippman: On dynamic programming with unbounded rewards. Manag. Sci. 21 (1975), 1225-1233. Zbl 0309.90017, MR 0398535
Reference: [23] P. Mandl: On the variance in controlled Markov chains. Kybernetika 7 (1971), 1, 1-12. Zbl 0215.25902, MR 0286178
Reference: [24] P. Mandl: A connection between controlled Markov chains and martingales. Kybernetika 9 (1973), 4, 237-241. Zbl 0265.60060, MR 0323427
Reference: [25] S. P. Meyn: Ergodic theorems for discrete time stochastic systems using a stochastic Lyapunov function. SIAM J. Control Optim. 27 (1989), 1409-1439. Zbl 0681.60067, MR 1022436
Reference: [26] U. Rieder: On optimal policies and martingales in dynamic programming. J. Appl. Probab. 13 (1976), 507-518. Zbl 0353.90091, MR 0421683
Reference: [27] U. Rieder: Measurable selection theorems for optimization problems. Manuscripta Math. 24 (1978), 115-131. Zbl 0385.28005, MR 0493590
Reference: [28] M. Schäl: Estimation and control in discounted stochastic dynamic programming. Stochastics 20 (1987), 51-71. MR 0875814
Reference: [29] J. Wessels: Markov programming by successive approximations with respect to weighted supremum norms. J. Math. Anal. Appl. 58 (1977), 326-335. Zbl 0354.90087, MR 0441363
Reference: [30] W. Whitt: Approximations of dynamic programs, I. Math. Oper. Res. 4 (1979), 179-185. Zbl 0408.90082, MR 0543929
.

Files

Kybernetika_28-1992-3_3.pdf (1.058 MB, application/pdf)