

Keywords:
discounted Markov control process; deterministic control system; Euler equation; deterministic control system perturbed by a random noise
Summary:
This paper deals with Markov control processes (MCPs) on Euclidean spaces with an infinite horizon and a discounted total cost. First, MCPs arising from deterministic controlled systems are analyzed. For such MCPs, conditions are given under which the equation known in the economics literature as the Euler equation (EE) can be established. An example of an MCP with a deterministic controlled system is also presented, in which the EE, applied within the value iteration algorithm, is used to obtain the optimal value function. Second, MCPs that result from perturbing deterministic controlled systems with a random noise are treated. Conditions are provided under which the optimal value function and the optimal policy of a perturbed controlled system can be obtained in terms of the optimal value function and the optimal policy of the corresponding deterministic controlled system. Finally, several examples illustrating the latter case are presented.
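
As background for the summary, the following is a minimal sketch of the standard discounted dynamic programming setting to which such statements refer; the notation (transition map $F$, one-stage reward $r$, discount factor $\alpha$) and the growth-model convention $x_{t+1} = a_t$ are illustrative assumptions, not taken from this page. For a deterministic controlled system $x_{t+1} = F(x_t, a_t)$ with $\alpha \in (0,1)$, the optimal value function $V$ satisfies the Bellman equation
\[
V(x) = \sup_{a \in A(x)} \bigl\{ r(x,a) + \alpha\, V\bigl(F(x,a)\bigr) \bigr\},
\]
and the value iteration algorithm approximates $V$ as the limit of $V_{n+1}(x) = \sup_{a \in A(x)} \{ r(x,a) + \alpha\, V_n(F(x,a)) \}$. In growth-type models where the control is the next state, $x_{t+1} = a_t$, an interior optimal trajectory $\{x_t\}$ satisfies the Euler equation
\[
\frac{\partial r}{\partial a}(x_t, x_{t+1}) + \alpha\, \frac{\partial r}{\partial x}(x_{t+1}, x_{t+2}) = 0, \qquad t = 0, 1, 2, \ldots
\]
(stated here with a maximization convention; for a discounted total cost, the supremum is replaced by an infimum).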