Agatha Murgoci, Aarhus University

"Time Inconsistent Stochastic Control in Continuous Time: Theory and Example"

Abstract

We study a class of continuous-time stochastic control problems which, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We approach these problems within a game-theoretic framework and look for subgame-perfect Nash equilibrium points. For a general controlled continuous-time Markov process and a fairly general objective functional, we derive an extension of the standard Hamilton–Jacobi–Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. The main theoretical result is a verification theorem. As applications of the general theory, we study the mean-variance problem with state-dependent and horizon-dependent risk aversion. We also present a study of time inconsistency within the framework of a general equilibrium production economy.
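To illustrate the kind of objective functional involved (a sketch; the notation below, including the state-dependent risk-aversion coefficient gamma(x), is an assumed example and not taken from the announcement), the mean-variance criterion with state-dependent risk aversion can be written as

\[
  J(t,x,u) \;=\; \mathbb{E}_{t,x}\!\left[X_T^{u}\right] \;-\; \frac{\gamma(x)}{2}\,\operatorname{Var}_{t,x}\!\left(X_T^{u}\right),
  \qquad
  \operatorname{Var}_{t,x}\!\left(X_T^{u}\right) \;=\; \mathbb{E}_{t,x}\!\left[(X_T^{u})^{2}\right] - \left(\mathbb{E}_{t,x}\!\left[X_T^{u}\right]\right)^{2}.
\]

Because the variance term contains the square of a conditional expectation, J is a nonlinear function of conditional expectations; the tower property can no longer be used to nest the problem recursively, so the Bellman optimality principle fails and the problem becomes time-inconsistent in the sense described above.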

Contact person: Peter Norman Sørensen