Conference Paper

Convex Q-Learning in Continuous Time with Application to Dispatch of Distributed Energy Resources

Abstract

Convex Q-learning is a recent approach to reinforcement learning, motivated by the possibility of a firmer convergence theory and of making greater use of a priori knowledge regarding the structure of the policy or value function. This paper explores algorithm design in the continuous-time domain, with a finite-horizon optimal control objective. The main contributions are: (i) the new Q-ODE, a model-free characterization of the Hamilton-Jacobi-Bellman equation; (ii) a formulation of Convex Q-learning that avoids approximations appearing in prior work, with the Bellman error used in the algorithm defined by filtered measurements, which is necessary in the presence of measurement noise; (iii) a proof that Convex Q-learning with linear function approximation is a convex program, and that its constraint region is bounded subject to an exploration condition on the training input; (iv) an illustration of the theory in an application to resource allocation for distributed energy resources, for which the theory is ideally suited.
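The paper's algorithm is posed in continuous time with a finite horizon; purely as background for contribution (iii), the sketch below illustrates the discrete-time, infinite-horizon convex-program (linear-programming) view of Q-learning with linear function approximation that motivates the approach. It is not the paper's algorithm, and all names and problem data (psi, cost, next_state, beta) are illustrative assumptions.

```python
# Minimal sketch: the convex-program view of Q-learning with linear
# function approximation, in a simplified discrete-time setting.
# All problem data below is synthetic and purely illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy problem: n_s states, n_a actions, discount beta, random stage cost,
# deterministic next-state map (stand-in for observed transitions).
n_s, n_a, beta = 5, 3, 0.95
cost = rng.uniform(0.0, 1.0, size=(n_s, n_a))
next_state = rng.integers(0, n_s, size=(n_s, n_a))

# Linear function approximation: Q_theta(x, u) = theta^T psi(x, u),
# with a small random feature map psi of dimension d.
d = 8
psi = rng.normal(size=(n_s, n_a, d))

theta = cp.Variable(d)

# Bellman inequality constraints: for every (x, u),
#   Q_theta(x, u) <= cost(x, u) + beta * min_{u'} Q_theta(x', u').
# The right-hand side is a minimum of affine functions of theta, hence
# concave, so each constraint defines a convex set in theta.
constraints = []
objective_terms = []
for x in range(n_s):
    for u in range(n_a):
        q_xu = psi[x, u] @ theta
        x_next = next_state[x, u]
        q_next = cp.min(cp.hstack([psi[x_next, up] @ theta for up in range(n_a)]))
        constraints.append(q_xu <= cost[x, u] + beta * q_next)
        objective_terms.append(q_xu)

# Maximize the Q estimates subject to the Bellman inequalities: the
# classical linear-programming route to dynamic programming, here yielding
# a convex program in the parameter theta.
problem = cp.Problem(cp.Maximize(cp.sum(cp.hstack(objective_terms))), constraints)
problem.solve()
print("optimal theta:", theta.value)
```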

Published: July 26, 2024

Citation

Lu, F., J. Mathias, S. Meyn, and K. Kalsi. 2023. Convex Q-Learning in Continuous Time with Application to Dispatch of Distributed Energy Resources. In Proceedings of the 62nd IEEE Conference on Decision and Control (CDC 2023), December 13-15, 2023, Singapore, 1529-1536. Piscataway, New Jersey: IEEE. PNNL-SA-197990. doi:10.1109/CDC49753.2023.10383620