Journal Article

Learning Constrained Parametric Differentiable Predictive Control Policies With Guarantees

Abstract

We present differentiable predictive control (DPC), a method for learning constrained adaptive neural control policies and dynamical models of unknown linear systems. DPC provides an approximate data-driven solution to the explicit Model Predictive Control (MPC) problem as a scalable alternative to computationally expensive multiparametric programming solvers. DPC is formulated as a constrained deep learning problem whose architecture is inspired by the structure of classical MPC. The neural control policy is optimized via automatic differentiation of an MPC-inspired loss function through a differentiable closed-loop system model. This novel solution approach can optimize adaptive neural control policies for time-varying references while obeying state and input constraints, without first requiring an MPC controller. We show that DPC can learn stabilizing constrained neural control policies for systems with unstable dynamics. Moreover, we provide sufficient conditions for asymptotic stability of generic closed-loop system dynamics with neural feedback policies. In simulation case studies, we assess the performance of the proposed DPC method in terms of reference tracking, robustness, and computational and memory footprints against classical model-based and data-driven control approaches. We demonstrate that DPC scales linearly with problem size, in contrast to the exponential scaling of classical explicit MPC based on multiparametric programming.
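
The abstract describes optimizing a neural policy by differentiating an MPC-like loss through a rolled-out closed-loop system model. A minimal PyTorch-style sketch of that general idea is given below; the dimensions, dynamics matrices, constraint bounds, and penalty weights are hypothetical placeholders for illustration, not values or code from the paper.

```python
# Minimal sketch of the DPC training idea, not the authors' implementation.
# Assumptions (illustrative only): a known discrete-time linear model
# x_{k+1} = A x_k + B u_k, box constraints handled via soft penalties,
# and a reference-tracking objective over an N-step rollout.
import torch
import torch.nn as nn

torch.manual_seed(0)

nx, nu, N = 2, 1, 10                          # state dim, input dim, horizon
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])    # example dynamics matrix
B = torch.tensor([[0.0], [0.1]])              # example input matrix
x_min, x_max = -5.0, 5.0                      # state box constraints
u_min, u_max = -1.0, 1.0                      # input box constraints

# Parametric neural policy: maps current state and reference to a control action.
policy = nn.Sequential(nn.Linear(nx + nx, 32), nn.ReLU(), nn.Linear(32, nu))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout_loss(x0, ref):
    """MPC-inspired loss: tracking + control effort + constraint penalties,
    accumulated over a differentiable closed-loop rollout."""
    x, loss = x0, 0.0
    for _ in range(N):
        u = policy(torch.cat([x, ref], dim=-1))
        x = x @ A.T + u @ B.T                                  # model step
        loss = loss + ((x - ref) ** 2).sum(-1).mean()          # tracking
        loss = loss + 0.01 * (u ** 2).sum(-1).mean()           # control effort
        loss = loss + 10.0 * (torch.relu(x - x_max) + torch.relu(x_min - x)).sum(-1).mean()
        loss = loss + 10.0 * (torch.relu(u - u_max) + torch.relu(u_min - u)).sum(-1).mean()
    return loss

# Train on sampled initial conditions and references (the "parametric" problem data).
for step in range(2000):
    x0 = 4.0 * (2 * torch.rand(256, nx) - 1)
    ref = 2.0 * (2 * torch.rand(256, nx) - 1)
    opt.zero_grad()
    loss = rollout_loss(x0, ref)
    loss.backward()          # autodiff through the closed-loop rollout
    opt.step()
```

After training, the policy can be evaluated in a single forward pass per time step, which is the source of the favorable online computational footprint discussed in the abstract; the constraint handling above uses simple penalty terms and does not reproduce the paper's stability guarantees.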

Published: September 17, 2024

Citation

Drgona, J., A. R. Tuor, and D. L. Vrabie. 2024. Learning Constrained Parametric Differentiable Predictive Control Policies With Guarantees. IEEE Transactions on Systems, Man, and Cybernetics: Systems 54, no. 6: 3596-3607. PNNL-SA-162577. doi:10.1109/TSMC.2024.3368026