April 27, 2023
Journal Article

Reinforcement Learning of Structured Stabilizing Control for Linear Systems with Unknown State Matrix

Abstract

This paper addresses the design of feedback control gains for a continuous-time linear quadratic regulator (LQR) problem in which the gain is constrained to a predefined structure and the state matrix is unknown. We combine ideas from reinforcement learning (RL) with sufficient stability and performance guarantees to design these structured gains using trajectory measurements of states and controls. We first formulate a model-based framework that uses dynamic programming (DP) to embed the structural constraint into the LQR gain computation in the continuous-time setting, and subsequently formulate a policy iteration RL algorithm that removes the requirement of a known state matrix while maintaining the feedback gain structure. The design enables distributed learning control, which is necessary for many large-scale cyber-physical systems. Theoretical guarantees are provided for the stability and convergence of the structured reinforcement learning (SRL) algorithm. We validate our theoretical results with numerical simulations on a multi-agent networked linear time-invariant (LTI) dynamic system.
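To make the structured policy-iteration idea concrete, the sketch below shows a minimal model-based variant for continuous-time LQR: a Kleinman-style evaluation/improvement loop in which each updated gain is projected elementwise onto a prescribed sparsity pattern. This is only an illustration under assumed dynamics, with a known state matrix A and an illustrative block-diagonal mask; it is not the paper's SRL algorithm, which embeds the structure through its DP formulation and then removes the need for A using trajectory data.

```python
# Minimal sketch (not the paper's exact SRL algorithm): model-based Kleinman
# policy iteration for continuous-time LQR, with a hypothetical elementwise
# projection of each gain update onto a prescribed sparsity pattern.
# All matrices, the mask, and the tolerances below are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov


def structured_policy_iteration(A, B, Q, R, K0, mask, n_iter=50, tol=1e-9):
    """Alternate policy evaluation and improvement, masking K to keep the structure."""
    K = K0
    for _ in range(n_iter):
        A_cl = A - B @ K
        # Policy evaluation: solve A_cl^T P + P A_cl + Q + K^T R K = 0
        P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
        # Policy improvement, then projection onto the sparsity pattern
        K_new = np.linalg.solve(R, B.T @ P) * mask
        if np.linalg.norm(K_new - K) < tol:
            K = K_new
            break
        K = K_new
    return K, P


if __name__ == "__main__":
    # Two weakly coupled scalar agents; decentralized (diagonal) gain structure.
    A = np.array([[0.5, 0.1],
                  [0.1, 0.3]])      # unstable open loop (illustrative only)
    B = np.eye(2)
    Q, R = np.eye(2), np.eye(2)
    mask = np.eye(2)                # each agent feeds back only its own state
    K0 = 2.0 * np.eye(2)            # initial stabilizing gain with the structure
    K, P = structured_policy_iteration(A, B, Q, R, K0, mask)
    print("structured gain K:\n", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

Note that the naive projection used here does not by itself guarantee stability or convergence in general; the paper's contribution is precisely to provide such guarantees, and to do so without knowledge of the state matrix.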

Published: April 27, 2023

Citation

Mukherjee, S., and T. Vu. 2023. "Reinforcement Learning of Structured Stabilizing Control for Linear Systems with Unknown State Matrix." IEEE Transactions on Automatic Control 68, no. 3: 1746-1752. PNNL-SA-156272. doi:10.1109/TAC.2022.3155384