December 7, 2024
Journal Article

Role of Reinforcement Learning for Risk-Based Robust Control of Cyber-Physical Energy Systems

Abstract

Critical infrastructures such as energy, transportation, and water represent cyber-physical systems (CPS) with integrated information flow and physical operations that are vulnerable to natural and targeted failures. Safe, secure, and reliable operation and control of such systems are critical to ensure societal well-being and economic prosperity. Automated control is key for real-time operations and may be mathematically cast as a sequential decision-making problem under uncertainty. The emergence of data-driven techniques for decision making under uncertainty, such as reinforcement learning (RL), has led to promising advances in addressing sequential decision-making problems for risk-based robust control. This paper systematically analyzes the applicability of four types of RL methods (model-free, model-based, hybrid model-free and model-based, and hierarchical) for risk-based robust CPS control. Problem features and solution stability for the RL methods are also discussed. A motivating numerical example with a notional CPS control system is described, and results using a risk-based, model-free RL algorithm (Q-learning) are presented. Six key insights for future research and broader adoption of RL methods are identified, with specific emphasis on problem features, algorithmic explainability, and solution stability. A goal of this paper is also to encourage future research and discussion on extending the applicability of RL methods for risk-based robust and automated CPS control solutions.
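The abstract's numerical example uses Q-learning, a model-free RL algorithm. As a minimal illustration of tabular Q-learning only, the sketch below learns a maintenance policy for a hypothetical three-state component; the states, actions, transition probabilities, and rewards are invented for this sketch and do not reproduce the paper's notional CPS example or its risk-based formulation.

```python
import random

random.seed(0)

# Hypothetical component-condition states and control actions
# (not from the paper; chosen for illustration only).
STATES = ["nominal", "degraded", "failed"]
ACTIONS = ["operate", "repair"]

def step(state, action):
    """Hypothetical stochastic transition and reward model:
    operating risks degradation; repairing pays a cost but restores
    the component to nominal condition."""
    if action == "repair":
        return "nominal", -1.0                # repair cost
    # action == "operate": stochastic degradation
    if state == "nominal":
        return ("degraded" if random.random() < 0.3 else "nominal"), 1.0
    if state == "degraded":
        return ("failed" if random.random() < 0.5 else "degraded"), 0.5
    return "failed", -5.0                     # operating a failed component

alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

state = "nominal"
for _ in range(20000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # standard Q-learning temporal-difference update
    Q[(state, action)] += alpha * (
        reward + gamma * max(Q[(nxt, a)] for a in ACTIONS) - Q[(state, action)]
    )
    state = nxt

# Greedy policy extracted from the learned Q-table
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Under these assumed dynamics, the learned greedy policy operates a nominal component and repairs a failed one; the paper's risk-based variant would further shape the reward or objective to account for failure risk, which this sketch omits.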

Citation

Du Y., S. Chatterjee, A. Bhattacharya, A. Dutta, and M. Halappanavar. 2023. Role of Reinforcement Learning for Risk-Based Robust Control of Cyber-Physical Energy Systems. Risk Analysis 43, no. 11:2280-2297. PNNL-SA-163685. doi:10.1111/risa.14104