January 13, 2023
Journal Article

Learning and Fast Adaptation for Grid Emergency Control via Deep Meta Reinforcement Learning

Abstract

As power systems undergo a significant transformation with more uncertainty, less inertia, and operation closer to their limits, the risk of large outages is increasing. Thus, there is an imperative need to enhance grid emergency control to maintain system reliability and security. Toward this end, great progress has been made in recent years in developing deep reinforcement learning (DRL) based grid control solutions. However, existing DRL-based solutions have two main limitations: 1) they do not handle a wide range of grid operating conditions, system parameters, and contingencies well; 2) they generally lack the ability to adapt quickly to new grid operating conditions, system parameters, and contingencies, limiting their applicability in real-world settings. In this paper, we mitigate these limitations by developing a novel deep meta-reinforcement learning (DMRL) algorithm. The DMRL algorithm combines meta strategy optimization with DRL and trains policies modulated by a latent space that can quickly adapt to new scenarios. We test the developed DMRL algorithm on the IEEE 300-bus system. Using the proposed method, we demonstrate fast adaptation of the meta-trained DRL policies with latent variables to new operating conditions and scenarios, achieving superior performance compared to state-of-the-art DRL and model predictive control (MPC) methods.
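To make the latent-space idea in the abstract concrete, the sketch below shows one possible shape of a latent-conditioned policy with fast adaptation: the (meta-trained) policy weights stay frozen, and only a low-dimensional latent vector is searched at adaptation time. This is a minimal illustration, not the authors' implementation; the environment dynamics, reward, dimensions, and the random-search adaptation step are all placeholder assumptions standing in for the grid emergency-control simulation and the paper's meta strategy optimization.

```python
# Minimal sketch of a latent-conditioned policy with fast adaptation.
# Assumptions (not from the paper): toy linear dynamics and quadratic cost
# stand in for the grid simulator; random weights stand in for meta-trained
# policy parameters; latent adaptation is a simple random search.

import numpy as np

STATE_DIM, ACTION_DIM, LATENT_DIM = 8, 2, 3
rng = np.random.default_rng(0)

# Frozen "meta-trained" policy parameters (placeholders).
W = rng.normal(size=(ACTION_DIM, STATE_DIM + LATENT_DIM))


def policy(state, z):
    """Deterministic latent-conditioned policy: a = tanh(W [s; z])."""
    return np.tanh(W @ np.concatenate([state, z]))


def rollout_return(z, episode_len=50):
    """Evaluate a latent z with a short rollout on the toy surrogate task."""
    state = rng.normal(size=STATE_DIM)
    total = 0.0
    for _ in range(episode_len):
        action = policy(state, z)
        # Placeholder dynamics and reward; the real task would score, e.g.,
        # voltage recovery after a contingency.
        state = 0.9 * state + 0.1 * np.pad(action, (0, STATE_DIM - ACTION_DIM))
        total += -np.sum(state ** 2)
    return total


def adapt_latent(num_candidates=32):
    """Fast adaptation: search only over the latent z, keeping W frozen."""
    candidates = rng.normal(size=(num_candidates, LATENT_DIM))
    returns = [rollout_return(z) for z in candidates]
    return candidates[int(np.argmax(returns))]


if __name__ == "__main__":
    z_star = adapt_latent()
    print("adapted latent:", z_star, "return:", rollout_return(z_star))
```

Because only the latent vector is tuned for a new operating condition or contingency, adaptation requires far fewer interactions than retraining the full policy, which is the motivation for this design in the abstract.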

Published: January 13, 2023

Citation

Huang R., Y. Chen, T. Yin, Q. Huang, J. Tan, W. Yu, X. Li, et al. 2022. Learning and Fast Adaptation for Grid Emergency Control via Deep Meta Reinforcement Learning. IEEE Transactions on Power Systems 37, no. 6: 4168-4178. PNNL-SA-179153. doi:10.1109/TPWRS.2022.3155117