February 13, 2022
Conference Paper

Safe Reinforcement Learning for Emergency Load Shedding of Power Systems

Abstract

The paradigm shift in the electric power grid necessitates revisiting existing control methods to ensure the grid’s security and resilience. In particular, increased uncertainty and rapidly changing operational conditions in power systems have exposed outstanding issues with the speed, adaptiveness, and scalability of existing power system control methods. On the other hand, the availability of massive real-time data can provide a clearer picture of what is happening in the grid. Recently, deep reinforcement learning (RL) has been adopted as a promising approach that leverages massive data for fast and adaptive grid control. However, like most existing machine learning (ML)-based control techniques, RL control usually cannot guarantee the safety of the power system. In this paper, we introduce a novel method for safe RL-based load shedding of power systems that enhances safe voltage recovery of the electric power grid after experiencing faults. Numerical simulation on the IEEE 39-bus test case demonstrates the effectiveness of the proposed safe RL emergency control, as well as its ability to adapt to faults not seen during training.
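To make the idea of safety-aware RL load shedding concrete, the sketch below shows a generic safety filter wrapped around an RL policy's shedding action. This is purely illustrative and not the paper's method: the voltage model, the 0.95 p.u. threshold, and the candidate shed amounts are all assumptions for the example.

```python
# Hypothetical illustration (not the paper's algorithm): screen an RL
# agent's proposed load-shedding action so that the predicted post-fault
# bus voltage stays above a recovery threshold.

V_MIN = 0.95  # assumed per-unit voltage floor for safe recovery


def predicted_voltage(v_now, shed_fraction):
    # Toy surrogate model: shedding load raises voltage proportionally.
    # A real deployment would query a dynamic power system simulator.
    return v_now + 0.5 * shed_fraction


def safe_action(v_now, rl_action, candidates=(0.0, 0.05, 0.1, 0.2)):
    """Return the RL action if it keeps voltage safe; otherwise fall
    back to the smallest candidate shed fraction that does."""
    if predicted_voltage(v_now, rl_action) >= V_MIN:
        return rl_action
    for a in sorted(candidates):
        if predicted_voltage(v_now, a) >= V_MIN:
            return a
    return max(candidates)  # last resort: shed the most available load


# Example: depressed post-fault voltage of 0.90 p.u.
print(safe_action(0.90, 0.0))   # unsafe RL action is overridden to 0.1
print(safe_action(0.94, 0.05))  # already-safe RL action is kept
```

The filter never blocks a safe policy action, so the RL agent retains its learned behavior in normal conditions while the override only activates near the voltage constraint.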


Citation

Vu, T., S. Mukherjee, T. Yin, R. Huang, J. Tan, and Q. Huang. 2021. "Safe Reinforcement Learning for Emergency Load Shedding of Power Systems." In IEEE Power & Energy Society General Meeting (PESGM 2021), July 26-29, 2021, Washington, DC, 1-5. Piscataway, New Jersey: IEEE. PNNL-SA-157689. doi:10.1109/PESGM46819.2021.9638007