March 19, 2020
Journal Article

Adaptive Power System Emergency Control using Deep Reinforcement Learning

Qiuhua Huang
Renke Huang
Weituo Hao
Jie Tan
Rui Fan
Zhenyu Huang


Power system emergency control is generally regarded as the final safety net for grid security and resiliency. Existing emergency control schemes are usually designed off-line, based either on conceived “worst-case” scenarios or on a few typical operating scenarios. As uncertainty and variability in modern electrical grids increase, these schemes face significant adaptiveness and robustness issues. To address these challenges, this paper proposes, for the first time, a novel adaptive emergency control scheme using deep reinforcement learning (DRL), leveraging DRL's high-dimensional feature-extraction and nonlinear generalization capabilities for complex systems with high-dimensional variations. Furthermore, an open-source platform named DeepGrid has been designed to assist the development and benchmarking of DRL for power system emergency control. Details of the platform, the DRL algorithms, and emergency control schemes based on generator dynamic braking and under-voltage load shedding are presented. Extensive case studies on both the two-area, four-machine system and the IEEE 39-bus system demonstrate the excellent performance and robustness of the proposed schemes.
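To illustrate the idea of casting emergency control as a reinforcement learning task, the toy sketch below frames under-voltage load shedding as a Gym-style environment: observations are a bus voltage and the remaining load, actions are discrete load-shedding fractions, and the reward penalizes both voltage violations and the amount of load shed. All class names, dynamics, and reward weights here are illustrative assumptions for exposition; they are not the paper's actual platform API or grid model.

```python
import random

class LoadSheddingEnv:
    """Toy sketch: under-voltage load shedding as an RL environment.

    The voltage-recovery dynamics and reward shaping below are
    simplified assumptions, not the paper's grid simulation.
    """

    ACTIONS = (0.0, 0.05, 0.10, 0.20)  # fraction of load to shed per step

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Post-fault state: depressed bus voltage (per unit), full load.
        self.voltage = 0.80
        self.load = 1.0
        self.steps = 0
        return self._obs()

    def _obs(self):
        # Observation: bus voltage and remaining load fraction.
        return (self.voltage, self.load)

    def step(self, action_idx):
        shed = self.ACTIONS[action_idx]
        self.load *= (1.0 - shed)
        # Crude recovery model: voltage rises as load is shed,
        # perturbed by a small random disturbance.
        self.voltage = min(1.0, self.voltage + 0.5 * shed
                           + self.rng.uniform(-0.005, 0.005))
        # Reward: penalize voltage below 0.95 p.u. and load shed.
        violation = max(0.0, 0.95 - self.voltage)
        reward = -100.0 * violation - 10.0 * shed
        self.steps += 1
        done = self.voltage >= 0.95 or self.steps >= 20
        return self._obs(), reward, done

env = LoadSheddingEnv(seed=1)
obs = env.reset()
done = False
while not done:
    # Placeholder policy: always shed 5%. A DRL agent would instead
    # learn a state-dependent policy over these actions.
    obs, reward, done = env.step(1)
print(f"final voltage={obs[0]:.3f} p.u., remaining load={obs[1]:.2f}")
```

A trained agent would replace the fixed policy in the rollout loop, learning to shed just enough load to restore voltage; the benchmarking platform described in the paper serves exactly this environment-agent interface role, with a real power system simulator behind it.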

Revised: March 19, 2020 | Published: March 2, 2020


Huang Q., R. Huang, W. Hao, J. Tan, R. Fan, and Z. Huang. 2020. "Adaptive Power System Emergency Control using Deep Reinforcement Learning." IEEE Transactions on Smart Grid 11, no. 2:1171-1182. PNNL-SA-140241. doi:10.1109/TSG.2019.2933191
