In this paper, we present a model-free reinforcement learning (RL)-based technique to solve the power management problem of networked microgrids (MGs) when the utility has only limited or no detailed knowledge of the MG asset models. The proposed method consists of two levels: at the first level, a utility agent uses an approximate RL strategy and aggregated data to determine locational price signals that maximize its profit under incomplete information on the MGs' private data behind their points of common coupling (PCC). The proposed RL strategy enables the utility to adapt to changing system conditions and learn from previous experience, while respecting the data ownership of the MGs. At the second level, the MGs receive the price signals and individually dispatch their generation/storage assets over a look-ahead decision time window, while considering power flow constraints. The performance of the proposed RL technique has been verified under realistic operational scenarios.
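As a toy illustration of the two-level interaction described in the abstract (not the paper's actual algorithm), the sketch below has a utility agent learn a locational price with a simple tabular, bandit-style Q-learning update, observing only each MG's aggregate response; each MG's dispatch is a hypothetical one-line response based on privately known marginal cost and capacity. All prices, costs, capacities, and the retail price are made-up values for illustration.

```python
import random

# Level 1: utility agent learns a price signal via a stateless
#          (bandit-style) Q-learning update, seeing only aggregate
#          MG responses -- no private MG data crosses the PCC.
# Level 2: each MG dispatches its privately known generation in
#          response to the price (greatly simplified here; the paper's
#          MGs solve a look-ahead dispatch with power flow constraints).

PRICES = [10.0, 20.0, 30.0, 40.0]  # candidate price actions ($/MWh), assumed

def mg_dispatch(price, marginal_cost, capacity):
    """An MG exports its full capacity when the price covers its cost."""
    return capacity if price >= marginal_cost else 0.0

def utility_profit(price, mgs, retail_price=45.0, demand=8.0):
    """Utility buys MG exports at `price` and serves `demand` at `retail_price`."""
    supplied = sum(mg_dispatch(price, mc, cap) for mc, cap in mgs)
    supplied = min(supplied, demand)
    return supplied * (retail_price - price)

def learn_price(mgs, episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning over the discrete price actions."""
    rng = random.Random(seed)
    q = {p: 0.0 for p in PRICES}
    for _ in range(episodes):
        p = rng.choice(PRICES) if rng.random() < eps else max(q, key=q.get)
        r = utility_profit(p, mgs)       # only aggregate feedback is used
        q[p] += alpha * (r - q[p])       # incremental value update
    return max(q, key=q.get), q

# (marginal cost $/MWh, capacity MWh) per MG -- private to each MG, assumed
mgs = [(15.0, 3.0), (25.0, 5.0)]
best_price, q = learn_price(mgs)
```

In this toy setting the learned price settles on the value that draws out both MGs' capacity while preserving the utility's margin, which mirrors the abstract's point: the utility can improve its pricing from experience without ever seeing the MGs' internal models.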
Revised: March 26, 2020 | Published: March 2, 2020
Citation
Zhang Q., K. Dehghanpour, Z. Wang, and Q. Huang. 2020. A Learning-based Power Management Method for Networked Microgrids Under Incomplete Information. IEEE Transactions on Smart Grid 11, no. 2: 1193-1204. PNNL-SA-141774. doi:10.1109/TSG.2019.2933502