August 8, 2025
Conference Paper

Interpretability, Explainability and Trust Metrics in Anomaly Detection Method for Power Grid Sensors

Abstract

Phasor Measurement Units (PMUs) are extensively utilized in the real-time monitoring and control of power systems, offering synchronized measurements of key parameters such as voltage and frequency. The widespread availability of PMU data has spurred the development of machine learning (ML)-based, data-driven algorithms for event monitoring, control, and grid stability. Concurrently, model interpretability has emerged as a critical focus of recent research. Understanding how ML-based anomaly and event detection models arrive at their decisions is crucial for enhancing their practical usefulness and aiding future advancements. Our study introduces an interpretability framework specifically designed for binary classification models, which operates without direct access to the original machine learning model. The framework relies solely on the dataset inputs and the corresponding model predictions, and it provides both broad global explanations and detailed local insights. We evaluated a previously developed ensemble anomaly detection and classification model using this framework. Our assessment highlights the framework's effectiveness, comparing the global interpretability obtained from the Global Surrogate Model with the localized explanations provided by the SHapley Additive exPlanations (SHAP) method. Furthermore, the study shows how the local SHAP method complements and enhances global interpretability.
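The model-agnostic local attribution described in the abstract can be illustrated with a minimal sketch. The black-box scorer, its PMU-style features (voltage deviation, frequency deviation, rate of change of frequency), and the baseline values below are hypothetical stand-ins for the paper's ensemble detector, to which the framework assumes only input/prediction access; the Shapley values are computed exactly by enumerating feature coalitions, with absent features replaced by their baseline (e.g., dataset-mean) values.

```python
from itertools import combinations
from math import factorial

# Hypothetical black-box binary anomaly detector over three PMU-style
# features; a stand-in for the ensemble model, accessed only via predictions.
def black_box(v_dev, f_dev, rocof):
    score = 4.0 * v_dev + 2.5 * f_dev + 1.5 * rocof
    return 1 if score > 1.0 else 0

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for one sample.

    Features outside the coalition S are replaced by their baseline values;
    phi[i] is the weighted average marginal effect of revealing feature i.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                z = list(baseline)
                for j in S:
                    z[j] = x[j]
                without_i = model(*z)
                z[i] = x[i]
                with_i = model(*z)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (with_i - without_i)
    return phi

baseline = (0.05, 0.05, 0.05)   # assumed nominal operating point
sample = (0.4, 0.1, 0.05)       # anomaly driven by the voltage feature
phi = shapley_values(black_box, sample, baseline)
```

By the efficiency property, the attributions sum to the change in model output between the sample and the baseline, so for this sample the voltage-deviation feature receives the dominant share of the credit. Practical SHAP implementations approximate this enumeration, which grows exponentially in the number of features.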


Citation

Wang, D., T. Chen, and K. Mahapatra. 2025. Interpretability, Explainability and Trust Metrics in Anomaly Detection Method for Power Grid Sensors. In IEEE PES Grid Edge Technologies Conference & Exposition (Grid Edge 2025), January 21-23, 2025, San Diego, CA, 1-5. Piscataway, New Jersey: IEEE. PNNL-SA-192629. doi:10.1109/GridEdge61154.2025.10887434