Expert-Derived Confidence
PI: Corey Fallon
Objective
Develop Expert-Derived Confidence (EDC) scores and explanations that support appropriate reliance on machine learning (ML) models by focusing on the following:
- Domain experts learning the performance boundaries of an ML model.
- Experts predicting model performance on a subset of training data, and explaining those predictions, to generate scores and explanations.
- Generalizing the scores and explanations to unscored data using similarity metrics, and evaluating their ability to predict model performance.
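The generalization step above can be sketched as a similarity-weighted lookup: each unscored instance inherits a score from its most similar expert-scored cases. This is an illustrative sketch only; the distance metric, the inverse-distance weighting, and the parameter `k` are assumptions, not the project's documented method.

```python
import numpy as np

def generalize_edc_scores(scored_X, scored_edc, unscored_X, k=3):
    """Assign each unscored instance the similarity-weighted average of the
    EDC scores of its k most similar expert-scored instances.

    Sketch under assumed choices: Euclidean distance as the similarity
    metric and inverse-distance weighting of neighbor scores.
    """
    scored_X = np.asarray(scored_X, dtype=float)
    scored_edc = np.asarray(scored_edc, dtype=float)
    unscored_X = np.asarray(unscored_X, dtype=float)

    generalized = []
    for x in unscored_X:
        dists = np.linalg.norm(scored_X - x, axis=1)   # distance to each scored case
        nearest = np.argsort(dists)[:k]                # indices of k closest cases
        weights = 1.0 / (dists[nearest] + 1e-9)        # closer cases weigh more
        generalized.append(np.average(scored_edc[nearest], weights=weights))
    return np.array(generalized)
```

A point close to cases the experts scored as high-confidence inherits a high score, and vice versa, so the expert judgments extend to data the experts never saw.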
Overview
The Expert-Derived Confidence (EDC) project is focused on generating confidence scores and explanations to support power grid operators in their adoption and potential use of a machine learning classifier to improve grid reliability. The primary goal is to include expert-derived scores and associated explanations as part of a machine learning classifier’s recommendation to an end user, in order to help a user calibrate their reliance on the classifier.
Impact
- Operators and analysts need support in deciding when to rely on ML tools.
- Model-generated confidence scores alone may not be sufficient to guide reliance, due to inaccuracies and a lack of transparency.
- EDC scores and explanations will provide a novel approach for improving the understanding of ML performance.
- These scores go beyond precision and recall to frame uncertainty in an analyst’s own terms.
Publications and Presentations
- Fallon, C., & Yin, T. (2023). Method for Generating Expert Derived Confidence Scores. To be published in Proceedings of the 2023 International Meeting of the Human Factors and Ergonomics Society, Washington, D.C.