Abstract

We conducted a pilot demonstration of a methodology for developing a new confidence metric that helps operators calibrate their trust in ML event classifiers. The confidence metric was derived from domain expert judgment and was accompanied by a qualitative description explaining the reason for each confidence rating. After learning the boundaries of the ML's performance by studying a subset of events, a subject matter expert (SME) rated his confidence in the ML's ability to classify similar events and provided an explanation for his ratings. To demonstrate this methodology, we developed expert-driven confidence scores for the ML event classifier within ESAMS. We then assessed the accuracy of the expert confidence scores relative to the ML's uncertainty quantification scores. This report describes our methodology, summarizes our findings, and outlines future directions.
Published: October 25, 2023