Research into the explanation of machine learning models, i.e., explainable AI (XAI), has grown exponentially alongside deep artificial neural networks over the past decade. For historical reasons, explanation and trust have been intertwined. However, this focus on trust is too narrow: it has led the research community astray from tried-and-true empirical methods that produce more defensible scientific knowledge about people and explanations. To address this, we contribute a practical path forward for researchers in the XAI field. We recommend that researchers focus on the utility and impact of their explanations rather than on trust. We outline five broad use cases in which explanations are useful and, for each, describe pseudo-experiments that rely on objective empirical measurements and falsifiable hypotheses. We believe this experimental rigor is necessary to contribute to scientific knowledge in the field of XAI.
Published: December 29, 2020 | Revised: February 9, 2021
Citation
Davis, B. F., M. F. Glenski, W. I. Sealy, and D. L. Arendt. 2020. Measure Utility, Gain Trust: Practical Advice for XAI Researchers. In IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX 2020), October 25-30, 2020, Salt Lake City, UT, 1-8. Piscataway, New Jersey: IEEE. PNNL-SA-155588. doi:10.1109/TREX51495.2020.00005