We investigated the effects of different visual explanations on users' trust in machine learning classification. We proposed three forms of visual explanations of a classification based on identifying relevant training instances. We conducted a user study to evaluate these visual explanations as well as a no-explanation condition. We measured users' trust in a classifier, quantified the effects of these three forms of explanations, and assessed the changes in users' trust. We found that participants trusted the classifier appropriately when an explanation was available. The combination of human, classification algorithm, and understandable explanation made better decisions than either the classifier or the human alone. This work advances the state of the art closer to building trustable machine learning models and informs the design and appropriate use of automated systems.
Revised: May 1, 2020
Published: April 1, 2020
Citation
Yang F., Z. Huang, J. Scholtz, and D.L. Arendt. 2020. How Do Visual Explanations Foster End Users' Appropriate Trust in Machine Learning? In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI 2020), March 17-20, 2020, Cagliari, Italy, 189-201. New York, New York: Association for Computing Machinery (ACM). PNNL-SA-138276. doi:10.1145/3377325.3377480