To realize the full potential of machine learning in diverse real-world domains, model predictions must be readily interpretable and actionable for the human in the loop. Analysts, who use but do not develop machine learning models, often distrust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytics interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating the model as a black box, and they help analysts interactively probe the high-dimensional binary data space for features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
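To make the idea of model-agnostic, instance-level explanations concrete, the sketch below shows one common approach in this spirit: greedily zeroing out binary features of a single instance until a black-box classifier's prediction flips, so the removed features form an explanation for that prediction. This is a minimal illustration, not the authors' implementation; the toy data, the greedy strategy, and the function name `explain_instance` are assumptions for demonstration only.

```python
# Minimal sketch (illustrative, not the paper's implementation): explain one
# instance of a black-box binary classifier over binary features by greedily
# removing (zeroing) the features that most reduce confidence in the original
# class, until the prediction flips.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy binary data standing in for, e.g., bag-of-words review features.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 20))      # 500 instances, 20 binary features
y = (X[:, 0] | X[:, 3]).astype(int)          # synthetic label rule
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(model, x):
    """Return indices of a greedily chosen feature set whose removal
    flips the black-box prediction for instance x (or all removable
    features if no flip is found)."""
    x = x.copy()
    original = model.predict([x])[0]
    explanation = []
    for _ in range(int(x.sum())):            # at most all active features
        active = np.flatnonzero(x)
        # Remove the active feature whose removal most lowers the
        # predicted probability of the original class.
        probs = []
        for j in active:
            trial = x.copy()
            trial[j] = 0
            probs.append(model.predict_proba([trial])[0][original])
        j_best = active[int(np.argmin(probs))]
        x[j_best] = 0
        explanation.append(int(j_best))
        if model.predict([x])[0] != original:  # prediction flipped
            return explanation
    return explanation

print(explain_instance(model, X[0]))
```

Because the procedure only queries `predict` and `predict_proba`, it treats the classifier strictly as a black box, which is what makes explanations of this kind applicable to any model, random forests included.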
Published: May 14, 2017 | Revised: May 22, 2017
Citation
Tamagnini, P., J.W. Krause, A. Dasgupta, and E. Bertini. 2017. Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations. In Proceedings of the 2nd Workshop on Human-in-the-Loop Data Analytics (HILDA 2017), May 14-19, 2017, Chicago, Illinois, Article No. 6. New York, New York: ACM. PNNL-SA-124676. doi:10.1145/3077257.3077260