Conference Paper

CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation

Abstract

Evaluation beyond aggregate performance metrics, e.g., F1-score, is crucial both to establish an appropriate level of trust in machine learning models and to identify future model improvements. In this paper we demonstrate CrossCheck, an interactive visualization tool for rapid cross-model comparison and reproducible error analysis. We describe the tool and discuss its design and implementation details. We then present three use cases (named entity recognition, reading comprehension, and clickbait detection) that show the benefits of using the tool for model evaluation. CrossCheck allows data scientists to make informed decisions when choosing between multiple models, identify when the models are correct and for which examples, investigate whether the models make the same mistakes as humans, evaluate the models’ generalizability, and highlight the models’ limitations, strengths, and weaknesses. Furthermore, CrossCheck is implemented as a Jupyter widget, which allows rapid and convenient integration into data scientists’ model development workflows.

Published: February 27, 2022

Citation

Arendt D.L., Z.H. Shaw, P. Shrestha, E.M. Ayton, M.F. Glenski, and S. Volkova. 2021. CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation. In Workshop on Data Science with Human-in-the-loop: Language Advances (DaSH-LA 2021), co-located with NAACL 2021, June 11, 2021, Virtual, Online, edited by E. Dragut et al., 79-85. Stroudsburg, Pennsylvania: Association for Computational Linguistics. PNNL-SA-151151. doi:10.18653/v1/2021.dash-1.13