Pacific Northwest National Laboratory has developed a tool suite of interactive analytics that can be rapidly integrated into analyst workflows to empirically analyze and build qualitative understanding of AI model performance jointly across multiple dimensions: accountability, transparency, fairness, and robustness.
Unlike solutions designed to support model developers during training, our interactive analytics are applied by model users at both the test-and-evaluation and the deployment-and-integration stages of the machine learning model development life cycle.
Our analytics focus on auditing model behavior for comparison and selection, confidence analysis, benchmarking and robustness analysis, and interactive investigations of transparency and fairness. Analysts interact with model inputs, outputs, and operational data, rather than model parameters, to investigate model behavior and generate a meta-report shaped by their interactive investigations.
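As an illustration of this output-level approach, the sketch below summarizes a model's behavior per data slice using only its inputs, outputs, and operational metadata. It is a minimal sketch, not PNNL's implementation: the column names (label, prediction, confidence, group), the binary-classification setting, and the use of pandas and scikit-learn are all assumptions made for the example.

```python
# Minimal sketch of output-level model auditing. Assumes a pandas
# DataFrame of operational data with hypothetical columns:
#   "label"      - ground-truth class (binary, for f1_score's default)
#   "prediction" - the model's predicted class
#   "confidence" - the model's confidence in its prediction
# plus a grouping attribute (e.g., a demographic or data-source slice).
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

def audit_by_slice(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Summarize model behavior per slice from inputs/outputs alone,
    without touching model parameters."""
    rows = []
    for group, slice_df in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(slice_df),
            "accuracy": accuracy_score(slice_df["label"], slice_df["prediction"]),
            "f1": f1_score(slice_df["label"], slice_df["prediction"]),
            "mean_confidence": slice_df["confidence"].mean(),
        })
    return pd.DataFrame(rows)

# Example: audit two candidate models on the same operational data,
# then compare the per-slice reports for model selection.
# report_a = audit_by_slice(outputs_model_a, "group")
# report_b = audit_by_slice(outputs_model_b, "group")
```

Comparing such per-slice reports across candidate models surfaces disparities (e.g., a slice where accuracy drops or confidence is miscalibrated) that a single aggregate score would hide.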
Our capabilities allow analysts to answer important questions about model performance that go beyond the F1 score, including the following:
Our approach enables informed model selection, benchmarking, and comparison for automated machine-learning-driven tools, as well as the identification and understanding of resilient, unbiased AI models for situational awareness or insight discovery. Further, our solution: