May 30, 2017
Conference Paper

ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

Abstract

Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black boxes" because their internal complexity is difficult to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the representations learned by deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built with standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.


Citation

Hohman, F. M., N. O. Hodas, and D. Chau. 2017. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI EA 2017), May 6-11, 2017, Denver, Colorado, 1694-1699. New York, NY: ACM. PNNL-SA-126087. doi:10.1145/3027063.3053103