Interpretability of Model-Driven Deception
PI: Bill Hofer

The Increased Interpretability for Model-Driven Deception project aims to make the machine learning model of industrial control system physical processes more interpretable. The project builds on Pacific Northwest National Laboratory's Shadow Figment system, a model-driven deception technology that deploys decoy industrial control system devices in a cyber environment to entice and confuse potential cyber actors. Making the backend model used by Shadow Figment more interpretable will allow cyber analysts and incident response personnel to deploy defensive deception campaigns more effectively.