Much of the nation’s critical infrastructure depends on cyber-physical systems (CPS) to manage essential and complex operational resources. The convergence of physical and cybersecurity processes, together with the increasing integration of CPS with business and internet-based applications, has increased the prevalence and complexity of cyber threats. Red teaming for CPS, the practice of challenging a system’s defenses, involves a group of cybersecurity experts who emulate end-to-end cyberattacks following a set of realistic tactics, techniques, and procedures. However, current red-teaming exercises require a high degree of skill to draft potential adversarial scenarios, which can be time-consuming and costly to test in large-scale CPS with complex operational dynamics, vulnerabilities, and uncertainties. This is where machine learning (ML) can assist.
Three Pacific Northwest National Laboratory (PNNL) researchers, along with academic collaborators from Michigan State University (MSU), recently published a paper that uses automated ML methods to generate relevant adversarial scenarios for CPS. The paper, titled “Automated Adversary Emulation for Cyber-Physical Systems via Reinforcement Learning,” was accepted at the 2020 International Conference on Intelligence and Security Informatics (ISI) and was sponsored under a Laboratory Directed Research and Development grant from the Mathematics for Artificial Reasoning in Science (MARS) initiative at PNNL.
“Our vision is to develop new capabilities that advance domain-aware and autonomous AI systems to reason and understand adversarial behavior to develop more robust security postures for CPS,” said Arnab Bhattacharya, an Operations Research scientist and principal investigator of the MARS project.
In addition to Bhattacharya, the paper was co-authored by Thiagarajan Ramachandran and Chase Dowling of PNNL, and by Dr. Shaunak Bopardikar and his graduate student, Sandeep Banik, of MSU.
In the paper, the researchers propose an automated, domain-aware approach to attack emulation with no human in the loop. A key contribution of the work is a data-driven learning procedure that systematically exploits both cyber and physical vulnerabilities to determine high-quality attack sequences in a cyber-physical network. The artificial intelligence (AI) agent, acting with adversarial motives, interacts with a high-fidelity, simulated CPS environment and learns an attack plan based on the current and future consequences of its actions on system performance.
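The learning loop described above can be sketched as a standard reinforcement-learning interaction. The sketch below uses tabular Q-learning on a toy attack graph; the environment, host names, and reward values are illustrative assumptions, not the paper’s actual simulator or algorithm.

```python
import random

# Toy attack graph (illustrative assumption): an IT workstation can pivot
# to a data historian, which in turn can reach a plant controller (PLC).
ACTIONS = [("workstation", "historian"), ("historian", "plc")]

def step(state, action):
    """Attempt an exploit; reward reflects the physical consequence."""
    src, dst = action
    if src in state and dst not in state:
        # Large reward only when the physical process (PLC) is reached.
        reward = 10.0 if dst == "plc" else 1.0
        return frozenset(state | {dst}), reward, dst == "plc"
    return state, -0.1, False  # wasted attempt

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state = frozenset({"workstation"})  # initial foothold
        done = False
        for _ in range(10):
            if done:
                break
            if rng.random() < eps:  # explore
                a = rng.choice(ACTIONS)
            else:                   # exploit current estimates
                a = max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
            nxt, r, done = step(state, a)
            best_next = max(Q.get((nxt, x), 0.0) for x in ACTIONS)
            Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((state, a), 0.0))
            state = nxt
    return Q

Q = q_learning()
start = frozenset({"workstation"})
best = max(ACTIONS, key=lambda a: Q.get((start, a), 0.0))
print(best)
```

After training, the learned policy prefers the multi-step pivot through the historian over a direct (infeasible) attempt on the PLC, i.e., the agent discovers an attack sequence from consequences alone.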
“We developed decision-making algorithms that could handle both the discrete (cyber) and the continuous (physical) dynamics together in one learning framework – something that was lacking in previous CPS security studies,” said Bhattacharya. “Our approach will enable CPS operators to develop pre-emptive mitigation strategies against complex and stealthy adversarial events that might be hard to detect in real time.”
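A minimal illustration of what “hybrid” means here: a discrete cyber action (compromising a controller) switches the mode of a continuous physical dynamic. The variable names and dynamics below are invented for illustration and do not come from the paper.

```python
def physical_step(level, compromised, dt=0.1):
    """Continuous physical dynamics: a process variable (e.g., a tank
    level) relaxes to its setpoint under normal feedback control, but
    drifts upward once the controller is compromised."""
    setpoint = 1.0
    if compromised:
        return level + 0.5 * dt               # attacker drives the level up
    return level + (setpoint - level) * dt    # normal feedback control

def simulate(attack_at, horizon=100):
    """Discrete cyber state (compromised flag) coupled with the
    continuous physical state (level) in one simulation loop."""
    level, compromised = 1.0, False
    for t in range(horizon):
        if t == attack_at:
            compromised = True  # discrete cyber action flips the mode
        level = physical_step(level, compromised)
    return level

safe = simulate(attack_at=10**9)   # never attacked: level holds at setpoint
attacked = simulate(attack_at=0)   # attacked at t=0: level drifts away
print(safe, attacked)
```

The point of the sketch is that a single learning loop must reason over both kinds of state at once: the discrete cyber action has no immediate cost, but its consequence shows up later in the continuous physical trajectory.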
The theme of this year’s ISI conference, AI and analytics for cybersecurity, attracted a strong pool of submissions.
“The acceptance of the paper was a strong validation of our work and of the interest from domain experts in CPS security,” said Bhattacharya. “It sets the foundation for our ongoing work on using deep learning to improve the scalability of our current approach and on new methods for joint attack and defense exercise planning via machine learning.”