January 10, 2023
Staff Accomplishment

Luttman Contributes to AI for U.S. Army Workshop

Workshop explores robust, secure artificial intelligence for army needs

Aaron Luttman
(Photo by Andrea Starr | Pacific Northwest National Laboratory)

Pacific Northwest National Laboratory mathematician Aaron Luttman was part of the organizing committee for “Artificial Intelligence and Justified Confidence,” a National Academies of Sciences, Engineering, and Medicine (NASEM) workshop to explore robust machine learning (ML) and other artificial intelligence (AI) opportunities for the U.S. Army. The workshop, organized by NASEM’s Board on Army Research and Development, took place in September 2022, with speakers presenting on successes and challenges of ML and AI systems development and deployment in academia, industry, and government.

Army commanders and tactical leaders face complex operations in environments with limited connectivity, so it is vital that they can trust the ML and AI algorithms that support decision-making. To be successful, they must know the vulnerabilities and limitations of ML and AI systems. A primary focus of the workshop was to provide a venue for learning how industry and other branches of the military have successfully integrated ML and AI tools.

Luttman’s expertise in ML and AI assurance, specifically the security of and vulnerabilities in AI systems, made him a valuable addition to the organizing committee, which included members from industry, the Department of Defense, and academia.

“It was a unique opportunity to support the Army in their development of a framework for deploying ML and AI systems for command and control,” said Luttman. “Our team brought together a stunning diversity of intellects and viewpoints that address many of the dimensions the military needs to account for when deploying ML and AI in high-consequence operational environments.” 

During the workshop, a broad range of researchers in ML and AI and adjacent fields laid out a framework of considerations the U.S. Army can use in defining justified confidence in specific ML and AI solutions. The framework informs the philosophy of how and when to deploy ML and AI technologies in command and control environments.  

“There are a lot of angles one has to see in order to have confidence in AI systems, including their security, safety, bias and ethics, and legal and policy frameworks, as well as how AI integrates into and interoperates with larger systems of systems,” said Luttman. “And not all high-consequence environments are the same. You have to know the challenges in each battlefield scenario to understand how to balance the elements of system development for specific applications. We wanted to give the Army a sense of what goes on behind the scenes to take ML and AI applications into the field and make them reliable when it really matters.”

Video recordings of a subset of the workshop presentations are now available.

Published: January 10, 2023