December 19, 2023

Guiding Artificial Intelligence Terminology Standards

Researchers comment on the “Adversarial Machine Learning” white paper from NIST

Pacific Northwest National Laboratory undertakes a wide variety of artificial intelligence research.

(Image by Donald Jorgensen | Pacific Northwest National Laboratory)

As artificial intelligence (AI) rapidly evolves, establishing a common language to describe approaches and techniques within the field becomes increasingly important. Thus, in response to a white paper recently released by the National Institute of Standards and Technology (NIST), researchers from Pacific Northwest National Laboratory (PNNL) provided commentary to help shape this language.

While the white paper focused on adversarial machine learning, the PNNL team urged the authors to broaden their terminology to encompass the holistic field of secure AI.

“As the institute for standards, NIST has a huge opportunity to define the terms used in artificial intelligence,” said Luke Richards, PNNL data scientist and first author on the public commentary. “Researchers and research sponsors alike turn to NIST to obtain precise meanings of terms, and this common understanding can provide a foundation for testing and evaluation frameworks.”

Richards worked with fellow PNNL researchers Charles Godfrey, Elise Bishoff, Kyle Bingman, Jeremiah Rounds, Theodore Nowak, Rob Jasper, Marisa DeCillis, and Courtney (Court) Corley to draft a series of comments to shape future versions of the NIST AI Risk Management Framework.

“The NIST framework has potential far-reaching impacts across government, academic, and other sectors,” said Bishoff. “As experts in AI assurance, our team worked to provide commentary to make this framework as impactful as possible.”

PNNL researchers are well-equipped to offer such feedback by drawing on the Laboratory’s diverse strengths in AI. From physics-informed machine learning research to co-designing novel architectures for AI applications, PNNL supports a wide variety of AI research projects. These include MegaAI to build foundation models for science and security, Mathematics for Artificial Reasoning in Science, and Adaptive Tunability for Synthesis and Control via Autonomous Learning on Edge.

The newly established Center for AI @PNNL—led by Corley—coordinates AI-related research at the Lab to advance the frontiers of the field. The Center also works to apply AI to mission areas in science, security, and energy resilience, and to advocate for access to world-class operational infrastructure and AI capabilities.