Conference

PNNL @ AAAI 2026

Researchers from PNNL showcase their work at the 40th AAAI Conference on Artificial Intelligence


Graphic by Kelly Machart | Pacific Northwest National Laboratory

January 20–27, 2026

Singapore 

Researchers from Pacific Northwest National Laboratory (PNNL) will participate in the Fortieth AAAI Conference on Artificial Intelligence (AAAI-26), organized by the Association for the Advancement of Artificial Intelligence, in Singapore.

The AAAI conference promotes research in AI and facilitates scientific exchange across the AI community. AAAI-26 will include technical paper presentations, special tracks, invited speakers, workshops, tutorials, poster sessions, competitions, and more.

At PNNL, our scientists and engineers apply their expertise in chemistry, Earth sciences, and data analytics to push the boundaries of scientific exploration and develop innovative solutions to the most pressing challenges in energy resilience and national security. More recently, PNNL launched the Center for AI, which advances a research agenda exploring the foundations of AI and its future.

PNNL Accepted Papers 

Beyond Protein Language Models: An Agentic LLM Framework for Mechanistic Enzyme Design

Authors: Simone Raugei and Khushbu Agarwal

We present Genie-CAT, a tool-augmented large language model (LLM) system designed to accelerate scientific hypothesis generation in protein design. Using metalloproteins (e.g., ferredoxins) as a case study, Genie-CAT integrates four capabilities—literature-grounded reasoning through retrieval-augmented generation (RAG), structural parsing of Protein Data Bank files, electrostatic potential calculations, and machine-learning prediction of redox properties—into a unified agentic workflow. By coupling natural-language reasoning with data-driven and physics-based computation, the system generates mechanistically interpretable, testable hypotheses linking sequence, structure, and function. In proof-of-concept demonstrations, Genie-CAT autonomously identifies residue-level modifications near [Fe–S] clusters that affect redox tuning, reproducing expert-derived hypotheses in a fraction of the time. The framework highlights how AI agents combining language models with domain-specific tools can bridge symbolic reasoning and numerical simulation, transforming LLMs from conversational assistants into partners for computational discovery.
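The abstract's four capabilities lend themselves to a simple orchestration pattern. Below is a minimal sketch of such a tool-chaining workflow; every function body, name, and number is a hypothetical toy stand-in, not Genie-CAT's actual interface.

# Minimal sketch of a tool-augmented LLM workflow in the spirit of
# Genie-CAT. Every tool below is a toy stand-in: the function names,
# data, and numbers are hypothetical, not the paper's interfaces.

def retrieve_literature(query):
    """RAG step: return literature passages relevant to the query (stubbed)."""
    return ["Residues near the [Fe-S] cluster modulate redox potential."]

def parse_structure(pdb_id):
    """Structural step: return (residue, position) pairs near the cofactor (stubbed)."""
    return [("CYS", 42), ("ASP", 64)]

def electrostatic_potential(residues):
    """Physics step: toy surrogate for an electrostatics calculation."""
    return {r: -0.05 * r[1] for r in residues}

def predict_redox_shift(potentials):
    """ML step: toy surrogate for a learned redox-property model."""
    return {r: 40.0 * v for r, v in potentials.items()}

def propose_hypothesis(pdb_id, question):
    """Chain the four tools into one mechanistically interpretable hypothesis."""
    evidence = retrieve_literature(question)
    residues = parse_structure(pdb_id)
    shifts = predict_redox_shift(electrostatic_potential(residues))
    name, pos = min(shifts, key=shifts.get)  # residue with largest predicted shift
    return (f"Mutate {name}{pos}: predicted redox shift ~{shifts[(name, pos)]:.0f} mV. "
            f"Supporting evidence: {evidence[0]}")

print(propose_hypothesis("1FDN", "Which residues tune [Fe-S] redox potential?"))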

Large Language Model-Based Reward Design for Deep Reinforcement Learning-Driven Autonomous Cyber Defense

Authors: Sayak Mukherjee, Samrat Chatterjee, Emilie Purvine, Ted Fujimoto, and Tegan Emerson

Designing rewards for autonomous cyberattack and defense learning agents in a complex, dynamic environment is a challenging task for subject matter experts. We propose a large language model (LLM)-based reward design approach to generate autonomous cyber defense policies in a deep reinforcement learning (DRL)-driven experimental simulation environment. Multiple attack and defense agent personas were crafted, reflecting heterogeneity in agent actions, to generate LLM-guided reward designs where the LLM was first provided with contextual cyber simulation environment information. These reward structures were then utilized within a DRL-driven attack-defense simulation environment to learn an ensemble of cyber defense policies. Our results suggest that LLM-guided reward designs can lead to effective defense strategies against diverse adversarial behaviors.
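As a concrete illustration of the reward-design step, here is a minimal sketch assuming a hypothetical event-based reward schema: the weights stand in for what an LLM, prompted with environment context and a defender persona, might propose. The paper's simulation environment and actual reward structure are not reproduced here.

# Hedged sketch: plugging an LLM-proposed reward specification into a
# DRL training loop. The event names and weights are illustrative
# stand-ins for what a persona-prompted LLM might return.

LLM_PROPOSED_WEIGHTS = {          # hypothetical LLM output for one defender persona
    "host_compromised": -10.0,    # penalize successful attacker footholds
    "service_disrupted": -3.0,    # penalize defensive actions that break services
    "threat_removed": +5.0,       # reward restoring a compromised host
    "step_cost": -0.1,            # small per-step cost to encourage efficiency
}

def reward(events, weights=LLM_PROPOSED_WEIGHTS):
    """Score one environment step from the event labels it emitted."""
    return sum(weights.get(e, 0.0) for e in events) + weights["step_cost"]

# Usage inside a (stubbed) episode: each step yields a list of event labels.
episode = [["host_compromised"], [], ["threat_removed"]]
total = sum(reward(step) for step in episode)
print(f"episode return: {total:.1f}")  # -5.3 with the weights above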

OPDR: Order-Preserving Dimension Reduction for Semantic Embedding of Multimodal Scientific Data

Authors: Luanzheng Guo and Nathan Tallent

Searching for the K-nearest neighbors (KNN) in multimodal data retrieval is computationally expensive, particularly due to the inherent difficulty in comparing similarity measures across different modalities. Recent advances in multimodal machine learning address this issue by mapping data into a shared embedding space; however, the high dimensionality of these embeddings (hundreds to thousands of dimensions) presents a challenge for time-sensitive vision applications. This work proposes Order-Preserving Dimension Reduction (OPDR), aiming to reduce the dimensionality of embeddings while preserving the ranking of KNN in the lower-dimensional space. One notable component of OPDR is a new measure function to quantify KNN quality as a global metric, based on which we derive a closed-form map between target dimensionality and key contextual parameters. We have integrated OPDR with multiple state-of-the-art dimension-reduction techniques, distance functions, and embedding models; experiments on a variety of multimodal datasets demonstrate that OPDR effectively retains high recall accuracy while significantly reducing computational costs.
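To make the order-preservation idea concrete, the sketch below computes a KNN recall metric before and after reduction, with PCA standing in for the reduction technique; OPDR's actual measure function and closed-form dimensionality map are not reproduced here.

# Hedged sketch: measuring how well a dimension-reduction map preserves
# K-nearest-neighbor sets, in the spirit of a global KNN-quality metric.
# PCA is a stand-in for whichever reduction technique is plugged in.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k):
    """Indices of each point's k nearest neighbors (excluding itself)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    return [set(row[1:]) for row in idx]  # drop the self-match in column 0

def knn_recall(X_high, X_low, k=10):
    """Mean fraction of high-dimensional KNNs retained after reduction."""
    high, low = knn_sets(X_high, k), knn_sets(X_low, k)
    return np.mean([len(h & l) / k for h, l in zip(high, low)])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 512))          # stand-in for 512-d embeddings
X_reduced = PCA(n_components=32).fit_transform(X)
print(f"recall@10 after 512 -> 32 dims: {knn_recall(X, X_reduced):.2f}")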


Careers at PNNL

As a national laboratory that conducts a broad range of research grounded in advanced mathematics, we are always searching for talented individuals who want to be part of our mission.