
Annual Conference on Neural Information Processing Systems (NeurIPS) 2025

Join Pacific Northwest National Laboratory (PNNL) at the NeurIPS Conference in San Diego, California!  


Graphic created by Melanie Hess-Robinson | Pacific Northwest National Laboratory 

December 2–7, 2025

San Diego, California 

Data scientists and engineers from PNNL will be at the Thirty-Ninth Conference on Neural Information Processing Systems (NeurIPS) to present talks, lead workshops, and participate in competitions.

The annual NeurIPS conference brings together researchers from many fields, including machine learning (ML), neuroscience, the life sciences, and statistics. Remarkable advances in ML and artificial intelligence (AI) have opened a new era of applications that touch many aspects of daily life. From situational awareness and threat detection to interpreting online signals to ensure system reliability, researchers at PNNL are at the forefront of scientific exploration and national security, harnessing the power of AI to tackle complex scientific problems.

PNNL Accepted Papers

A Connection Between Score Matching and Local Intrinsic Dimension

PNNL authors: Eric Yeats, Aaron Jacobson, Darryl Hannan, Yiran Jia, Timothy Doster, Henry Kvinge, and Scott Mahan

The local intrinsic dimension (LID) of data is a fundamental quantity in signal processing and learning theory, but quantifying the LID of high-dimensional, complex data has historically been a challenging task. Recent works have discovered that diffusion models capture the LID of data through the spectra of their score estimates and through the rate of change of their density estimates under various noise perturbations. While these methods can accurately quantify LID, they require either many forward passes of the diffusion model or gradient computations, limiting their applicability in compute- and memory-constrained scenarios.

We show that the LID is a lower bound on the denoising score matching loss, motivating use of the denoising score matching loss as an LID estimator. Moreover, we show that the equivalent implicit score matching loss also approximates LID via the normal dimension and is closely related to a recent LID estimator, FLIPD. Our experiments on a manifold benchmark and with Stable Diffusion 3.5 indicate that the denoising score matching loss is a highly competitive and scalable LID estimator, achieving superior accuracy and a smaller memory footprint as problem size and quantization level increase.
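To give a flavor of that first result, here is a minimal sketch (ours, not the authors' released code) that treats the Monte Carlo denoising score matching loss at a small noise level as a per-sample LID estimate; the `score_model(x, sigma)` interface is an assumed placeholder for a pretrained diffusion score network.

```python
# Minimal sketch: per-sample denoising score matching (DSM) loss as a local
# intrinsic dimension (LID) estimate. Assumes `score_model(x, sigma)` returns
# an estimate of the score grad_x log p_sigma(x) for data noised with
# N(0, sigma^2 I), and that sigma is small.
import torch

@torch.no_grad()
def dsm_lid_estimate(score_model, x, sigma=0.01, n_mc=64):
    """Estimate LID(x) as the Monte Carlo DSM loss at a small noise level.

    Per the paper's result, the LID lower-bounds the per-sample DSM loss
    E_eps || sigma * s(x + sigma * eps, sigma) + eps ||^2, so for small
    sigma the loss itself serves as an LID estimate that needs only
    forward passes, no gradients.
    """
    xb = x.unsqueeze(0).expand(n_mc, *x.shape)    # replicate the query point
    eps = torch.randn_like(xb)                    # Gaussian perturbations
    score = score_model(xb + sigma * eps, sigma)  # estimated scores
    residual = sigma * score + eps                # residual left unexplained
    return residual.flatten(1).pow(2).sum(dim=1).mean().item()
```

Because the estimate uses only forward passes, it avoids the gradient computations that limit earlier diffusion-based LID estimators in memory-constrained settings.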

A Counterfactual Semantics for Hybrid Dynamical Systems

PNNL author: Jeremy Zucker

Models of hybrid dynamical systems are widely used to answer questions about the causes and effects of dynamic events in time. Unfortunately, existing causal reasoning formalisms lack support for queries involving the dynamically triggered, discontinuous interventions that characterize hybrid dynamical systems. This mismatch can lead to ad hoc and error-prone causal analysis workflows in practice. To bridge the gap between the needs of hybrid systems users and current causal inference capabilities, we develop a rigorous counterfactual semantics by formalizing interventions as transformations to the constraints of hybrid systems. Unlike interventions in a typical structural causal model, however, interventions in hybrid systems can easily render the model ill-posed. Thus, we identify mild conditions under which our interventions maintain solution existence, uniqueness, and measurability by making explicit connections to established hybrid systems theory. To illustrate the utility of our framework, we formalize a number of canonical causal estimands and explore a case study on the probabilities of causation with applications to fishery management. Our work simultaneously expands the modeling possibilities available to causal inference practitioners and begins to unlock decades of causality research for users of hybrid systems.
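To make "interventions as transformations to the constraints" concrete, the toy sketch below (ours, not the paper's formalism) simulates a stochastic thermostat whose mode switches are guard constraints; the counterfactual rewrites a guard while reusing the factual exogenous noise. All names, dynamics, and parameter values are illustrative.

```python
# Toy hybrid system: a noisy thermostat switches its heater off when the
# temperature crosses t_high. The "intervention" rewrites that guard
# constraint, and the counterfactual trajectory reuses the factual noise.
import numpy as np

def simulate(noise, t_high, t_low=18.0, t_env=10.0, k=0.1, power=2.0, dt=0.1):
    """Euler-simulate a two-mode (heater on/off) thermostat.

    `t_high` plays the role of a guard constraint: crossing it triggers a
    discrete mode switch. Intervening = replacing this constraint.
    """
    temp, mode = 15.0, 1  # initial temperature, heater initially on
    traj = []
    for w in noise:  # shared exogenous noise enables counterfactual queries
        if mode == 1 and temp >= t_high:
            mode = 0  # guard triggered: heater off
        elif mode == 0 and temp <= t_low:
            mode = 1  # guard triggered: heater on
        temp += dt * (-k * (temp - t_env) + mode * power) + np.sqrt(dt) * w
        traj.append(temp)
    return np.array(traj)

rng = np.random.default_rng(0)
noise = 0.2 * rng.standard_normal(500)         # exogenous disturbances
factual = simulate(noise, t_high=22.0)         # observed world
counterfactual = simulate(noise, t_high=25.0)  # guard rewritten, same noise
print(f"mean factual temp: {factual.mean():.2f}, "
      f"counterfactual: {counterfactual.mean():.2f}")
```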

ML-Guided Primal Heuristics for Mixed-Binary Quadratic Programs

PNNL author: Natalie Isenberg

Mixed-Binary Quadratic Programs (MBQPs) are classic problems in combinatorial optimization. Because solving large-scale combinatorial optimization problems is challenging, primal heuristics have been developed to identify high-quality solutions within a short amount of time. Recently, a growing body of research has used machine learning (ML) to accelerate solution methods for challenging combinatorial optimization problems. Despite the increasing popularity of these ML-guided methods, most of this work has focused on Mixed-Integer Linear Programs (MILPs), leaving MBQPs comparatively underexplored.

MBQPs are challenging to solve due to the combinatorial complexity coupled with nonlinearities. This work proposes ML-guided primal heuristics for MBQPs by adapting and extending existing work on ML-guided MILP solution prediction to MBQPs. We propose a new neural-network architecture for MBQP solution prediction and a new data collection procedure for training. Moreover, we propose to combine binary cross-entropy loss and contrastive loss in solution prediction. We compare the methods on standard and real-world MBQP benchmarks and show that our proposed methods outperform advanced solvers and existing primal heuristics.
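As a rough illustration of the combined objective, the sketch below (our assumption-laden reading, not the authors' implementation) mixes a binary cross-entropy term against the best collected solution with a contrastive term that scores each collected solution under the predicted marginals and pushes probability mass toward the best one.

```python
# Hypothetical combination of BCE and a contrastive loss for MBQP solution
# prediction; the exact losses and weighting in the paper may differ.
import torch
import torch.nn.functional as F

def combined_loss(logits, solutions, objectives, temperature=1.0, alpha=0.5):
    """Blend BCE against the best solution with a contrastive term.

    logits:     (n,) predictor output for n binary variables
    solutions:  (k, n) float 0/1 assignments collected offline
    objectives: (k,) their objective values (minimization)
    """
    best = objectives.argmin().item()  # index of best collected solution
    bce = F.binary_cross_entropy_with_logits(logits, solutions[best])

    # Score each candidate by its log-likelihood under the predicted
    # marginals, then contrast: a softmax over candidates should favor
    # the best solution over the worse ones.
    log_p = F.logsigmoid(logits)       # log P(x_i = 1)
    log_q = F.logsigmoid(-logits)      # log P(x_i = 0)
    scores = (solutions * log_p + (1 - solutions) * log_q).sum(dim=1)
    contrastive = F.cross_entropy(scores.unsqueeze(0) / temperature,
                                  torch.tensor([best]))
    return alpha * bce + (1 - alpha) * contrastive

# Example with made-up shapes: 5 collected solutions over 10 binary variables.
logits = torch.randn(10)
solutions = torch.randint(0, 2, (5, 10)).float()
objectives = torch.randn(5)
print(combined_loss(logits, solutions, objectives))
```

In a typical ML-guided primal heuristic, the trained predictor would then fix its most confident binary variables and hand the reduced MBQP to an exact solver.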

High-Accuracy Neural-Network Quantum States via Randomized Real-Time Dynamics

PNNL author: John Martyn

We introduce a new algorithm to enhance the accuracy of neural-network quantum states (NQSs) in approximating ground states of quantum systems. Our method, termed dynamical averaging, estimates expectation values by sampling an NQS as it evolves in real time. Using techniques from quantum information, we prove an up-to-quadratic suppression of error in estimating arbitrary ground-state observables. Importantly, this improved accuracy requires neither further energy minimization nor an enhanced parameterization but rather exploits the influence of randomness on quantum states. We demonstrate the advantage of dynamical averaging on an important spin model in quantum many-body physics, where it reduces relative errors in correlation functions from roughly 10% to 1%.
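The intuition behind the error suppression can be seen in a small exact calculation: writing the NQS as the true ground state plus a small error, the first-order error term acquires oscillating phases under real-time evolution and averages away over random evolution times. The sketch below (ours, using dense state vectors on a transverse-field Ising chain rather than an actual NQS ansatz) illustrates this.

```python
# Illustrative toy calculation, not the paper's algorithm: average an
# observable over random real-time evolutions of an approximate ground state.
import numpy as np

# Build H = -sum_i Z_i Z_{i+1} - g * sum_i X_i on n qubits (dense, small n).
n, g = 6, 1.2
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def kron_site(op, site):
    """Embed a single-site operator at `site` in the n-qubit Hilbert space."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I)
    return out

H = sum(-kron_site(Z, i) @ kron_site(Z, i + 1) for i in range(n - 1))
H += sum(-g * kron_site(X, i) for i in range(n))

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]

# Mimic a trained NQS: the ground state plus a small variational error.
rng = np.random.default_rng(1)
psi = ground + 0.05 * rng.standard_normal(ground.shape)
psi /= np.linalg.norm(psi)

obs = kron_site(Z, 0) @ kron_site(Z, 1)  # correlation function Z_0 Z_1
exact = ground @ obs @ ground
naive = psi @ obs @ psi                  # plain variational estimate

# Dynamical averaging: <psi(t)|O|psi(t)> averaged over random times t.
# Cross terms pick up phases exp(-i (E_k - E_0) t) and dephase, typically
# reducing the error from first to second order in the ansatz error.
coeffs = evecs.T @ psi                   # expand psi in the eigenbasis
vals = []
for t in rng.uniform(0.0, 50.0, size=200):
    psi_t = evecs @ (np.exp(-1j * evals * t) * coeffs)
    vals.append((psi_t.conj() @ obs @ psi_t).real)
averaged = np.mean(vals)

print(f"exact {exact:.4f}  naive err {abs(naive - exact):.2e}  "
      f"averaged err {abs(averaged - exact):.2e}")
```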

If you are interested in working at PNNL, take a look at our open positions!