Conference

International Conference on Machine Learning 2025

Join Pacific Northwest National Laboratory at the International Conference on Machine Learning in Vancouver, BC


Illustration by Melanie Hess-Robinson | Pacific Northwest National Laboratory 

July 13–19, 2025

Vancouver Convention Center, Vancouver, BC 

The 42nd International Conference on Machine Learning (ICML) will feature the latest research from data scientists, computer scientists, and mathematicians at Pacific Northwest National Laboratory (PNNL), who will contribute a series of presentations and posters highlighting recent breakthroughs in artificial intelligence (AI) and machine learning (ML).

ICML has established itself as one of the fastest-growing AI conferences in the world and a premier venue for experts to gather and discuss breakthrough research in ML and related domains, such as AI, statistics, and data science.

Throughout the nearly week-long conference, thought leaders and professionals from a wide range of backgrounds will explore the impact of ML in application areas like machine vision, speech recognition, robotics, and computational biology, and consider how these advancements can be harnessed to address some of today's most complex challenges in science and beyond.

Featured PNNL Presentation

Thursday, July 17

Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics

Henry Kvinge, PNNL Researcher

Session: 10 a.m. – 11 a.m. (PT)

PNNL Presenters: Herman Chau (former intern), Helen Jenne, Davis Brown, Jesse He, Mark Raugas, and Henry Kvinge

Summary: Recent advances in AI system capabilities have fueled interest in using ML for complex reasoning tasks, especially in mathematics. While a wide range of datasets is available for evaluating AI systems on grade-school-level to undergraduate-level mathematics, very few datasets represent research-level mathematics. As we attempt to measure the performance of increasingly advanced systems, capturing the open-endedness and difficulty of research mathematics in benchmarks will be essential. To address this, researchers at PNNL have introduced a new collection of datasets, the Algebraic Combinatorics Dataset Repository (ACD Repo), a resource featuring foundational results and open problems in algebraic combinatorics. The dataset emphasizes the conjecturing process and offers millions of examples for exploring AI-driven insights and tools for advanced mathematical research. Learn more.

Featured PNNL Posters

Wednesday, July 16

Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes

Jesse He, PNNL Researcher

PNNL Authors: Jesse He, Helen Jenne, Herman Chau (former intern), Davis Brown, Mark Raugas, and Henry Kvinge

Session: 11 a.m. – 1:30 p.m. (PT)

Summary: This work highlights how ML can be used to identify patterns across vast datasets coming from pure mathematics. The researchers applied graph neural networks (GNNs) to the study of quiver mutation, a key operation that is central to the theory of cluster algebras—with connections to geometry, topology, and physics. Of particular focus was determining whether one quiver can be transformed into another through a series of “mutations.” Using neural networks and AI explainability techniques, the researchers discovered mutation equivalence criteria for quivers of affine type D and found evidence that modern ML models can uncover abstract mathematical rules without explicit training. Learn more.
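For readers unfamiliar with the operation the poster studies, quiver mutation has a compact standard definition (the Fomin–Zelevinsky rule): a quiver on n vertices is encoded as a skew-symmetric integer matrix B, and mutating at a vertex k reverses all arrows through k and adds arrows along two-step paths through it. The sketch below is an illustrative implementation of that standard rule, not PNNL's actual code; the function and variable names are the author's own.

```python
# Minimal sketch of quiver mutation on a skew-symmetric exchange matrix B.
# Entry B[i, j] = number of arrows from vertex i to vertex j (negative if reversed).
import numpy as np

def mutate(B: np.ndarray, k: int) -> np.ndarray:
    """Return the mutation of the quiver B at vertex k."""
    Bp = B.copy()
    n = B.shape[0]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]  # arrows into/out of k are reversed
            else:
                # add arrows for each composable path i -> k -> j
                Bp[i, j] = B[i, j] + np.sign(B[i, k]) * max(0, B[i, k] * B[k, j])
    return Bp

# Example: the linear A_3 quiver 1 -> 2 -> 3
B = np.array([[0, 1, 0],
              [-1, 0, 1],
              [0, -1, 0]])
assert np.array_equal(mutate(mutate(B, 1), 1), B)  # mutation is an involution
```

Two quivers are "mutation equivalent" when some sequence of such mutations turns one into the other; deciding that equivalence is the classification problem the GNNs were trained on.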

Thursday, July 17

An Expressive and Self-Adaptive Dynamical System for Efficient Equation Learning

Ang Li, PNNL Researcher

PNNL Authors: Chuan Liu (current intern) and Ang Li

Session: 11 a.m. – 1:30 p.m. (PT)

Summary: Modern ML methods are powerful equation learners, but their escalating complexity and high operational costs hinder sustainable development. The researchers propose EADS, an expressive and self-adaptive dynamical system capable of learning diverse equations with high efficiency. By incorporating hierarchical architectures, heterogeneous dynamics, and efficient on-device learning, EADS achieves higher accuracy, speed, and energy efficiency than traditional neural network solutions. Learn more.

Careers at PNNL

If you’re interested in working at PNNL, take a look at our open positions!