Three PNNL-Authored Papers Accepted for the International Conference on Learning Representations
PNNL data scientists and machine learning experts highlight their work at the 11th International Conference on Learning Representations
Data scientists and machine learning (ML) experts from Pacific Northwest National Laboratory (PNNL) showcased their work at the International Conference on Learning Representations (ICLR). Three papers were accepted as posters to the ICLR 2023 Workshop on Physics for Machine Learning and the Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo).
ICLR, held this year in Kigali, Rwanda, is one of the newest key venues in artificial intelligence (AI). Founded in 2013, the conference is distinguished by its specific focus on deep learning. ICLR is where many of the most impactful papers in the AI field are published, and its workshops exhibit some of the most cutting-edge ML work.
PNNL data scientists Davis Brown, Jonathan Tu, and Henry Kvinge and postdoctoral researchers Charles Godfrey, Cody Nizinski, and Michael Rawson continue to advance research by applying AI to scientific problems, as demonstrated by the following accepted papers:
- Exploring the Representation Manifolds of Stable Diffusion Through the Lens of Intrinsic Dimension, by Henry Kvinge, Davis Brown, and Charles Godfrey
- Robustness of Edited Neural Networks, by Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, and Henry Kvinge
- Fast Computation of Permutation Equivariant Layers with the Partition Algebra, by Charles Godfrey, Michael G. Rawson, Davis Brown, and Henry Kvinge
Each ICLR workshop dives into a specific and important area of the AI/ML field to capture new and exciting research capabilities. “The Workshop on ME-FoMo centers on the rise of AI networks, typically large (billion-parameter-scale) neural networks trained on vast data sets with the capacity to adapt to a wide variety of tasks,” said Brown. “Training and evaluating foundation models is a new PNNL capability. Despite getting amazing performance out of models like ChatGPT/GPT-4 and Stable Diffusion, the community still does not have a good mathematical theory of why these models work so well, and experimentally, we are still in the early days.”
Brown also noted, “While there are many exciting applications of ML to the physical sciences, including those of PNNL’s Physics-Informed Learning Machines initiative, the ICLR Workshop on Physics for Machine Learning looks at applications in the other direction, using ideas from physics to better understand ML.”
This innovative research was supported by the Mathematics for Artificial Reasoning in Science initiative at PNNL, specifically the AI with Concepts and Artificial Intelligence Tools for Advanced Manufacturing Processes projects.
Published: May 8, 2023