Conference

36th AAAI Conference on Artificial Intelligence

February 22 – March 1, 2022

AAAI Virtual Conference

PNNL will be at the 36th AAAI Conference on Artificial Intelligence, Feb. 22–March 1. Stop by our virtual booth to learn more about what it's like working for a national laboratory and the exciting scientific breakthroughs happening at PNNL. Researchers, managers, and recruiters will be available to answer your questions and talk with you about opportunities.

Video: Pacific Northwest National Laboratory

Below is a full list of PNNL presenters who will share their AI research at the 36th AAAI Conference.

PNNL Talks and Accepted Papers

Hardware Acceleration of Message-Passing Networks for Quantum Chemistry

Photo of Jenna Pope and Sutanay Choudhury

Presentation | AI for Design and Manufacturing Workshop

Authors: Mike Kraus · Jenna Pope · Hatem Helal · Manuel Lopez Roldan · Lakshmi Krishnan · Sotiris Xantheas · Sutanay Choudhury

Abstract: The application of machine-learning techniques such as supervised learning and generative models in chemistry is an active research area. ML-driven prediction of chemical properties and generation of molecular structures with tailored properties have emerged as attractive alternatives to expensive computational methods. Improving the scalability of training these machine-learning models is further essential to accelerate scientific discovery, and molecular datasets have unique computational characteristics: they require processing millions of small graphs with widely varying size and sparsity. In this work, we demonstrate the effectiveness of the novel Graphcore IPU (Intelligence Processing Unit) architecture for accelerating graph neural networks and continuous-convolutional neural networks on multiple datasets. Our experiments show that the IPU's support for fine-grained parallelism, accompanied by a scheme for optimal packing of many small graphs, achieves state-of-the-art accuracy while reducing training time by orders of magnitude from previously reported results.
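The packing scheme mentioned in the abstract can be illustrated with a small, hypothetical sketch: greedily assigning variable-size molecular graphs (represented only by their node counts) to fixed-capacity batches, which suits accelerators with static memory layouts such as the IPU. The first-fit-decreasing heuristic and the sizes below are illustrative assumptions, not the paper's actual packing algorithm.

```python
# Illustrative sketch (not the authors' implementation): pack many small
# graphs into fixed-capacity batches so little padding is wasted.

def pack_graphs(graph_sizes, capacity):
    """Assign each graph (given by its node count) to a bin holding at most
    `capacity` nodes; return a list of bins (lists of graph indices)."""
    bins = []  # each entry: [remaining_capacity, [graph indices]]
    # Largest-first ordering reduces fragmentation (first-fit decreasing).
    order = sorted(range(len(graph_sizes)), key=lambda i: -graph_sizes[i])
    for i in order:
        size = graph_sizes[i]
        for b in bins:
            if b[0] >= size:        # first bin with room for this graph
                b[0] -= size
                b[1].append(i)
                break
        else:                       # no bin fits: open a new one
            bins.append([capacity - size, [i]])
    return [b[1] for b in bins]

# Example: molecules with 12-30 atoms packed into 64-node batches.
sizes = [12, 30, 25, 18, 14, 22, 16, 20]
batches = pack_graphs(sizes, capacity=64)
```

Each batch then occupies a fixed, predictable memory footprint regardless of how the individual graphs vary in size.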

Differential Property Prediction: A Machine Learning Approach to Experimental Design in Advanced Manufacturing

Composite image of the authors

Accepted paper | AI for Design and Manufacturing Workshop

Authors: Loc Truong · WoongJo Choi · Colby Wight · Lizzy Coda · Tegan Emerson · Keerti Kappagantula · Henry Kvinge

Abstract: Advanced manufacturing techniques have enabled the production of materials with state-of-the-art properties. In many cases, however, the development of physics-based models of these techniques lags behind their use in the lab. This means that designing and running experiments proceeds largely via trial and error. This is sub-optimal, since experiments are cost-, time-, and labor-intensive. In this work we propose a machine learning framework, differential property classification (DPC), which enables an experimenter to leverage machine learning’s unparalleled pattern matching capability to pursue data-driven experimental design. DPC takes two possible experiment parameter sets and outputs a prediction of which will produce a material with a more desirable property specified by the operator. We demonstrate the success of DPC on AA7075 tube manufacturing process and mechanical property data using shear assisted processing and extrusion (ShAPE), a solid phase processing technology. We show that by focusing on the experimenter’s need to choose between multiple candidate experimental parameters, we can reframe the challenging regression task of predicting material properties from processing parameters into a classification task on which machine learning models can achieve good performance.
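The reframing from regression to pairwise classification can be sketched as follows. Everything here is synthetic: the 3-parameter space, the linear `property_of` function, and the plain logistic-regression model are placeholder assumptions standing in for the ShAPE process data and the paper's DPC model.

```python
import numpy as np

# Hedged sketch of the DPC idea: instead of regressing property = f(params),
# train a classifier on *pairs* of parameter sets that predicts which member
# of the pair yields the more desirable property.

rng = np.random.default_rng(0)

def property_of(x):
    # Hypothetical scalar "material property" of a 3-parameter experiment.
    return x @ np.array([1.5, -2.0, 0.7])

X = rng.normal(size=(400, 3))                # candidate parameter sets
i, j = rng.integers(0, 400, size=(2, 2000))  # 2000 random pairs
pairs = np.hstack([X[i], X[j]])              # pair features: concatenation
y = (property_of(X[i]) > property_of(X[j])).astype(float)

# Plain logistic regression by gradient descent (stand-in for the paper's model).
w = np.zeros(pairs.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-pairs @ w))
    w -= 0.1 * pairs.T @ (p - y) / len(y)

pred = (1.0 / (1.0 + np.exp(-pairs @ w)) > 0.5).astype(float)
train_accuracy = (pred == y).mean()
```

The classifier only has to rank two candidates, which is an easier target than predicting the property value itself.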

Latent Space Simulation for Carbon Capture Design Optimization

Photo composite of Yucheng Fu, Jie Bao, and Zhijie Xu

Accepted paper

Authors: Brian Bartoldson · Rui Wang · Yucheng Fu · David Widemann · Sam Nguyen · Jie Bao · Zhijie Xu · Brenda Ng

Abstract: The CO2 capture efficiency in solvent-based carbon capture systems (CCSs) critically depends on the gas-solvent interfacial area (IA), making maximization of IA a foundational challenge in CCS design. While the IA associated with a particular CCS design can be estimated via a computational fluid dynamics (CFD) simulation, using CFD to derive the IAs associated with numerous CCS designs is prohibitively costly. Fortunately, previous works such as Deep Fluids (DF) (Kim et al., 2019) show that large simulation speedups are achievable by replacing CFD simulators with neural network (NN) surrogates that faithfully mimic the CFD simulation process. This raises the possibility of a fast, accurate replacement for a CFD simulator and therefore efficient approximation of the IAs required by CCS design optimization. Thus, here, we build on the DF approach to develop surrogates that can successfully be applied to our complex carbon-capture CFD simulations. Our optimized DF-style surrogates produce large speedups (4000x) while obtaining IA relative errors as low as 4% on unseen CCS configurations that lie within the range of training configurations. This hints at the promise of NN surrogates for our CCS design optimization problem. Nonetheless, DF has inherent limitations with respect to CCS design (e.g., limited transferability of trained models to new CCS packings). We conclude with ideas to address these challenges.
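The surrogate structure described above (encode a field, step cheaply in latent space, decode) can be sketched schematically. The linear maps below are stand-ins for the trained encoder, decoder, and latent integrator networks; all dimensions are illustrative assumptions and nothing here is fit to CFD data.

```python
import numpy as np

# Conceptual sketch of a Deep-Fluids-style surrogate: compress a
# high-dimensional CFD field into a small latent vector, advance the latent
# state with a learned integrator, and decode back to a field on demand.

rng = np.random.default_rng(1)
FIELD_DIM, LATENT_DIM, STEPS = 4096, 16, 50

E = rng.normal(scale=0.01, size=(LATENT_DIM, FIELD_DIM))  # "encoder" stand-in
D = rng.normal(scale=0.01, size=(FIELD_DIM, LATENT_DIM))  # "decoder" stand-in
A = np.eye(LATENT_DIM) * 0.95                             # "latent integrator"

field0 = rng.normal(size=FIELD_DIM)   # initial CFD snapshot (placeholder)
z = E @ field0                        # one expensive encode at the start
trajectory = []
for _ in range(STEPS):                # rollout: cheap 16-dim latent steps
    z = A @ z
    trajectory.append(D @ z)          # decode only when a field is needed

trajectory = np.stack(trajectory)     # (STEPS, FIELD_DIM) surrogate rollout
```

The speedup comes from the loop operating on a 16-dimensional state instead of the full 4096-dimensional field, mirroring how the surrogate avoids rerunning the CFD solver.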

Corpus-Based and Knowledge-Based Measures of Text Semantic Similarity

Court Corley

Classic Paper Honorable Mention 

Authors: Rada Mihalcea · Court Corley · Carlo Strapparava

Abstract: This paper presents a method for measuring the semantic similarity of texts, using corpus-based and knowledge-based measures of similarity. Previous work on this problem has focused mainly on either large documents (e.g., text classification, information retrieval) or individual words (e.g., synonymy tests). Given that a large fraction of the information available today, on the Web and elsewhere, consists of short text snippets (e.g., abstracts of scientific documents, image captions, product descriptions), in this paper we focus on measuring the semantic similarity of short texts. Through experiments performed on a paraphrase data set, we show that the semantic similarity method outperforms methods based on simple lexical matching, resulting in up to a 13% error rate reduction with respect to the traditional vector-based similarity metric.
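A minimal sketch of the combined measure described in the abstract: each word of one text is matched to its most similar word in the other, weighted by idf, and the two directions are averaged. Exact string match stands in for the corpus-based or knowledge-based word-to-word similarity, and the idf values below are invented for illustration.

```python
# Hedged sketch of a text-to-text similarity in the style of the paper.

IDF = {"students": 1.8, "pupils": 1.8, "exam": 2.3, "test": 2.1,
       "passed": 1.2, "the": 0.1}  # made-up idf weights for illustration

def word_sim(w1, w2):
    # Placeholder: a real system would use a corpus-based measure (LSA, PMI)
    # or a knowledge-based (WordNet) measure here.
    return 1.0 if w1 == w2 else 0.0

def directed(t1, t2):
    # For each word in t1, take its best match in t2, weighted by idf.
    num = sum(max(word_sim(w, v) for v in t2) * IDF.get(w, 1.0) for w in t1)
    den = sum(IDF.get(w, 1.0) for w in t1)
    return num / den

def text_sim(t1, t2):
    # Average the two directed scores to make the measure symmetric.
    return 0.5 * (directed(t1, t2) + directed(t2, t1))

s = text_sim(["the", "students", "passed", "the", "exam"],
             ["the", "pupils", "passed", "the", "test"])
```

With a real word-to-word similarity, "students"/"pupils" and "exam"/"test" would contribute near-1 scores, which is exactly where this approach beats plain lexical matching.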

Gradient-based Novelty Detection Boosted by Self-supervised Binary Classification

Photo of Mahantesh Halappanavar

Accepted Paper and Poster

Poster session 1: Feb 24 @ 4:45pm-6:30pm PST

Poster session 2: Feb 28 @ 12:45am-2:30am PST

Authors: Jingbo Sun · Li Yang · Jiaxin Zhang · Frank Liu · Mahantesh Halappanavar · Deliang Fan · Yu Cao

Abstract: Novelty detection aims to automatically identify out-of-distribution (OOD) data, without any prior knowledge of them. It is a critical step in data monitoring, behavior analysis, and other applications, helping enable continual learning in the field. Conventional methods of OOD detection perform multi-variate analysis on an ensemble of data or features, and usually resort to supervision with OOD data to improve the accuracy. In reality, such supervision is impractical as one cannot anticipate the anomalous data. In this paper, we propose a novel, self-supervised approach that does not rely on any pre-defined OOD data: (1) The new method evaluates the Mahalanobis distance of the gradients between the in-distribution and OOD data. (2) It is assisted by a self-supervised binary classifier to guide the label selection to generate the gradients and maximize the Mahalanobis distance.

In the evaluation with multiple datasets, such as CIFAR-10, CIFAR-100, SVHN and TinyImageNet, the proposed approach consistently outperforms state-of-the-art supervised and unsupervised methods in the area under the receiver operating characteristic (AUROC) and area under the precision-recall curve (AUPR) metrics. We further demonstrate that this detector is able to accurately learn one OOD class in continual learning.
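The first ingredient, scoring samples by the Mahalanobis distance of gradient features, can be sketched with synthetic data. The Gaussian "gradients" below are placeholders; in the paper, the gradients come from a trained network with labels chosen by the self-supervised binary classifier.

```python
import numpy as np

# Hedged sketch: treat per-sample gradient vectors as features, fit their
# mean and covariance on in-distribution data, and score new samples by
# Mahalanobis distance (larger = more OOD-like).

rng = np.random.default_rng(2)
grads_in = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # in-distribution
grads_ood = rng.normal(loc=3.0, scale=1.0, size=(100, 8))  # shifted "OOD"

mu = grads_in.mean(axis=0)
cov = np.cov(grads_in, rowvar=False) + 1e-6 * np.eye(8)    # regularized
prec = np.linalg.inv(cov)                                  # precision matrix

def mahalanobis(g):
    d = g - mu
    return float(np.sqrt(d @ prec @ d))

score_in = np.mean([mahalanobis(g) for g in grads_in])
score_ood = np.mean([mahalanobis(g) for g in grads_ood])
```

Thresholding this score separates the two populations without ever training on OOD examples, which is the property the abstract emphasizes.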
