PNNL @ NeurIPS 2020

Researchers to speak on topics ranging from
causal discovery to thermal dynamics to phishing

PNNL data scientists and engineers will be presenting eight papers and posters between NeurIPS, the Thirty-Fourth Conference on Neural Information Processing Systems, and the co-located Women in Machine Learning workshop, WiML. Held virtually this year from December 6 through December 12, NeurIPS provides an international platform for the exchange of information related to neural information processing systems research.


At PNNL, researchers are advancing the frontiers of scientific research and national security by applying artificial intelligence (AI) and advanced computing to scientific problems. 

Below is a full list of PNNL presenters who will share their research in AI and machine learning at NeurIPS and WiML.

Adversarial Evaluation of Binary Deception Classification

Women in Machine Learning Workshop

Poster | December 9

Authors: Ellyn Ayton, Maria Glenski

The development of state-of-the-art and novel neural networks for many natural language processing tasks often overlooks the need to evaluate model susceptibility to linguistic adversarial attacks. Such evaluation is a critical component of ensuring model generalizability and robustness, not only against intentionally polluted data inputs but also for naturally evolving inputs, e.g., regional dialects and informal language or slang.

In this work, we present our adversarial evaluation on a difficult binary text classification task: deceptive news detection across both Reddit and Twitter. Since the volume of falsified news on these social media platforms is continuously increasing, understanding the decision-making processes behind recently developed detection systems is critical. We present an analysis of expected model behavior under various adversarial conditions.
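The abstract does not specify which adversarial perturbations are applied; as a minimal illustration of this style of evaluation, the sketch below assumes a hypothetical classify function and uses a simple character-swap perturbation as a stand-in for typos and informal spelling, then reports the resulting accuracy drop.

```python
# Illustrative sketch (not the authors' code): probe a binary deception
# classifier with simple character-level perturbations and measure the
# accuracy drop. `classify` is a hypothetical stand-in for any trained model.
import random

def perturb(text: str, rate: float = 0.05) -> str:
    """Swap adjacent characters at random to mimic typos and informal spelling."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def accuracy(classify, posts, labels):
    preds = [classify(p) for p in posts]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def adversarial_evaluation(classify, posts, labels, rate=0.05):
    clean = accuracy(classify, posts, labels)
    attacked = accuracy(classify, [perturb(p, rate) for p in posts], labels)
    return {"clean_accuracy": clean,
            "attacked_accuracy": attacked,
            "accuracy_drop": clean - attacked}
```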

Using Large-Scale Language Models to Understand Psycho-Linguistic Dimensions of Phishing

Women in Machine Learning Workshop

Poster | December 9 | 3:00 p.m. PST 


Authors: Kayla Duskin, Emily Saldanha, Dustin Arendt, Svitlana Volkova

The ubiquitous use of email for business, commerce, and personal communication has made our lives easier in many ways, but it has also led to widespread vulnerability to phishing attacks, a problem that is especially acute during the global pandemic. Despite the large and ever-growing prevalence of phishing attacks, many institutions continue to rely on rule-based email filtering systems that do not examine the language of phishing or its appeals to human emotions and morals. To the best of our knowledge, there have been limited attempts to evaluate the feasibility of using recently emerged text generation models (e.g., GPT2) to generate phishing emails and to analyze their strengths and limitations in reproducing the emotional and moral appeals and the syntactic and psycho-linguistic properties of phishing texts.

In this work we curate a dataset of nearly 10,000 phishing emails by synthesizing three open-source phishing email datasets through a rigorous de-duplication process. To understand and analyze the language of phishing, we use this unique dataset to train both unconditioned and conditioned language models for phishing email generation. We demonstrate that large-scale language models (e.g., distilGPT2) are able to capture the emotional and moral appeals of phishing, although there is considerable room for improvement. While there are inherent risks in creating a high-quality generator of phishing emails, we believe that insights from the generator will inform more robust phishing detection models (encouraged by positive results in fake-news detection) and, most importantly, allow us to design effective training to reduce human susceptibility to phishing attacks.
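As a rough sketch of the generation side of such a pipeline, the snippet below conditions an off-the-shelf distilGPT2 model on an illustrative subject-line prefix and samples a continuation with the Hugging Face transformers API. The authors' models are fine-tuned on their curated dataset, and their conditioning scheme may differ; the prefix text and decoding settings here are assumptions for illustration only.

```python
# Hedged sketch, not the authors' pipeline: condition distilGPT2 on a topical
# prefix and sample a continuation. Prefix and decoding settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

prefix = "Subject: Your account requires verification\n\n"  # conditioning text
inputs = tokenizer(prefix, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=120,                        # total length in tokens, including the prefix
    do_sample=True,                        # sample rather than greedy decode
    top_p=0.9,                             # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, the interesting step is fine-tuning the model on the curated phishing corpus before generation; the sampling call itself is unchanged.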

A Proposed n-Dimensional Kolmogorov-Smirnov Distance

Workshop on Machine Learning and the Physical Sciences

Poster | December 11 

Authors: Alex Hagen, Jan Strube, Isabel Haide (Karlsruhe Institute of Technology), Shane Jackson, James Kahn (Karlsruhe Institute of Technology), Connor Hainje

We present an n-dimensional test statistic inspired directly by the Kolmogorov-Smirnov (KS) test statistic and Press's extension of the KS test to two dimensions. We call this the ndKS statistic. To avoid the high computational cost associated with working in higher dimensions, we present an implementation using tensor primitives, which allows parallel computation on CPUs or GPUs. We explore the behavior of the test statistic in comparing two three-dimensional samples and use a standard statistical method, the permutation method, to explore its significance. We show that, while the Kullback-Leibler divergence is a good choice for general distribution comparison, ndKS has properties that make it more desirable than the Kullback-Leibler divergence for surrogate model training and validation.
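A minimal NumPy sketch of an orthant-based, KS-like two-sample distance in this spirit, together with a permutation test for significance, is shown below; the authors' tensor-primitive implementation and exact conventions may differ.

```python
# Sketch of an n-dimensional, KS-like two-sample distance in the spirit of ndKS
# (orthant generalization of Press's 2D extension), plus a permutation test.
import itertools
import numpy as np

def ndks(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (n_samples, d) arrays. Returns the max orthant-probability gap."""
    points = np.concatenate([a, b], axis=0)          # evaluate at every sample point
    stat = 0.0
    for signs in itertools.product([True, False], repeat=a.shape[1]):
        signs = np.array(signs)
        # Membership of each sample in the orthant anchored at each evaluation point.
        in_a = np.where(signs, a[None] <= points[:, None], a[None] > points[:, None]).all(-1)
        in_b = np.where(signs, b[None] <= points[:, None], b[None] > points[:, None]).all(-1)
        stat = max(stat, np.abs(in_a.mean(1) - in_b.mean(1)).max())
    return stat

def permutation_pvalue(a, b, n_perm=200, seed=0):
    """Permutation test: how often does a label shuffle beat the observed statistic?"""
    rng = np.random.default_rng(seed)
    observed = ndks(a, b)
    pooled = np.concatenate([a, b], axis=0)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        count += ndks(pooled[perm[:len(a)]], pooled[perm[len(a):]]) >= observed
    return (count + 1) / (n_perm + 1)
```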

Evaluation of Algorithm Selection and Ensemble Methods for Causal Discovery

Workshop on Causal Discovery & Causality-Inspired Machine Learning

Poster | December 11

Authors: Emily Saldanha, Robin Cosbey, Ellyn Ayton, Maria Glenski, Joseph Cottam, Karthik Shivaram, Brett Jefferson, Brian Hutchinson, Dustin Arendt, Svitlana Volkova

The discovery of causal structure from observational data is an important but challenging task for which many algorithms have been developed. Because each of these algorithms makes different assumptions about the underlying data and causal structure, no single algorithm generalizes well enough to recover the true causal structure across different scenarios. However, there is no known method for selecting among the candidate algorithms for a given scenario without prior knowledge of the causal mechanisms one aims to discover.

To address this issue, we explore two different approaches for leveraging multiple different algorithms for causal discovery. First, we explore several heuristics which compare the predictions of multiple algorithms in order to inform the selection of a single algorithm for the task. Second, we develop a novel causal ensemble method which combines the output of multiple algorithms into a single causal structure prediction. We evaluate the accuracy, generalizability, and stability of the algorithm selection and algorithm ensemble methods across a range of different graph structures and data sets.
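As one concrete illustration of the ensemble idea, the sketch below applies simple majority voting over the directed adjacency matrices returned by several causal discovery algorithms; the paper's ensemble method may combine algorithm outputs differently.

```python
# Illustrative sketch of one simple ensemble strategy: majority voting over the
# directed adjacency matrices produced by several causal discovery algorithms.
import numpy as np

def ensemble_vote(adjacency_matrices, threshold=0.5):
    """adjacency_matrices: list of (d, d) 0/1 arrays, one per algorithm.
    An edge is kept if at least `threshold` of the algorithms predict it."""
    votes = np.mean(np.stack(adjacency_matrices, axis=0), axis=0)  # edge-wise vote share
    return (votes >= threshold).astype(int)

# Hypothetical usage with three algorithm outputs over d = 3 variables:
a1 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
a2 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
a3 = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
print(ensemble_vote([a1, a2, a3]))  # keeps edges predicted by at least 2 of 3 algorithms
```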

HydroNet: Benchmark Tasks for Preserving Long-Range Interactions and Structural Motifs in Predictive and Generative Models for Molecular Data

Workshop on Machine Learning and the Physical Sciences

Paper and Poster | December 11 


Authors: Sutanay Choudhury, Jenna Pope, Logan Ward (Argonne National Laboratory), Sotiris Xantheas, Ian Foster (Argonne National Laboratory), Joseph Heindel (University of Washington), Ben Blaiszik (University of Chicago)

Long-range interactions are central to phenomena as diverse as gene regulation, topological states of quantum materials, electrolyte transport in batteries, and the universal solvation properties of water. 

Here, we present a set of challenge problems for preserving long-range interactions and structural motifs in machine learning approaches to chemical problems using a recently published dataset of 4.95 million water clusters held together by long-range hydrogen-bonding interactions. Uniquely, the dataset gives spatial coordinates and two types of graph representations to accommodate a variety of machine learning practices.
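As a hedged illustration of one common way to derive a graph from such data, the sketch below connects atoms whose pairwise distance falls under a cutoff; HydroNet's two published graph representations are defined by the dataset authors and may use different conventions.

```python
# Hedged illustration: build a simple distance-cutoff graph from a water
# cluster's atomic coordinates. The cutoff value is an illustrative assumption.
import numpy as np

def cutoff_graph(coords: np.ndarray, cutoff: float = 3.5):
    """coords: (n_atoms, 3) array of Cartesian positions in angstroms.
    Returns an (n_atoms, n_atoms) 0/1 adjacency matrix without self-loops."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adjacency = (dists < cutoff) & (dists > 0)
    return adjacency.astype(int)
```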

Loosely Conditioned Emulation of Global Climate Models with Generative Adversarial Networks

Poster | December 11

Authors: Alex Ayala (Western Washington University), Chris Drazic (Western Washington University), Brian Hutchinson (Western Washington University), Benjamin Kravitz, and Claudia Tebaldi

Climate models encapsulate our best understanding of the Earth system, allowing research to be conducted on its future under alternative assumptions of how human-driven climate forcings are going to evolve. An important application of climate models is to provide metrics of mean and extreme climate changes, particularly under these alternative future scenarios, because these quantities drive the impacts of climate on society and natural systems. However, when it comes to exploring a wide range of alternative scenarios and other sources of uncertainty in a computationally efficient manner, climate models can only take us so far: they require significant computational resources, especially when characterizing extreme events, which are rare and thus demand long and numerous simulations to accurately represent their changing statistics.

Here we use deep learning in a proof of concept that lays the foundation for emulating global climate model output for different scenarios. We train two "loosely conditioned" generative adversarial networks (GANs) that emulate daily precipitation output from a fully coupled Earth system model: one GAN modeling fall-winter behavior and the other spring-summer. Our GANs are trained to produce spatiotemporal samples: 32 days of precipitation over a 64x128 regular grid discretizing the globe. We evaluate the generator with a set of related performance metrics based upon Kullback-Leibler divergence and find the generated samples to be nearly as well matched to the test data as the validation data is to the test data. We also find that the generated samples accurately estimate the mean number of dry days and the mean longest dry spell in the 32-day samples. Our trained GANs can rapidly generate numerous realizations at a vastly reduced computational expense compared to large ensembles of climate model simulations, which greatly aids in estimating the statistics of extreme events.
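The dry-day statistics mentioned above can be computed directly from generated samples; the sketch below assumes samples shaped (batch, 32 days, 64, 128) and uses a 1 mm/day dry-day threshold as an illustrative choice, not necessarily the paper's.

```python
# Sketch of dry-day statistics over a batch of generated precipitation samples
# shaped (batch, 32, 64, 128). The 1 mm/day threshold is an assumption.
import numpy as np

def dry_day_stats(precip: np.ndarray, threshold: float = 1.0):
    """Returns per-grid-cell mean number of dry days and mean longest dry spell."""
    dry = precip < threshold                               # (batch, days, H, W) booleans
    mean_dry_days = dry.sum(axis=1).mean(axis=0)           # (H, W)

    # Longest run of consecutive dry days along the time axis.
    batch, days, h, w = dry.shape
    run = np.zeros((batch, h, w), dtype=int)
    longest = np.zeros((batch, h, w), dtype=int)
    for t in range(days):
        run = np.where(dry[:, t], run + 1, 0)              # extend or reset the current run
        longest = np.maximum(longest, run)
    return mean_dry_days, longest.mean(axis=0)
```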

Physics-Constrained Deep Recurrent Neural Models of Building Thermal Dynamics

Tackling Climate Change with Machine Learning Workshop

Poster | December 11

Authors: Jan Drgona, Vikas Chandan, Draguna Vrabie, Aaron Tuor

We develop physics-constrained and control-oriented predictive deep learning models for the thermal dynamics of a real-world commercial office building.

The proposed method is based on the systematic encoding of physics-based prior knowledge into a structured recurrent neural architecture. Specifically, our model mimics the structure of the building thermal dynamics model and leverages penalty methods to model inequality constraints. Additionally, we use a constrained matrix parameterization based on the Perron-Frobenius theorem to bound the eigenvalues of the learned network weights. We interpret the stable eigenvalues as an indication of the dissipativeness of the learned building thermal model. We demonstrate the effectiveness of the proposed approach on a dataset obtained from an office building with 20 thermal zones.
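A minimal PyTorch sketch of a Perron-Frobenius-style parameterization is given below: for a nonnegative square matrix, the dominant eigenvalue lies between the smallest and largest row sums, so constraining row sums to an interval [lambda_min, lambda_max] bounds the eigenvalues of the learned state-transition weights. The authors' exact parameterization may differ in detail.

```python
# Minimal sketch of a Perron-Frobenius-style weight parameterization (assumed
# form, not necessarily the authors' exact one): nonnegative weights whose row
# sums, and hence dominant eigenvalue, are pinned inside [lambda_min, lambda_max].
import torch
import torch.nn as nn

class PerronFrobeniusLinear(nn.Module):
    def __init__(self, in_features, out_features, lambda_min=0.8, lambda_max=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.scale = nn.Parameter(torch.zeros(out_features, 1))
        self.lambda_min, self.lambda_max = lambda_min, lambda_max

    def constrained_weight(self):
        # Rows of a softmax sum to one; scaling each row by a value in
        # [lambda_min, lambda_max] fixes the row sums inside that interval.
        row_scale = self.lambda_min + (self.lambda_max - self.lambda_min) * torch.sigmoid(self.scale)
        return row_scale * torch.softmax(self.weight, dim=1)

    def forward(self, x):
        return x @ self.constrained_weight().T
```

Choosing lambda_max below one keeps the learned state-transition dynamics stable, which is what supports the dissipativeness interpretation described above.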

Policy Convergence Under the Influence of Antagonistic Agents in Markov Games

The Pre-Registration Experiment: An Alternative Publication Model for Machine Learning Research Workshop

Contributed talk | December 11 


Authors: Chase Dowling, Nathan Hodas, Ted Fujimoto

We propose an empirical study of how an antagonistic agent can subvert centralized, multi-agent learning and prevent policy convergence in the context of Markov games. We hypothesize that recent results in differential Nash equilibria are candidate conditions for policy convergence when agents learn via policy gradient methods and motivate this line of inquiry by introducing antagonistic agents. We compare the performance of groups of agents with prescribed reward function forms with and without antagonistic opponents and measure policy convergence. This experiment aims to steer the research community towards a promising plan of attack in addressing open questions in Markov games.
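One simple way to operationalize "policy convergence" in such an experiment is to track the norm of successive policy-parameter updates, as in the hypothetical sketch below; the pre-registered study may adopt a different criterion.

```python
# Illustrative convergence check (an assumed criterion, not the paper's):
# declare convergence when the last `window` policy-parameter updates are small.
import numpy as np

def has_converged(param_history, tol=1e-3, window=10):
    """param_history: list of per-iteration policy parameter vectors (np arrays)."""
    if len(param_history) < window + 1:
        return False
    deltas = [np.linalg.norm(param_history[i + 1] - param_history[i])
              for i in range(len(param_history) - window - 1, len(param_history) - 1)]
    return all(d < tol for d in deltas)
```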

Prototypical Region Proposal Networks for Few-Shot Localization and Classification

Poster | December 11

Authors: Aaron Tuor, Elliott Skomski, Court Corley, Henry Kvinge, Andrew Avila, Nathan Hodas, Zach New

Recently proposed few-shot image classification methods have generally focused on use cases where the objects to be classified are the central subject of images. Despite success on benchmark vision datasets aligned with this use case, these methods typically fail on use cases involving densely annotated, busy images. Such images are common in the wild, where objects of relevance are not the central subject; they may appear occluded, small, or among other incidental objects belonging to other classes of potential interest.

To localize relevant objects, we employ a prototype-based, few-shot segmentation model which compares the encoded features of unlabeled query images with support class centroids to produce region proposals indicating the presence and location of support set classes in a query image. These region proposals are then used as additional conditioning input to few-shot image classifiers. We develop a framework to unify the two stages (segmentation and classification) into an end-to-end classification model—PRoPnet—and empirically demonstrate that our methods improve accuracy on image datasets with natural scenes containing multiple object classes.
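A hedged sketch of the prototype-matching step, computing class centroids from pooled support features and scoring each spatial location of a query feature map by cosine similarity, is shown below; the actual PRoPnet region-proposal head is more involved than this.

```python
# Sketch of prototype matching for region proposals (simplified; not PRoPnet itself).
import torch
import torch.nn.functional as F

def class_prototypes(support_feats, support_labels, num_classes):
    """support_feats: (n_support, c) pooled features; returns (num_classes, c) centroids."""
    return torch.stack([support_feats[support_labels == k].mean(dim=0)
                        for k in range(num_classes)])

def similarity_maps(query_feat_map, prototypes):
    """query_feat_map: (c, h, w); prototypes: (num_classes, c).
    Returns (num_classes, h, w) cosine-similarity maps usable as region proposals."""
    c, h, w = query_feat_map.shape
    q = F.normalize(query_feat_map.reshape(c, -1), dim=0)   # normalize each location's feature
    p = F.normalize(prototypes, dim=1)                       # normalize each class centroid
    return (p @ q).reshape(-1, h, w)
```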