Seminars
Reading Group: Pure Mathematics for Machine Learning
Scientific Graph Laplacians
March 7
Stephen Young, Mathematician, PNNL
Roberto Gioiosa, Computer Scientist, PNNL
Recently, a new graph Laplacian, called the inner product Laplacian, was introduced, which generalizes many existing Laplacians, including the normalized and combinatorial Laplacians and their weighted variants. The key observation behind the inner product Laplacian is that by defining appropriate inner product spaces on the vertices and edges, the standard Laplacians can be recovered as Hodge Laplacians over the simplicial complex formed by the edges and vertices. These inner product spaces provide a natural way to incorporate non-combinatorial information into the definition of a domain-specific Laplacian. In contrast to current domain-specific weighting schemes that rely solely on edge weights, information regarding the similarity of non-adjacent vertices and arbitrary pairs of edges can be effectively incorporated into the Laplacian.
To illustrate this approach, we consider the problem of calculating the potential energy of an atomistic configuration using graph neural networks. In comparison with state-of-the-art approaches such as SchNet, our approach replaces a learned (via auto-encoder) representation of the atom types with an inner product space on atoms based on scientific knowledge (e.g., electronegativity). A key computational step in using the inner product Laplacian within a graph neural network is a series of sparse-dense matrix multiplications. We illustrate how this sparse-dense matrix product can be significantly accelerated.
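For intuition, here is a minimal sketch (ours, not the speakers' implementation) of the computational pattern in question: a standard combinatorial Laplacian assembled as a sparse matrix and applied to a dense matrix of node features, the sparse-dense product that dominates Laplacian-based graph neural network layers. The toy graph and feature width are placeholders.

```python
# Minimal sketch (not the talk's code): the combinatorial Laplacian
# L = D - A as a sparse matrix, applied to a dense feature matrix X --
# the sparse-dense product at the heart of Laplacian-based GNN layers.
import numpy as np
import scipy.sparse as sp

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy graph
n = 4
rows, cols = zip(*edges)
A = sp.coo_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
A = (A + A.T).tocsr()                    # symmetrize
D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
L = (D - A).tocsr()                      # combinatorial Laplacian

X = np.random.default_rng(0).normal(size=(n, 16))  # dense node features
Y = L @ X                                # sparse-dense matrix product
print(Y.shape)                           # (4, 16)
```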
ARES: Assurance of Reasoning Enabled Systems
September 5
Andres Marquez, Computer Scientist, PNNL
ARES seeks to build a machine-learning-based, low-overhead, multi-granular reverse-engineering cyber-forensics platform using the following approaches:
- Data analysis: As an exemplar, targeting image manipulation via light forensics.
- Object code analysis: As an exemplar, targeting abstraction lift, optionally boosted by classical decompilation techniques, to pave the way for security analysis, code optimization, and machine retargeting in a rapidly evolving computing landscape.
Deep Learning as a Tool to Advance Pure Mathematics
February 29, 2024 | Watch on YouTube
Henry Kvinge, Data Scientist, PNNL
While advances in artificial intelligence have already begun to have a substantial impact on the natural sciences (e.g., AlphaFold), applications to mathematics have remained limited. This is despite the fact that searching through large datasets for patterns is a regular activity for many research mathematicians. In this talk we consider the use of machine learning, particularly deep learning, as a tool for accelerating research in algebraic combinatorics, an area of mathematics focused on enumerating and describing discrete structures such as permutations, posets, and partitions. We describe progress on developing a set of benchmark datasets that represent open problems in the field and describe a case study of how deep learning can be incorporated into a mathematician’s research workflow. Finally, we speculate how data generated through sophisticated mathematical algorithms offers a unique opportunity to study how and why deep learning works in a controlled setting.
Optimizing a Machine Learning Algorithm for Time Series Prediction
February 8, 2024
Rogene Eichler West, Senior Data Scientist, PNNL
Reservoir computing (RC) has demonstrated strengths in predicting future time steps in chaotic time series. This project sought to better understand how to construct models that best predict a class of fast-slow dynamical systems exhibiting critical transitions, such as the driven double-well potential. It was also of interest whether such a reservoir could be represented in the higher-dimensional space of a Koopman operator, thereby lending itself to linear analysis and control theory. The path to insight included an exploration of methods to separate fast-slow dynamics, patterns of sign persistence in driving terms that predict ensuing transitions, graph metrics associated with well-performing reservoirs, and hyperparameter optimization.
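As background, a minimal echo-state reservoir in numpy illustrates the general RC recipe the project builds on: fix a random recurrent network, drive it with the input series, and train only a linear read-out (here by ridge regression). The reservoir size, spectral radius, and sine-wave task are placeholder choices, not those of the project.

```python
# A minimal echo-state reservoir (numpy only): fix a random recurrent
# network, drive it with the input, and train only the linear read-out.
import numpy as np

rng = np.random.default_rng(1)
N, T, ridge = 200, 1000, 1e-6
u = np.sin(0.1 * np.arange(T + 1))          # toy input signal

W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                          # drive the reservoir
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train the read-out to predict the next time step (ridge regression).
target = u[1:T + 1]
Wout = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                       states.T @ target)
print("train MSE:", np.mean((states @ Wout - target) ** 2))
```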
Understanding and Intervening: Making Deep Learning Systems Robust with Interpretability Tools
January 25, 2024
April 18, 2024 | Watch on YouTube
Davis Brown, Data Scientist, PNNL
Deep learning models must be updated to succeed in changing environments. However, trends toward increasingly general models create a critical technology gap, where it is prohibitively costly to both collect new data and to use standard retraining procedures. For this reason, there is growing interest in computationally inexpensive and interpretable error patching for deep learning models. We use tools from interpretability to discover generalization failures in deep learning models with orders-of-magnitude less data (and 100 to 1000x less data curation effort) and efficiently rewrite deep learning models with a class of substantially more robust model editing methods that use weight-space ensembling. We provide evidence via case studies and controlled experiments that these methods generalize across models, modalities, and applications.
Mega AI – Scaling AI for Science and Security Missions, with a focus on targeted foundation model development
January 11, 2024 | Watch on YouTube
Maria Glenski, Senior Data Scientist, PNNL
A proliferation of large language models (the GPT series, BLOOM, the LLaMA series, and more) is driving forward the development of multipurpose artificial intelligence (AI) for a variety of tasks, particularly in natural language processing. These models demonstrate strong performance across a broad range of benchmarks. However, there is evidence of brittleness when they are applied to more niche or narrow domains, where hallucinations (fluent but incorrect responses) reduce performance. When applying foundation models trained on general-purpose datasets to science and security domains, we often see a degradation in performance (e.g., in handling the vocabulary shift between general language and in-domain knowledge/language). The Mega AI project explores the tradeoffs and benefits of targeted development for science and security mission applications, from domain pretraining from scratch to fine-tuning and more.
Black Lightning: Human Factors in Applied Machine Learning
December 14, 2023
Jessica Baweja, Applied Research Psychologist, PNNL
Black Lightning began in 2021 with the goal of applying human factors methods and research to learn how to better integrate machine learning (ML) tools into power grid operations, a historically challenging context. Over the past three years, the team has conducted applied research to understand and address the factors that influence trust, reliability, and use of AI and ML. In this presentation, I will discuss the results of four studies in which we measured power systems operators' trust in AI/ML compared to traditional tools, assessed whether feature visualizations are interpretable by human users, explored different methods for mitigating the impact of errors on trust in AI/ML, and examined ways to evaluate the human-readiness of AI/ML. The results from these studies demonstrate that evaluating trust in AI/ML will help develop and design training for users and stakeholders to support appropriate use. In addition, frameworks for ethical, safe, and responsible use of AI/ML, especially generative AI, will help manage its use with human considerations in mind. Overall, the results show some ways that human factors research can create more human-ready AI, from early data science research to deployment.
Invertible Temper Modeling using Normalizing Flows
November 30, 2023
Tegan Emerson, Data Scientist, PNNL
Advanced manufacturing research and development is typically undertaken on small scales, owing to the expense of experiments and nascent development of physical simulations for these novel processes. Deep learning has been used to model visually plausible microstructures but has not been used to understand how microstructures are affected by heat treatment. We propose to address this gap by using invertible neural networks to holistically model the effects of heat treatment, e.g., tempering. We apply the developed model to scanning electron microscope imagery from shear-assisted processing and extrusion manufacturing.
We find that this approach preserves information regarding a sample's material properties or experimental process parameters under simulated (de)tempering. We also show that topological data analysis can be used to stabilize model training and improve downstream results. We assess directions for future work and identify our approach as a step toward a holistic, end-to-end AI-enabled system for accelerating advanced manufacturing research and development.
Next-Generation Reservoir Computing, and on Explaining the Surprising Success of a Random Neural Network for Forecasting Chaos
June 28, 2023
Erik Bollt, Professor, Clarkson University
ML has become a widely popular and successful paradigm, including for data-driven science. A major application is forecasting complex dynamical systems. Artificial neural networks have become a clear leading approach, and recurrent neural networks are especially well suited. Reservoir computers (RCs) have emerged for their simplicity and computational advantages: instead of training a full network, an RC trains only the read-out weights. The surprise, however, is why and how an RC works at all despite its randomly selected weights. We explicitly connect an RC with linear activation and linear read-out to the well-developed time-series literature on vector autoregression (VAR), which already performs well for short-term forecasts.
Existence of the representation then also follows from the Wold theorem. Even better, with a random network, linear activation, and polynomial read-out, we explicitly connect the RC to a nonlinear VAR. This leads us to introduce a new data-driven forecasting method that we call next-generation reservoir computing (NG-RC). Further, we connect this random-neural-network approach to the now widely popular dynamic mode decomposition; these three are, in a sense, different faces of the same concept.
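The nonlinear-VAR connection can be made concrete with a small sketch: NG-RC-style features are time-delayed observations plus their quadratic monomials, and forecasting reduces to linear regression on those features. The delay depth and toy series below are illustrative assumptions, not the talk's examples.

```python
# Sketch of the NG-RC / nonlinear-VAR idea: features are time-delayed
# observations plus their quadratic monomials; forecasting is a linear
# regression on those features.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
s = np.cos(0.2 * np.arange(600)) + 0.01 * rng.normal(size=600)
k = 3                                            # delay depth

def features(series, t):
    lin = series[t - k:t][::-1]                  # the k most recent values
    quad = [a * b for a, b in combinations_with_replacement(lin, 2)]
    return np.concatenate(([1.0], lin, quad))

X = np.array([features(s, t) for t in range(k, len(s))])
y = s[k:]                                        # one-step-ahead targets
W, *_ = np.linalg.lstsq(X, y, rcond=None)
print("one-step training MSE:", np.mean((X @ W - y) ** 2))
```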
Machine Reasoning for Scientific Discovery
June 7, 2023
Robert Rallo, Director of the Advanced Computing, Mathematics, and Data Division, PNNL
The success of ML has transformed scientific discovery and provided a new perspective on the value of data and knowledge. However, current approaches still result in domain-agnostic ML applications that require a significant amount of data and only offer reliable predictions when operating close to their training data regime. Generating the amount and quality of data needed to develop reliable ML models to address the scientific challenges of interest to the Department of Energy is difficult, costly, and sometimes infeasible. Even when enough data and domain knowledge are available, current ML approaches are limited to pattern identification and lack the ability to provide new insights into the modeled systems. Obtaining reliable predictions, with low latency, from limited data and incomplete knowledge, constitutes a grand challenge in AI that cannot be achieved with current ML capabilities.
Addressing this challenge requires co-designing machine reasoning algorithms and heterogeneous hardware systems capable of interacting with scientific instruments and simulations to generate data in a rational way to formulate new physical models and theoretical insight. In this presentation we will provide an overview of the Laboratory Objective on Machine Reasoning for Scientific Discovery, including its vision, goals, and implementation details.
The Evolving Role of AI in Astrophysical Surveys
May 17, 2023
Gautham Narayan, Professor, University of Illinois at Urbana-Champaign
As the scale of astrophysical experiments has increased dramatically, so has the need to process large volumes of data in real time. The astrophysics community has begun using AI extensively for tasks such as classification, prediction, emulation, and inference. We discuss how ongoing and upcoming surveys are using these techniques for photometric redshift estimation, for identifying rapidly evolving transient sources in the time domain, such as kilonovae and supernovae, and for using such sources to infer the composition of our Universe and the nature of dark energy.
Analyzing Empirical Kernels drawn from Neural Networks for Regression
May 3, 2023
Saad Qadeer, Postdoctoral Researcher, PNNL
Deep neural networks (DNNs) have shown immense promise in a variety of learning tasks, including image classification, natural language processing, function approximation, and solving differential equations. However, a theoretical understanding of their effectiveness and limitations has so far eluded practitioners, partly due to the inability to determine the closed forms of the learned functions. This makes it harder to assess their precise dependence on the training data and to study their generalization properties on test sets. Recent work has shown that randomly initialized DNNs in the infinite-width limit converge to kernel machines relying on a neural tangent kernel (NTK) with known closed form. These results suggest, and experimental evidence corroborates, that empirical kernel machines can also act as surrogates for finite-width DNNs. However, the high computational cost of assembling the full NTK makes this approach infeasible in practice, motivating the need for low-cost approximations.
In this talk, we discuss the performance of the conjugate kernel (CK), an approximation to the NTK that has been observed to yield fairly similar results. For the regression of smooth functions, we prove that the CK approximations are at worst marginally inferior to the NTK approximations and, in some cases, far superior. In particular, we establish bounds for the relative errors, verify them with numerical tests, and identify the regularity of the kernel as the key determinant of performance. In addition to providing a theoretical grounding for using CKs instead of NTKs, our framework suggests a recipe for boosting DNN accuracy inexpensively and provides insights into the robustness of the various approximants.
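A toy version of the empirical CK makes the construction concrete: for a random one-hidden-layer ReLU network, the CK is the Gram matrix of the hidden-layer features at initialization, and regression is kernel ridge with that Gram matrix. The width, ridge parameter, and target function below are illustrative placeholders.

```python
# Empirical conjugate-kernel (CK) sketch: the CK of a random one-hidden-
# layer ReLU net is the Gram matrix of its hidden features at init;
# regression is kernel ridge with that Gram matrix.
import numpy as np

rng = np.random.default_rng(0)
width = 4096
x_train = np.linspace(-1, 1, 40)[:, None]
x_test = np.linspace(-1, 1, 200)[:, None]
f = lambda x: np.sin(np.pi * x).ravel()     # smooth target function

W = rng.normal(size=(1, width)); b = rng.normal(size=width)
phi = lambda x: np.maximum(x @ W + b, 0.0) / np.sqrt(width)

K = phi(x_train) @ phi(x_train).T           # empirical CK Gram matrix
k_star = phi(x_test) @ phi(x_train).T
alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), f(x_train))
pred = k_star @ alpha                       # kernel ridge prediction
print("test RMSE:", np.sqrt(np.mean((pred - f(x_test)) ** 2)))
```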
Constraint-Aware Machine Learning
April 26, 2023
Ferdinando Fioretto, Professor, Syracuse University
In recent years, the integration of ML with challenging scientific and engineering problems has experienced remarkable growth. In particular, deep learning has proven to be an effective solution for unconstrained problem settings, but it has struggled to perform as well in domains where hard constraints, domain knowledge, or physical principles need to be taken into account. In areas such as power systems, materials science, and fluid dynamics, the data follows well-established physical laws, and ignoring these laws can result in unreliable and ineffective solutions.
In this talk, we delve into the need for constraint-aware ML. We present how to integrate key constrained optimization principles within the training process of deep learning models, endowing them with the capability of handling hard constraints and physical principles. The resulting models bring a new level of accuracy and efficiency to hard decision tasks, which we showcase on energy and scheduling problems. We then introduce a powerful integration of constrained optimization as neural network layers, resulting in ML models able to enforce structure in the outputs of learned embeddings. This integration provides ML models with enhanced expressiveness and modeling ability, which we showcase through the certification of fairness in learning-to-rank tasks and the assembly of high-quality ensemble models. Finally, we discuss a number of grand challenges we plan to address to develop a potentially transformative technology for both optimization and ML.
Learning Invariant Author Representations
April 5, 2023
Nicholas Andrews, Research Scientist, Johns Hopkins University
Authorship attribution research has a long history in many different communities, including computer science, forensic linguistics, historical linguistics, and statistics. In this talk, we discuss recent findings on learning invariant representations of authors that generalize over time. We show that this enables authorship attribution even in particularly challenging conditions, such as when writing samples are sampled years apart. Compared to prior work, our approach is distinguished by a focus on "needle in the haystack" evaluations, in which there may be hundreds of thousands of candidate authors, with authors who may be novel relative to the training data, and where only small writing samples are available.
In these conditions, we find that large-scale contrastive training of attention-based neural networks outperforms alternate approaches, with a data augmentation scheme that is designed to encourage temporal invariance of the learned representations. Equipped with highly discriminative author representations, we then discuss some applications, including recent work in style-controlled text generation using large language models.
Human-AI Collaboration Enables More Empathic Conversations in Mental Health Support
February 22, 2023
Tim Althoff, Assistant Professor, University of Washington
Online social media have connected people across the world in unprecedented ways, but what creates a meaningful connection? In this talk, I will present recent work on computational approaches to model, understand, and facilitate more empathic conversations on online mental health platforms. Applying these models to 235,000 mental health interactions, we show that while users of online mental health support appreciate empathy, they do not improve at, or self-learn, empathy over time. To facilitate higher expression of empathy, we developed a language generation task and model for “empathic rewriting,” in which a model suggests transformations of low-empathy conversational posts. Through a combination of automatic and human evaluation, we demonstrate that our reinforcement-learning-based model generates more empathic, specific, and diverse responses and outperforms competing natural language processing approaches in increasing empathy. These findings demonstrate the potential of feedback-driven, AI-in-the-loop writing systems to empower humans in open-ended, social, creative, and empathic conversations.
How will we do Mathematics in 2030?
January 18, 2023
Michael Douglas, Senior Research Scientist, Harvard University
We make the case that over the coming decade, computer assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies such as formal knowledge repositories, semantic search, and intelligent textbooks. After a short review of the state-of-the-art, we survey directions where we expect progress, such as mathematical search and formal abstracts, developments in computational mathematics, integration of computation into textbooks, and organizing and verifying large calculations and proofs. For each we try to identify the barriers and potential solutions.
Trusting AI-enabled Systems: New Frontiers for Research
December 7, 2022
Erin Chiou, Assistant Professor of Human Systems Engineering, Arizona State University
Trust has been of interest among systems researchers and designers for decades because of its ability to explain, at least partially, human behaviors with technology. However, looking toward a future of complex organizations tasked with operating in rapidly evolving, information-driven landscapes, the success of those organizations will rely on their ability to gather, select, assess, decide, coordinate, and implement information quickly, across distributed units, and at scale. Applying the team science lens to the study of human-automation interaction has led to a more relational, interactive framing of trusting autonomous technology. This relational framing can be used to study how interacting entities develop trusting relationships, and it demands clarification from current and future trust research. This presentation will elaborate on a few of these needed clarifications and will provide some examples of how this new framing of trust can be useful.
Tractometry: Peering into the Connections of the Living Human Brain
July 21, 2022
Ariel Rokem, Research Associate Professor, University of Washington
The white matter of the brain contains the long-range connections between distant cortical regions. The integration of brain activity through these connections is important for information processing and for brain health. Diffusion-weighted MRI provides estimates of the trajectories of these connections in the human brain in vivo and assesses their physical properties.
In this talk, I will present a set of open-source software tools that automatically infer these trajectories and delineate major anatomical pathways, providing estimates of the tissue properties along their length. These tissue properties serve as input to statistical models that learn to predict relationships between the tissue properties of brain connections and brain function.
MARS Human Factors Panel Discussion
May 24, 2022
Katya Le Blanc, Idaho National Laboratory
Jane Pinelis, Joint Artificial Intelligence Center
Brett Borghetti, Air Force Institute of Technology
Are there questions around human factors that you wished you had answers to? When you work with domain experts to construct ML tools, how do you know you're asking the right questions? Will the potential users of your ML aids know how to appropriately use them? Human factors play a major role in reasoning by centering on questions around WHO will use vs. develop vs. be impacted by the tools, WHAT is the objective of the tools in a larger workflow, WHERE in the development process is the tool, and HOW is the tool intended to be used. Here's your chance to understand the more human side of deployable AI and ML!
What is A Quantum Neural Network?
May 16, 2022
Carlos Ortiz Marrero, Data Scientist, PNNL
The promise of the quantum ML community is that by incorporating quantum effects into ML models, researchers can improve model performance and better understand more complex datasets. Quantum neural networks (QNNs), a promising framework for creating quantum algorithms, promise to achieve such a hope by combining the speedups of quantum computation with the widespread success of deep learning techniques. Despite the optimism, training QNNs has proven to be a technically difficult task because of a concentration of measure phenomenon, known as barren plateaus, that leads to exponentially small gradients as we scale the number of neurons of our model.
During this talk, we will introduce the notion of a QNN, discuss use cases of these models, and review the progress the community has made in making these models more tractable. We will highlight a key insight from our team, namely that barren plateaus can also occur because of an excess of entanglement between visible and hidden units in QNN models, and discuss the generic approach we developed to train a QNN without suffering from barren plateaus. We will close with the advantages and challenges of our method, as well as some steps we are taking to further improve our training routine.
Artificial Intelligence Methods for Extracting Materials Knowledge from Microstructural Images
April 19, 2022
Elizabeth Holm, Professor of Materials Science & Engineering, Carnegie Mellon University
Microstructural images encode rich data sets that contain information about the structure, processing, and properties of the parent material. As such, they are amenable to characterization and analysis by data science approaches, including computer vision (CV) and ML. In fact, they offer certain advantages compared to natural images, often requiring smaller training data sets and enabling more thorough assessment of results. Because CV and ML methods can be trained to reproduce human visual judgments, they can perform qualitative and quantitative characterization of complex microstructures, including segmentation, measurement, classification, and visual similarity tasks, in an objective, repeatable, and indefatigable manner. In addition, we can apply these approaches to develop new characterization techniques that capitalize on the unique capabilities of computers to capture additional information compared to traditional metrics. Finally, ML can learn to associate microstructural features with materials processing or property metadata, providing physical insight into phenomena such as strength and failure.
Expressivity and Learnability: Linear Regions in Deep ReLU Networks
April 11, 2022
David Rolnick, Assistant Professor, McGill University
In this talk, we show that there is a large gap between the maximum complexity of the functions that a neural network can express and the expected complexity of the functions that it learns in practice. Deep ReLU networks are piecewise linear functions, and the number of distinct linear regions is a natural measure of their expressivity. It is well-known that the maximum number of linear regions grows exponentially with the depth of the network, and this has often been used to explain the success of deeper networks. We show that the expected number of linear regions in fact grows polynomially in the size of the network, far below the exponential upper bound and independent of the depth of the network. This statement holds true both at initialization and after training, under natural assumptions for gradient-based learning algorithms. We also show that the linear regions of a ReLU network reveal information about the network's parameters. In particular, it is possible to reverse-engineer the weights and architecture of an unknown deep ReLU network merely by querying it.
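The notion of a linear region can be probed empirically with a few lines of numpy: count the distinct ReLU on/off patterns a small random network produces along a line through input space. The architecture and sampling density below are arbitrary illustrations, not the talk's experiments.

```python
# Count the distinct ReLU activation patterns (hence linear regions)
# a small random network cuts along a line through input space.
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 16, 16, 1]                      # input -> hidden -> output
Ws = [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes, sizes[1:])]
bs = [rng.normal(size=n) * 0.1 for n in sizes[1:]]

ts = np.linspace(-3, 3, 20000)
line = np.stack([ts, 0.5 * ts + 0.2], axis=1)   # a line in R^2

h = line
patterns = []
for W, b in zip(Ws[:-1], bs[:-1]):          # hidden layers only
    pre = h @ W + b
    patterns.append(pre > 0)                # ReLU on/off pattern
    h = np.maximum(pre, 0.0)

codes = np.concatenate(patterns, axis=1)    # one pattern per sample point
n_regions = len({tuple(row) for row in codes})
print("distinct linear regions along the line:", n_regions)
```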
Automated Red Teaming (ART) for Cyber-Physical Systems
March 29, 2022
Arnab Bhattacharya, Operations Research Scientist, PNNL
The convergence of physical and cybersecurity processes and the increasing integration of cyber-physical systems (CPS) with business and internet-based applications have increased the prevalence and complexity of cyber threats. Red teaming for CPS involves a group of security experts emulating end-to-end cyber attacks that follow a set of realistic tactics, techniques, and procedures. However, current red-teaming exercises require a high degree of skill to draft potential adversarial scenarios, which can be time consuming and costly to test in large-scale CPS with complex operational dynamics, vulnerabilities, and uncertainties. In this talk, we will present an automated, domain-aware, AI-assisted offensive attack emulation method for CPS without any human in the loop. In our setup, the adversarial agent interacts with a high-fidelity, simulated CPS environment and learns an attack plan based on the current and future consequences of its actions on system performance. Our offline, data-driven learning procedure leverages reinforcement learning to systematically exploit both cyber and physical system vulnerabilities modeled using hybrid attack graphs. Next, we will discuss how the reinforcement-learning-based red-teaming module is used within a hierarchical optimization framework to computationally solve a nonzero-sum game between an adversary and a defender in a resilience-by-design network hardening problem. Our approach will be demonstrated on different sensor spoofing and ransomware attack graphs targeting building systems, with high- and low-fidelity simulation environments developed at PNNL.
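To make the learning setup concrete, here is a toy rendering (ours, not the ART system) of reinforcement learning on an attack graph: states are levels of compromise, actions are exploit edges with success probabilities, and tabular Q-learning recovers a multi-step attack plan. The graph, rewards, and probabilities are invented for the sketch.

```python
# Toy Q-learning on an invented attack graph (not PNNL's ART system).
import numpy as np

rng = np.random.default_rng(0)
# edges: state -> list of (action_name, next_state, success_prob)
graph = {
    0: [("phish", 1, 0.6), ("scan", 0, 1.0)],
    1: [("escalate", 2, 0.5), ("pivot", 3, 0.7)],
    2: [("spoof_sensor", 4, 0.8)],
    3: [("ransomware", 4, 0.4)],
    4: [],                                      # goal: physical impact
}
Q = {s: np.zeros(len(a)) for s, a in graph.items() if a}
alpha, gamma, eps = 0.1, 0.95, 0.2

for _ in range(5000):                           # training episodes
    s = 0
    while s != 4:
        acts = graph[s]
        i = rng.integers(len(acts)) if rng.random() < eps else int(Q[s].argmax())
        name, nxt, p = acts[i]
        s2 = nxt if rng.random() < p else s     # exploit may fail
        r = 10.0 if s2 == 4 else -1.0           # step cost vs. goal reward
        best = 0.0 if s2 == 4 else Q[s2].max()
        Q[s][i] += alpha * (r + gamma * best - Q[s][i])
        s = s2

plan, s = [], 0                                 # read off the greedy plan
while s != 4:
    name, s, _ = graph[s][int(Q[s].argmax())]
    plan.append(name)
print("learned attack plan:", plan)
```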
Regularization, Adversarial Learning, and the Calculus of Variations
February 15, 2022
Ryan Murray, Assistant Professor, North Carolina State University
Although ML algorithms have proven successful at many tasks, specific aspects of their operation lack transparency. In particular, the introduction of regularization terms or adversaries into learning algorithms empirically improves performance, but rigorous, quantifiable explanations of the effect of these practices are often lacking.
This talk will discuss a body of work linking regularization and adversarial training with the mathematical study of the calculus of variations and differential equations. This link includes surprising connections with classical equations describing the formation and motion of material interfaces in physics, such as mean curvature flow and minimal surfaces. This type of work provides rigorous rules for hyperparameter selection, offers analytical means for quantifying the expected bias induced by these methods (especially in empirical and graph-based settings), and suggests new avenues for algorithm development.
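The regularization/adversarial-training link has a classical concrete instance (a standard observation, not the speaker's results): for a linear classifier under an l-infinity-bounded adversary, the worst-case perturbation has a closed form, and robust training reduces to an l1-style margin penalty. The data and epsilon below are toy placeholders.

```python
# Adversarial training as regularization, in the simplest linear case:
# the l-infinity worst case shrinks every margin by eps * ||w||_1, so
# robust logistic training is gradient descent on that penalized loss.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 5, 0.1, 0.1
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))

w = np.zeros(d)
for _ in range(500):
    margin = y * (X @ w) - eps * np.abs(w).sum()   # worst-case margin
    p = 1.0 / (1.0 + np.exp(margin))               # sigmoid(-margin)
    grad = -(p * y) @ X / n + eps * p.mean() * np.sign(w)
    w -= lr * grad
print("robust weights:", np.round(w, 3))
```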
Introducing the DeepDataProfiler for Interpreting Deep Neural Networks
February 10, 2022
Brenda Praggastis, Data Scientist, PNNL
There seems to be an inverse relationship between the accuracy of deep neural classification networks (DNNs) and our human ability to understand how they make their decisions and verify their reliability. The more reliable and generalizable a model, the more impenetrable its internal black box seems to be. But the need to be able to interpret the reasoning of a DNN, to answer the questions “Why does this work?” and “How did it know?” and to be able to unambiguously explain to someone the causal connections made from input to classification is paramount if DNNs are to be relied upon to inform our decisions.
In this talk, we will discuss how DeepDataProfiler (DDP) profiles a DNN using the singular value decomposition of its weight tensors to decompose the activation space into invariant subspaces associated with model-defined features. DDP uses hypergraph theory to identify the most influential of these features and the causal connections between layers. It uses feature optimization to ground these features in the input domain. We will describe the DDP pipeline and apply it to two trained image classification networks: a benchmark example with easily understood data inputs, and a more complex example arising out of nuclear forensics.
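A stripped-down version of the spectral step conveys the idea (the actual DDP pipeline is more involved): take a layer's weight matrix, compute its SVD, and score incoming activations by their coordinates along the top singular directions, which span the directions the layer amplifies most.

```python
# Toy version of the spectral step: SVD a layer's weights and project
# activations onto the top right-singular vectors.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))          # weights of one dense layer
U, S, Vt = np.linalg.svd(W, full_matrices=False)

k = 5                                   # keep the top-k feature directions
acts = rng.normal(size=(32, 128))       # a batch of input activations
coords = acts @ Vt[:k].T                # coordinates in the SVD basis
strength = np.abs(coords) * S[:k]       # weight by singular values
print("most influential direction per sample:", strength.argmax(axis=1))
```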
Artificial Judgment Assistance from teXt: Applying Open Domain Question Answering to Nuclear Nonproliferation Analysis
January 20, 2022
Megha Subramanian and Alejandro Zuniga, Data Scientists, PNNL
Nuclear nonproliferation analysis is complex and subjective, as the data is sparse and examples are rare and diverse. While analyzing nonproliferation data, it's desired that the findings be auditable so any claim or assertion can be sourced directly to the reference material. Currently, analysts accomplish this by thoroughly documenting underlying assumptions and referencing details to source documents. This is a labor-intensive and time-consuming process that can be difficult to scale with geometrically increasing quantities of data.
In this talk, we describe an approach that leverages bidirectional language models for nuclear nonproliferation analysis. It's been shown that these models capture not only language syntax but also some of the relational knowledge present in the training data. We've devised a unique salt-and-pepper strategy for testing the knowledge present in the language models, while also introducing an auditability function into our pipeline. We demonstrate that fine-tuning the bidirectional language models on a domain-specific corpus improves their ability to answer domain-specific factoid questions. We hope the results will further the natural language processing field by introducing the ability to audit the answers provided by language models and bring forward the source of the knowledge.
Structures via Reasoning: Applying AI to Cryo-EM to Reveal Structural Variability
January 13, 2022
Doo Nam Kim, Computational Scientist, and James Evans, Chemist, PNNL
Cryo-electron microscopy (cryo-EM) is a powerful technology in structural biology, capable of characterizing everything from small biomolecules to large complexes with many interacting proteins at atomic to sub-nanometer resolution. Small internal variances in structure are critical to inferring the function of proteins. The latest cryo-EM reconstruction and modeling approaches, however, often fail to resolve subclasses within macromolecular populations from the derived 2D images. Current cryo-EM reconstruction algorithms can generally identify only highly distinct biomolecule structures, and so they are still unable to fully harness the potential to understand intricate biological functions that result from subtle morphological differences. Comparing a known atomic structure against a 3D model inferred and refined by an AI method can reveal intrinsic local differences in structure, overcoming the experimental noise barrier.
Our work is developing techniques to overcome this limitation using deep learning, thereby helping with the identification of subtle variation within the protein structure. We will present our deep learning approaches to differentiate homologous structures that are distinguishable by small variances in contrast. Our results show that we can successfully use supervised classification to identify subtly different homologues. We will also present results and discuss approaches towards the differentiation of homologues with unsupervised classification methods.
Towards Generalizable User Authentication Systems on Personal Devices
November 17, 2022
Tempestt Neal, Assistant Professor, University of South Florida
The popularity of remote work and online learning has increased in recent years due to the promotion of health safety, modernized work policies, and self-paced learning in employer degree programs, among other reasons. These trends place significant responsibility on the computing industry to support internet users across demographic groups, particularly on personal computing devices (PCDs). This is especially important as some demographics impact users’ ability to recognize and mitigate security and privacy risks. Since user authentication systems are generally the first security checkpoint when accessing a PCD, they play a critical role as one of many precautionary measures across an entire computing experience.
User authentication is often a prerequisite for allowing user access to resources in a system. There are three common authentication methods: knowledge-based, token-based, and biometrics. How all three methods support users across various demographic groups has not been well studied.
In this talk, we provide fundamental understanding of user authentication systems and discuss prior work of the Cyber Identity and Behavior Research Lab. These efforts include various approaches to mobile biometrics, including user recognition, soft biometric classification, and detection of adversary attacks in mobile biometric authentication systems. We then overview ongoing research specifically concerning age-aware user authentication systems for PCDs. We conclude with a brief discussion of additional research on user identity and IoT devices and discuss open research challenges of interest to the lab.
Mental Models and Trust in Human-Autonomy Teaming
September 22, 2021
Gerald Arthur Matthews, Professor of Psychology, George Mason University
Autonomous systems introduce novel sources of workload for human operators, including cognitive, social, and self-regulative demands. Failure to manage workload may damage trust and impair teaming with the system. The key to managing the demands of teaming may be the activation of a mental model that supports constructive engagement with autonomy. I will review recent studies of individual differences in the human’s dominant mental model and their impact on trust, conducted in collaboration with the Air Force Research Laboratory.
One study developed an assessment of the mental model applied to working with robots in security contexts. A second study utilized an immersive simulation of a security operation to explore how the mental model interacted with cues to a threat to influence trust in the robot. A further study is planned to test methods for enhancing trust in high-pressure settings in which human and robot threat evaluations may disagree. Study findings can inform system design for trust optimization.
Color Semantics for Visual Communication
July 28, 2021
Karen B. Schloss, Assistant Professor, University of Wisconsin-Madison
What do the following tasks have in common: organizing recyclables using colored bins, analyzing neuroimaging data using brain maps, and thinking through logic using Venn diagrams? All of these tasks require visual reasoning, in which people infer meaning from visual features, and use those inferences to make judgments about the world. Previous work on visual reasoning has primarily focused on visuospatial relations, suggesting that surface properties like color are less useful, and even harmful for visual reasoning. However, my research suggests people use a process called assignment inference, which makes visual reasoning through color far more robust than previously thought.
I will discuss recent empirical evidence and modeling that support this position and unite this work under a new framework, the Global Assignment Framework. Understanding visual reasoning has both theoretical and practical implications. From a theoretical perspective, it provides insight into how people form conceptual representations from perceptual input. From a practical perspective, it will help create information visualizations that make visual communication more effective and efficient.
Learning Long-Timescale Behavior from Short Trajectory Data
June 23, 2021
Jonathan Q. Weare, Professor, New York University
Events that occur on very long timescales are often the most interesting features of complicated dynamical systems. Even when we are able to reach the required timescales with a sufficiently accurate computer simulation, the resulting high-dimensional data is difficult to mine for useful information about the event of interest. Over the past two decades, substantial progress has been made in the development of tools aimed at turning trajectory data into useful understanding of long-timescale processes. I will begin by describing one of the most popular of these tools, the variational approach to conformational dynamics (VAC).
VAC involves approximating the eigenfunctions corresponding to large eigenvalues of the transition operator of the Markov process under study. These eigenfunctions encode the most slowly decorrelating functions of the system. I will describe our efforts to close significant gaps in the mathematical understanding of VAC error. A second part of the talk will focus on a family of methods very closely related to VAC that aim to compute predictions of specific long timescale phenomena (i.e., rare events) using only relatively short trajectory data (e.g., much shorter than the return time of the event). I will close by presenting a few questions for future numerical analysis.
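In its simplest form, VAC can be written in a few lines: given one trajectory and a small dictionary of basis functions, the slow eigenfunctions solve the generalized eigenproblem C(τ)v = λC(0)v built from time-lagged correlations. The lag, basis, and double-well dynamics below are toy choices, not the talk's systems.

```python
# Bare-bones VAC estimate from one trajectory: slow eigenfunctions
# solve C(tau) v = lambda C(0) v with time-lagged correlation matrices.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
T, lag = 100_000, 50
x = np.zeros(T)                          # overdamped double-well sampler
for t in range(1, T):
    x[t] = x[t-1] + 0.01 * (x[t-1] - x[t-1]**3) + 0.1 * rng.normal()

basis = np.stack([x, x**2, x**3], axis=1)    # simple monomial dictionary
basis -= basis.mean(axis=0)
P0, P1 = basis[:-lag], basis[lag:]
C0 = P0.T @ P0 / len(P0)
Ct = (P0.T @ P1 + P1.T @ P0) / (2 * len(P0)) # symmetrized lagged corr.
evals, evecs = eigh(Ct, C0)
print("slowest VAC eigenvalue:", evals[-1])  # near 1 => slow mode
```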
Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification
May 19, 2021
Edgar J. Lobaton, Professor, North Carolina State University
Reliable environmental context prediction is critical for wearable robots (e.g., prostheses and exoskeletons) to assist terrain-adaptive locomotion. This presentation proposes a novel vision-based context prediction framework for lower limb prostheses to simultaneously predict humans' environmental context for multiple forecast windows. By leveraging Bayesian Neural Networks, our framework can quantify the uncertainty caused by different factors (e.g., observation noise, insufficient or biased training) and produce a calibrated predicted probability for online decision-making. We compared two wearable camera locations (a pair of glasses and a lower limb device), independently and conjointly.
We utilized the calibrated predicted probability for online decision-making and fusion. We demonstrated how to interpret deep neural networks with uncertainty measures and how to improve the algorithms based on the uncertainty analysis. We implemented and optimized our framework for embedded hardware and evaluated its real-time inference accuracy and computational efficiency. The results of this study may lead to novel context recognition strategies in reliable decision-making, efficient sensor fusion, and improved intelligent system design in various applications.
Dancing with Algorithms: Interactive Machine Learning for C2
April 21, 2021
Robert S. Gutzwiller, Professor, Arizona State University
In this talk, I will explain the origin and execution of a variant of ML used to develop more trustworthy control of unmanned vehicle area search behaviors. ML typically lacks interaction with the user; novel interactive ML techniques incorporate user feedback, enabling observation of emerging ML behaviors and human collaboration during ML of a task. This may enable trust and recognition of these algorithms by end users—an initial experiment with human participants reveals several interesting findings that support the idea. This is the first extension of this technique for vehicle behavior model development targeting user satisfaction and is unique in its multifaceted evaluation of how users perceived, trusted, and implemented these learned controllers.
Tensor-Structured Dictionary Learning: Theory and Algorithms
March 24, 2021
Anand Sarwate, Associate Professor, Rutgers University
Existing and emerging sensing technologies produce multidimensional, or tensor, data. The traditional approach to handling such data is to “flatten”, or vectorize, the data. It is well known that this ignores the multidimensional structure, and that this structure can be exploited to improve performance.
In this talk we study the problem of dictionary learning for tensor data, where we want to learn a set of tensor-structured signals that can represent a data set such that each data point is a sparse linear combination of tensors in the dictionary. A naïve approach to this problem would fail due to the large number of parameters to estimate in the dictionary. However, by looking at dictionaries that admit a low rank factorization, the problem becomes tractable. We characterize how hard the statistical problem of estimating these dictionaries is and provide novel algorithms for learning them. This work shows that low-rank models for tensors can be learned even when the amount of training data is limited.
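A small sketch shows both the parameter savings of a Kronecker-structured (low-rank) dictionary and sparse coding against it, here by plain ISTA; the dimensions, sparsity level, and synthetic data are illustrative assumptions.

```python
# Kronecker-structured dictionary: far fewer parameters than a full
# dictionary of the same size, with sparse coding by ISTA.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 12)); A /= np.linalg.norm(A, axis=0)
B = rng.normal(size=(8, 12)); B /= np.linalg.norm(B, axis=0)
D = np.kron(A, B)                  # 64 x 144 dictionary from 2*96 params
print("full params:", D.size, "structured params:", A.size + B.size)

y = D @ (rng.random(144) * (rng.random(144) < 0.05))  # sparse synthetic signal

# ISTA for min_z 0.5*||y - D z||^2 + lam*||z||_1
lam, step = 0.01, 1.0 / np.linalg.norm(D, 2) ** 2
z = np.zeros(D.shape[1])
for _ in range(200):
    r = z + step * D.T @ (y - D @ z)   # gradient step
    z = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrink
print("nonzeros recovered:", np.count_nonzero(z))
```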
Model-Based, Closed-loop and Autonomous Materials Development
January 27, 2021
Kris Reyes, Professor, University of Buffalo
Closed-loop, sequential learning is a key paradigm in autonomous materials development. Within this framework, aspects of the materials system under study are modeled, and such models are used to decide subsequent experiments to be run—results of which are fed back to update models. In the past, off-the-shelf solutions and algorithms have been employed to optimize material properties.
In this talk, we describe how autonomous materials platforms can be used for problems other than materials property optimization and for experimental processes with complex decision-making structure. We will specifically highlight work on autonomous phase mapping and real-time control and how a common framework can be used to capture problem-specific features and objectives. We also discuss the use of physical models within the closed loop to perform model-based, scientific reinforcement learning, allowing us to both leverage and gain mechanistic physical insight on the material system under study while simultaneously optimizing its properties.
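A stripped-down closed loop conveys the paradigm: a Gaussian-process surrogate (implemented directly in numpy) proposes the next "experiment" by expected improvement, the measured result is fed back, and the surrogate is refit. The one-dimensional objective below stands in for a material property and is purely illustrative.

```python
# Minimal closed-loop sequential learning: GP surrogate + expected
# improvement decides each "experiment"; results update the model.
import numpy as np
from scipy.stats import norm

def property_of(x):                       # hidden "experiment"
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

def gp_posterior(Xs, ys, Xq, ls=0.1, noise=1e-6):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(Xs, Xs) + noise * np.eye(len(Xs))
    Kq = k(Xq, Xs)
    mu = Kq @ np.linalg.solve(K, ys)
    var = 1.0 - np.sum(Kq * np.linalg.solve(K, Kq.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

X = np.array([0.1, 0.9]); y = property_of(X)       # seed experiments
grid = np.linspace(0, 1, 200)
for _ in range(10):                                 # the closed loop
    mu, sd = gp_posterior(X, y, grid)
    z = (mu - y.max()) / sd
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[ei.argmax()]                      # decide next experiment
    X = np.append(X, x_next); y = np.append(y, property_of(x_next))
print("best composition found:", X[y.argmax()])
```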
Visually Exploring the Shape of Activations in Deep Learning
January 20, 2021
Bei Wang Phillips, Associate Professor, University of Utah
To understand how deep neural networks achieve their performance, we probe a trained network by studying neuron activations, i.e., combinations of neuron firings at various layers in response to a particular input. With a large number of inputs, we aim to obtain a global view of what neurons detect by studying their activations. We develop visualizations that show the shape of the activation space, the organizational principle behind neuron activations, and the relationships of these activations within a layer. Applying tools from topological data analysis, we present TopoAct, a visual exploration system to study topological summaries of activation vectors. We present exploration scenarios using TopoAct that provide valuable insights into learned representations of neural networks. We expect TopoAct to give a topological perspective that enriches the current toolbox of neural network analysis and to provide a basis for network architecture diagnosis and data anomaly detection.
Data-driven Discovery of Physics: When Deep Learning Meets Symbolic Reasoning
December 16, 2020
Hao Sun, Assistant Professor, Northeastern University
Harnessing data to model and discover complex physical systems has become a critical scientific problem in many science and engineering areas. The state-of-the-art advances in AI (in particular deep learning, thanks to its rich representations for learning complex nonlinear functions) have great potential to tackle this challenge but, in general, rely on large amounts of rich data to train a robust model, suffer from generalization/extrapolation issues, and lack interpretability and explainability, with little physical meaning. To bridge the knowledge gaps between AI and complex physical systems in the sparse/small-data regime, this talk will introduce the integration of bottom-up (data-driven) and top-down (physics-based) processes through a physics-informed learning and reasoning paradigm for the discovery of discrete and continuous dynamical systems.
In particular, we will discuss several methods that fuse deep learning and symbolic reasoning for data-driven discovery of the mathematical equations (e.g., nonlinear ODEs/PDEs) that govern the behavior of complex physical systems, such as chaotic systems, reaction-diffusion processes, wave propagation, and fluid flows.
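One classical instance of this fusion is sparse symbolic regression in the SINDy style, sketched below: fit the time derivative of observed data against a library of candidate terms and sequentially threshold the coefficients until a parsimonious equation remains. The library, threshold, and logistic dynamics are illustrative choices, not the talk's systems.

```python
# Sequentially thresholded least squares (the SINDy recipe) recovering
# the logistic ODE dx/dt = x - x^2 from simulated data.
import numpy as np

dt = 0.01
t = np.arange(0, 5, dt)
x = 0.1 * np.exp(t) / (1 + 0.1 * (np.exp(t) - 1))  # logistic solution
dxdt = np.gradient(x, dt)                          # numerical derivative

library = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)
names = ["1", "x", "x^2", "x^3"]

xi, _, _, _ = np.linalg.lstsq(library, dxdt, rcond=None)
for _ in range(10):                                # threshold and refit
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    xi[big], *_ = np.linalg.lstsq(library[:, big], dxdt, rcond=None)

print("dx/dt =", " + ".join(f"{c:.2f}*{n}" for c, n in zip(xi, names) if c))
```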
Machine Learning for the Real World: Conservative Extrapolation Under Domain Shift
October 28, 2020
Anqi (Angie) Liu, Postdoctoral Fellow at the California Institute of Technology
The unprecedented prediction accuracy of modern ML beckons for its use in a wide range of real-world applications, including autonomous robots, medical decision making, scientific experiment design, and many others. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains, especially safety-critical ones, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples.
In this talk, we will describe a robust learning framework that offers rigorous extrapolation guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision making and is easily integrated with modern deep learning. The practicality of this framework in an application on agile robotic control will be showcased. A survey of other applications as well as directions for future work will be discussed.
Exploring Graph Data with Why-provenance
September 23, 2020
Yinghui Wu, Assistant Professor, Case Western Reserve University
Collecting and processing data provenance (information describing the production process of data) is important in emerging applications, such as exploratory search and explainable AI. In particular, why-provenance aims to identify the components of queries and data that are responsible for the results of interests (e.g., unexpected or missing answers).
In this talk, we will introduce recent work on extending why-provenance to help users explore graph data. We characterize why-provenance for graph queries as a class of "why" questions ("why is some data included/missing in the result?"). We approach why-provenance with query rewriting based on a formal provenance structure annotated with query operators. We present feasible solutions to derive queries that lead to desirable answers. We also discuss data provenance approaches for graphs (e.g., constraint-based explanation). We showcase our solutions with applications in tracking ML, power event analysis, and cybersecurity.
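A toy rendering of why-provenance for graphs may help fix ideas: for the reachability query "is t reachable from s?", the why-provenance of a positive answer is a witness, here the set of edges on one path that produces the result. The graph and query are invented for illustration.

```python
# Why-provenance sketch for a reachability query: the witness edges of
# one path explain why the answer is in the result.
from collections import deque

edges = [("a", "b"), ("b", "c"), ("a", "d"), ("d", "c"), ("c", "e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)

def why_reachable(src, dst):
    parent, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct the witness path
            path = [u]
            while path[-1] != src:
                path.append(parent[path[-1]])
            path.reverse()
            return list(zip(path, path[1:]))   # edges explaining the answer
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v); parent[v] = u; q.append(v)
    return None                           # unreachable: no why-provenance

print(why_reachable("a", "e"))   # e.g. [('a','b'),('b','c'),('c','e')]
```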
Integrated Data Analytics Scheme to Enable Data-Driven Decision-Making in Clean Energy and Cyber Manufacturing
September 21, 2020
Krystel Castillo, Greenstar Endowed Professor and Director of the Texas Sustainable Energy Research Institute, University of Texas at San Antonio
Big data analytics and novel optimization algorithms are essential to expand the frontiers of basic and applied research as well as to support critical decision-making. In order to accelerate the rate of innovation and discovery of relevant application areas in clean energy and defense manufacturing, an integrated theoretical and computational scheme that links experimental and field data, data-driven models, and optimization algorithms has been developed and is being integrated into a science-as-service platform. The alpha version of a cloud-based decision support system aims to facilitate the acquisition and creation of experimental and field data as well as computational models.
Towards Resilient Machine Learning in Adversarial Environments
August 26, 2020
Alina Oprea, Associate Professor, Northeastern University
ML is increasingly being used for automated decisions in applications, such as health care, finance, and cyber security. In these critical environments, attackers have strong incentives to manipulate the results and models generated by ML algorithms. The area of adversarial ML studies the effect of adversarial attacks against ML models and aims to design robust defense algorithms.
In this talk, we will describe our work applying ML to detect advanced adversaries in cyber networks. We will introduce several types of attacks against ML and discuss their impact on real-world applications. Several challenges and open problems in securing ML in critical adversarial environments will also be mentioned.
Handling Uncertainty in Dynamical Systems: Challenges and Opportunities
July 30, 2020
Martine Ceberio, Professor of Computer Science, University of Texas at El Paso
The ability to conduct fast and reliable simulations of dynamical systems is of special interest in many application areas. However, such simulations can be very complex and involve millions of variables, making it prohibitive in CPU time to run repeatedly on many different configurations. Reduced-order modeling methods provide a way to handle such complex simulations using a realistic amount of resources. They allow improving predictions and reducing the risk of decisions on critical applications. In many situations though, uncertainty is present in some aspect of the problem to be handled, or reliability needs to be enhanced.
In this presentation, we will go over challenges and opportunities that present themselves when uncertainty needs to be handled.
Adversarial Robustness of Deep Learning: Attack, Defense, and Verification
July 17, 2020
Pin-Yu Chen, Researcher, IBM Research
Despite achieving high standard accuracy in a variety of ML tasks, deep learning models built upon neural networks have recently been identified as lacking adversarial robustness. The decision making of well-trained deep learning models can be easily falsified and manipulated, raising ever-increasing concerns in safety-critical and security-sensitive applications that require certified robustness and guaranteed reliability.
We will provide an overview of recent advances in the research of adversarial robustness, featuring both comprehensive research topics and technical depth. We will cover three fundamental pillars in adversarial robustness: attack, defense, and verification. For each pillar, we will emphasize the tight connection between ML and the research in adversarial robustness.
Developing Societal Resilience to Man-Made and Natural Disasters
June 25, 2020
Jun Zhuang, Professor and Director of the Decision, Risk, and Data Laboratory, University at Buffalo
Society is faced with a growing amount of property damage and casualties from man-made and natural disasters. Developing societal resilience to those disasters is critical but challenging. We will present a sequence of games among players such as federal, local, and foreign governments; private citizens; and adaptive adversaries. In particular, the governments and private citizens seek to protect lives, property, and critical infrastructure from both adaptive terrorists and non-adaptive natural disasters. The federal government can provide grants to local governments and foreign aid to foreign governments to protect against both natural and man-made disasters, and all levels of government can provide pre-disaster preparation and post-disaster relief to private citizens. Private citizens can also, of course, make their own investments. The tradeoffs between protecting against man-made and natural disasters, between preparedness and relief, between efficiency and equity, and between private and public investment will be discussed. Recent research on big data analytics, fire management, and decision modeling of social media rumor spreading and management will also be discussed.
Brains, AI, and Silicon – A New Generation of Energy Efficient AI Systems
June 4, 2020
Dhireesha Kudithipudi, Professor and Director of AI Consortium, the University of Texas at San Antonio
Brains perform massively parallel and complex real-time learning, memory, and perception tasks that often depend on incomplete and noisy data at ultra-low energy. These characteristics are earmarks of an AI system capable of lifelong learning. The subject of this talk is how the study of information processing in different brain regions can inspire the design of sophisticated AI algorithms and energy-efficient hardware platforms. First, brain-inspired algorithms serve as models of how continual processing may be achieved in the brain. We posit that hierarchy, sparse distributed representations, random projections, and plasticity are core features of these algorithms. Second, we argue that devices that exhibit history-dependent conductivity modulation will facilitate complex synaptic and neuronal operations on a compact area and with a small energy footprint. We designed and taped out an on-device learning AI platform for the edge, aka Aurora, which features orders of magnitude improvement in energy dissipated for real-time learning tasks.
Automated Knowledge Discovery: A Case Study
May 27, 2020
Juan Gutierrez, Professor and Chair of the Department of Mathematics, The University of Texas at San Antonio
Given a constellation of data generators operating at heterogeneous frequencies, precisions, and densities, a natural question to ask is: can artificial reasoning extract knowledge from scientific data without human intervention? In this talk we will explore in general the conditions required for automated knowledge discovery to occur, and we will present briefly a proof of concept through a framework named SKED (Scientific Knowledge Extraction from Data) applied to a systems biology problem. The SKED platform is a set of reconfigurable tools that operate on arbitrary data sets to create a layer of uniformity that allows the deployment of complex configurations of model formalisms. SKED ingests data from multiple repositories and transforms all data into a reduced set of "data primitives" that are independent from the underlying storage strategy. All modeling formalisms in SKED accept as inputs only data primitives and produce as outputs only data primitives. Thus, the overhead of reconfiguring these tools to study new problems is minimized and can be automated.
Hybrid Physics and Machine Learning for Atmospheric, Hydrologic, and Climate Systems
May 8, 2020
Auroop Ganguly, Professor, Northeastern University
AI, despite being well positioned to transform fields ranging from computer vision and language translation to drug discovery and intelligent transportation, has occasionally received mixed reception in the context of representing complex nonlinear dynamical systems.
First, we will discuss corresponding theoretical and practical considerations based on the literature and our own research. Second, we will discuss the excitement, as well as the potential pitfalls, around these areas. Finally, we propose a way forward for credibly understanding the role of water in the climate system, with falsifiable hypotheses and testbeds designed around novel and interpretable hybrid physics-ML approaches, potentially leading to new science insights, model improvements, and risk-informed adaptation.
AI for Cybersecurity and Cyber Deception
April 28, 2020
Chris Kiekintveld, Associate Professor, The University of Texas at El Paso
The rapid advancement of AI methods is having a profound impact on cybersecurity, and we can expect this trend to continue. The ability to process massive amounts of data to identify patterns and trends, and to make very complex automated decisions faster than any human can respond, is becoming key to both attack and defense tactics. However, these AI systems also open new attack surfaces and vulnerabilities, as demonstrated by recent work on adversarial ML.
We will provide an overview of the challenges and opportunities posed by the intersection of AI and cybersecurity, followed by examples drawn from the speaker's own work. We will discuss applications of game theory for optimizing cyber deception strategies, including cases with uncertainty and sequential interactions; work on modeling and playing against human opponents in cybersecurity games; and work on adversarial ML, including connections to game theory.