Artificial Intelligence
Applying machine learning to science and security challenges
Over the past decade, artificial intelligence (AI), which enables machines to learn and make decisions without being explicitly programmed, has experienced a renaissance.
AI has enabled a new generation of applications, opening the door to breakthroughs in many aspects of daily life. From situational awareness and threat detection to online signals and system assurance, Pacific Northwest National Laboratory (PNNL) is advancing the frontiers of scientific research and national security by applying AI to scientific problems.
The recently established Center for AI @PNNL drives strategic initiatives and shapes the future direction of AI research at the laboratory. The Center is a crucial component in supporting the Department of Energy’s Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative, which will drive advances in AI-ready data, frontier-scale compute, and safe, secure, and trustworthy models, thereby accelerating discovery in science, security, and technology.
Starting with the right environment
For machine learning models, combining domain-specific knowledge with domain-agnostic data improves accuracy, interpretability, and defensibility. Our AI research has been applied across a variety of domains, from national security to the electric grid and Earth systems. Leveraging deep expertise in the power grid domain, PNNL’s DeepGrid open-source platform uses deep reinforcement learning to help power system operators create more robust emergency control protocols—the safety net of our electric grid.
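To make the underlying technique concrete, here is a minimal sketch of deep reinforcement learning on a toy emergency-control-style task. The environment, reward function, and Q-network are hypothetical stand-ins, not the actual DeepGrid platform or a realistic grid model.

```python
# Minimal deep-reinforcement-learning sketch for a toy "emergency control" task.
# Hypothetical illustration only: not DeepGrid code or a realistic power system model.
import random
import torch
import torch.nn as nn

class ToyGridEnv:
    """Toy environment: restore bus voltage toward 1.0 p.u. by shedding load."""
    def reset(self):
        self.voltage = 0.90 + 0.05 * random.random()        # disturbed starting state
        return torch.tensor([self.voltage])

    def step(self, action):                                  # action: 0 = hold, 1 = shed load
        self.voltage += 0.02 if action == 1 else -0.01
        reward = -abs(self.voltage - 1.0) - 0.05 * action    # penalize deviation and shedding
        done = self.voltage >= 1.0 or self.voltage <= 0.85
        return torch.tensor([self.voltage]), reward, done

q_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))   # state -> action values
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
env, gamma, epsilon = ToyGridEnv(), 0.99, 0.1

for episode in range(200):
    state, done = env.reset(), False
    while not done:
        q_values = q_net(state)
        action = random.randrange(2) if random.random() < epsilon else int(q_values.argmax())
        next_state, reward, done = env.step(action)
        with torch.no_grad():                                # one-step temporal-difference target
            target = reward + (0.0 if done else gamma * q_net(next_state).max().item())
        loss = (q_values[action] - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = next_state
```

In a realistic setting the state would include many measurements, the action space would cover multiple control options, and training would add experience replay and target networks; the learned policy then acts as a fast-responding safety net during disturbances.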
As part of the Physics-Informed Learning Machines for Multiscale and Multiphysics Problems Center, our researchers are developing physics-informed machine learning techniques. We are also developing generative models for molecular structures and scalable implementations of deep reinforcement learning algorithms as part of ExaLearn, the Exascale Computing Project’s codesign center.
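The physics-informed idea can be shown with a deliberately simple example: the network below is trained so that its output satisfies the ordinary differential equation du/dx = -u with u(0) = 1, whose exact solution is e^-x. The residual of the governing equation becomes part of the loss, so the physics constrains the model even without labeled data. This is an illustrative toy under those assumptions, not the center’s codebase.

```python
# Minimal physics-informed learning sketch: enforce du/dx = -u, u(0) = 1 via the loss.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)                # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()                 # residual of du/dx = -u
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # boundary condition u(0) = 1
    loss = physics_loss + boundary_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(net(torch.tensor([[0.5]])).item())                     # should approach exp(-0.5) ≈ 0.607
```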
Building stronger AI systems
PNNL takes a holistic approach to research focused on assuring the safety, security, interpretability, explainability, and general robustness of AI-enabled systems deployed in the real world. This research includes understanding and mitigating system failures caused by design and development flaws, as well as the malicious activities of adversaries.
Revealing the reasoning behind deep-learning-based decisions is a critical component of assuring safety, security, and robustness. This insight allows our researchers to assess complex systems from the perspectives of digital and physical system security, as well as development and operations.
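One widely used way to surface such reasoning, sketched below, is input-gradient saliency: the gradient of the predicted class score with respect to the input indicates which features most influenced the decision. The toy classifier and input here are placeholders, not a PNNL system.

```python
# Minimal input-gradient saliency sketch; model and input are toy placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))   # stand-in classifier
model.eval()

x = torch.randn(1, 10, requires_grad=True)           # one input example with 10 features
logits = model(x)
predicted = int(logits.argmax(dim=1))                 # class the model would report
logits[0, predicted].backward()                       # gradient of the winning class score

saliency = x.grad.abs().squeeze()                     # per-feature influence on the decision
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: {s:.4f}")
```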
Forecasting real-world events
PNNL’s research in content intelligence focuses on the development of novel AI models to explain and predict social systems and behaviors related to national security challenges in the human domain. Our expertise in descriptive, predictive, and prescriptive analytics ranges from disinformation detection and attribution to forecasting real-world events such as influenza outbreaks and cryptocurrency prices.
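As a toy illustration of the forecasting side of this work, the sketch below fits a simple autoregressive model to a synthetic seasonal signal and predicts the next value. The data and model are placeholders, far simpler than the models used to forecast real-world events.

```python
# Toy forecasting sketch: autoregressive least-squares fit on a synthetic weekly signal.
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(200)
signal = 50 + 30 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 3, weeks.size)  # seasonal "case counts"

lags = 4                                                  # predict from the last 4 weeks
X = np.column_stack([signal[i:i - lags] for i in range(lags)])
y = signal[lags:]
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

last = signal[-lags:]                                     # most recent observations
forecast = last @ coef[:lags] + coef[lags]                # one-step-ahead prediction
print(f"forecast for next week: {forecast:.1f}")
```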
PNNL’s interactive tools, such as CrossCheck, ESTEEM, and ErrFilter, not only help ensure that we develop robust and generalizable AI models, but also advance understanding of, and effective reasoning about, extreme volumes of dynamic, multilingual, and diverse real-world data.
Integrating across missions
Data engineering is foundational to data science, focusing on the flow of information from data sources to applications. Combining this capability—including expertise in data architectures and pipelines, data collection, and validation—with AI enables cross-functional teams to deliver optimal solutions in critical mission spaces.
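A minimal, hypothetical sketch of that source-to-application flow: records are ingested from a stand-in source, validated against a simple schema, and only then passed downstream. The field names and checks are illustrative assumptions, not a PNNL pipeline.

```python
# Hypothetical ingest-and-validate step of a data pipeline; schema and checks are illustrative.
import csv
import io

RAW = io.StringIO("sensor_id,reading\nA1,0.93\nA2,not_a_number\nA3,1.02\n")   # stand-in data source

def validate(row):
    """Keep only rows with a well-formed numeric reading in a plausible range."""
    try:
        value = float(row["reading"])
    except ValueError:
        return None
    return {"sensor_id": row["sensor_id"], "reading": value} if 0.0 <= value <= 2.0 else None

clean = [r for r in (validate(row) for row in csv.DictReader(RAW)) if r is not None]
print(clean)   # validated records that would feed downstream AI models
```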
PNNL is teaming with Sandia National Laboratories and the Georgia Institute of Technology on the Center for ARtificial Intelligence-focused ARchitectures and Algorithms (ARIAA). The ARIAA team is exploring how to apply AI to address the U.S. Department of Energy’s mission needs in cybersecurity, graph analytics, and other areas.
We also partner with Oak Ridge National Laboratory, the University of Arizona, and the University of California, Santa Barbara, to develop mathematical foundations for data-driven decision control for complex systems, such as high-energy physics facilities and smart buildings.
Further, our researchers integrate current approaches for scientific high-performance computing, deep learning, and graph analytics into a converged, coherent computing capability that accelerates scientific discovery.
Gaining great insights from small datasets
While most few-shot learning research focuses exclusively on images, PNNL researchers have shown success with other data types, including text, audio, and video. This has greatly expanded our AI capabilities beyond traditional, publicly available image datasets and allows researchers to quickly build machine learning models from small numbers of user-classified training examples.
Sharkzor, for example, combines human interaction with machine learning techniques to allow classification from just five to ten images, far fewer than the hundreds or thousands needed for traditional deep learning.
Learn more about how Sharkzor can be applied to nuclear forensics analysis.
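The few-shot pattern behind tools like Sharkzor can be sketched as follows (an illustration of the general prototype-based approach, not Sharkzor’s actual code): a fixed feature extractor embeds each example, the handful of labeled embeddings per class are averaged into a prototype, and new items are assigned to the nearest prototype. With a strong pretrained encoder, five to ten labeled examples per class can be enough.

```python
# Minimal prototype-based few-shot classification sketch; encoder and data are toy stand-ins.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))   # stand-in for a pretrained encoder
embed.eval()

def prototypes(support_images, support_labels):
    """Average the embeddings of the few labeled examples for each class."""
    with torch.no_grad():
        feats = embed(support_images)
    classes = support_labels.unique()
    return classes, torch.stack([feats[support_labels == c].mean(dim=0) for c in classes])

def classify(query_images, classes, protos):
    """Assign each query item to the class with the nearest prototype."""
    with torch.no_grad():
        feats = embed(query_images)
    dists = torch.cdist(feats, protos)                 # pairwise distances to prototypes
    return classes[dists.argmin(dim=1)]

# Example: two classes with five labeled images each (random stand-in data).
support = torch.randn(10, 1, 32, 32)
labels = torch.tensor([0] * 5 + [1] * 5)
classes, protos = prototypes(support, labels)
print(classify(torch.randn(3, 1, 32, 32), classes, protos))
```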