The Center for AI @ PNNL Helps Keep U.S. at Forefront of AI for Science, Energy, and Security
The arrival and widespread use of generative artificial intelligence, like OpenAI’s ChatGPT and Anthropic’s Claude, over the past 18 months have raised awareness of AI systems around the world. Our scientists at Pacific Northwest National Laboratory (PNNL) have been working on AI for decades. In fact, there’s so much happening in AI at PNNL that we launched the Center for AI @ PNNL, a virtual research hub, in December 2023 to bring together and coordinate these efforts in a bid to keep the U.S. at the cutting edge of science, security, and technology.
Scientists at the Center are harnessing advances in deep learning, deep reinforcement learning, and generative AI to change how science is conducted and to achieve original scientific results and breakthroughs. Through the Center for AI, we will maintain and strengthen PNNL’s leadership in service of the nation by advancing AI’s frontier and its applications, which range from cybersecurity and grid resilience to climate modeling and proteomics. One breakthrough at AI’s frontier is a first-of-its-kind, Lab-developed “gray box” deep learning method for automated grid optimization. This technique leverages PNNL-led advances in physics-informed machine learning, which encodes physics knowledge into AI networks to improve their accuracy.
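PNNL’s grid models themselves are not described here, so as a purely hypothetical illustration of the general idea: a physics-informed loss augments the usual data-fitting term with a penalty for violating a known governing equation, so the network is rewarded for predictions that obey the physics. A minimal NumPy sketch, assuming a toy ODE dy/dt = −k·y as the encoded physics:

```python
import numpy as np

def physics_informed_loss(y_pred, y_obs, t, k=1.0, weight=0.5):
    """Toy physics-informed loss (illustrative only): a standard
    data-misfit term plus a penalty on the residual of the assumed
    governing equation dy/dt = -k*y."""
    data_loss = np.mean((y_pred - y_obs) ** 2)
    dydt = np.gradient(y_pred, t)          # finite-difference derivative
    physics_residual = dydt + k * y_pred   # zero when the ODE is satisfied
    physics_loss = np.mean(physics_residual ** 2)
    return data_loss + weight * physics_loss
```

Predictions that match the data but break the physics (or vice versa) are penalized, which is what lets these models generalize from less data than a purely black-box network.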
National labs are deeply focused on areas of strategic importance to the nation. One focus of the Center is to ensure that AI models and systems are safe and trustworthy, especially when used in high-consequence environments like the power grid and other critical infrastructure. When AI systems are deployed, it is critical to know that their calculations are reliable and accurate. AI assurance includes developing techniques to verify and validate that AI systems do what we expect them to do; we want the right controls in place to enable trust in the entire AI software ecosystem. There’s also a cybersecurity aspect to our efforts, especially where AI is deployed in software and networked systems. Our responsible and ethical AI efforts at PNNL work to ensure that the development and deployment of these new, powerful tools factor in accountability, transparency, fairness, and robustness.
Another key area of interest for the Center is the development of an AI-ready workforce. Hundreds of individuals, including government sponsors and stakeholders, have already been trained in PNNL-taught boot camps. This includes training and upskilling our staff at PNNL to use AI systems, as well as working with federal sponsors to prepare them to incorporate AI solutions developed at PNNL into their workflows. For example, the Lab is working with DOE’s Office of Policy to use generative AI to accelerate the environmental permitting and siting process for infrastructure projects through PolicyAI, a policy-specific large language model test bed.
PNNL collaborates with scientists across academia and industry to advance fundamental research in AI across the areas of science, security, and energy. These are challenges too big and too important for any single organization to tackle alone. Researchers at PNNL and The University of Texas at El Paso, for example, are seeking to protect data from security breaches by developing a machine learning model in which two neural networks compete, each pushing the other to make more accurate predictions. PNNL also partners with industry leaders to develop and deploy the essential infrastructure needed to train modern AI systems, including the clusters of graphics processing units (GPUs) used to train foundation models. For example, a partnership with Microsoft Azure Quantum demonstrated how AI could help more quickly identify and synthesize possible energy storage materials.
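The PNNL–UTEP model’s internals aren’t detailed here, but the “two competing networks” setup it describes is the core of generative adversarial training. As a rough, hypothetical sketch of the opposing objectives involved (not the project’s actual code), the two networks minimize mirror-image binary cross-entropy losses:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: push scores on real samples toward 1
    and scores on generated (fake) samples toward 0."""
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake):
    """The generator wins by making the discriminator score its
    fakes as real (scores near 1 give a small loss)."""
    eps = 1e-12
    return -np.mean(np.log(d_fake + eps))
```

Training alternates between the two: as the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones, which is what drives both networks toward more accurate predictions.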
As we develop new solutions for the future, it’s important to remember that we are building on a rich foundation. Since the 1990s, PNNL has been pioneering research in data analytics and visualization. PNNL also has been on the front lines of developing AI methods, such as few-shot learning, which enables deep-learning models to learn from far less labeled data and can even reduce computing power requirements. This technique is being used to develop computer vision algorithms powering novel geospatial analytics as part of the Lab’s national security and nonproliferation efforts.
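Few-shot methods come in many flavors; one widely used formulation, prototypical networks, is shown here purely as an illustrative stand-in (not necessarily the Lab’s method). It classifies a new sample by comparing its embedding to the mean embedding of just a handful of labeled examples per class:

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Assign each query embedding to the class whose prototype
    (the mean of that class's few labeled support embeddings) is
    nearest in Euclidean distance."""
    classes = np.unique(support_labels)
    protos = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # distance from every query to every class prototype (broadcasting)
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]
```

Because each class is summarized by an average of only a few examples, the model needs no retraining to recognize a new class, which is what makes the approach attractive when labeled data is scarce.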
Over the next 10 years, we will experience significant changes in how science is conducted at a fundamental level; breakthroughs in AI will enable increasingly autonomous experiments and empower us to leverage ever-larger datasets for more and more meaningful insights. The Center for AI @ PNNL will keep the Lab at the frontier of AI by developing safe and trustworthy AI models and systems, providing access to data for AI training and next-generation AI platforms, and using the discipline to drive discovery in science, energy efficiency and resilience, and national security.
I’m excited to lead the Center and look forward to sharing more about our research and accomplishments in the months and years to come. In the meantime, I invite you to subscribe to the Center for AI newsletter to stay up to date on our latest news.
Published: May 22, 2024