June 18, 2024

PNNL, NVIDIA Host LLM Day Amid Generative AI Surge

Center for AI @ PNNL's new event brings together AI experts and Lab researchers

A podium during an event with a researcher next to it, alongside the logo for NVIDIA LLM Day and PNNL's Center for AI.

Evan Acharya, senior solutions architect with NVIDIA, speaking at LLM Day.

(Composite image by Shannon Colson | Pacific Northwest National Laboratory)

Even before generative artificial intelligence (AI) became an overnight sensation last year, researchers at Pacific Northwest National Laboratory (PNNL) were working to leverage the dazzling technology for transformative scientific research. The newly formed Center for AI @ PNNL, in partnership with NVIDIA, recently hosted “LLM Day.” During the day—part of a new series of collaborative events hosted by the Center for AI—NVIDIA AI experts engaged with PNNL scientists on opportunities to make generative AI a powerful tool for science.

The event focused on large language models (LLMs): generative AI models that ingest massive amounts of information—typically text, such as websites or research papers—and use that information to generate summaries, answers, or new content. The promise of LLMs in research is tantalizing, but the nascent, fast-moving technology—which can be prone to hallucinations and inaccuracies—poses a number of challenges to researchers.

“The scientific community is sitting on a lot of data,” said Geetika Gupta, director of product management for NVIDIA, during LLM Day’s morning presentations, adding that generative AI “is a tool that the scientific community can use to interact with the data that they have collected over so many years.”

NVIDIA opened the day by presenting an overview of LLMs’ rapid evolution, but the talks quickly turned into a dialogue between NVIDIA speakers and PNNL scientists on how to tailor and use LLMs for domain-specific fields. One emphasis of the day: retrieval-augmented generation (RAG)—a powerful process for augmenting LLMs with custom data to keep them current and domain-aware without needing to retrain an enormous AI model.

“Conversational models like ChatGPT are trained on general knowledge from the internet—so this knowledge or training data is frozen in time,” explained Praveen Nakshatrala, a senior solutions architect at NVIDIA, adding that scientists often need fresher, domain-specific data outside the training data. “That’s where RAG comes in.”
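The idea Nakshatrala describes can be sketched in a few lines. The toy example below (the corpus, overlap-based scoring, and prompt template are all illustrative assumptions—production RAG pipelines use learned vector embeddings and a vector database, not word overlap) shows the core loop: retrieve the most relevant documents for a query, then prepend them to the prompt so the model answers from data outside its frozen training set.

```python
# A minimal sketch of the retrieval step in RAG (illustrative only; not
# NVIDIA's or PNNL's implementation). Real systems score relevance with
# learned vector embeddings; here, simple word overlap stands in.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model answers from fresh data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical domain documents, for illustration.
corpus = [
    "Catalyst screening results from the 2024 experiments.",
    "Grid load forecasts for the Pacific Northwest.",
]
print(build_prompt("Which catalyst experiments ran in 2024?", corpus))
```

Because the retrieved text is supplied at query time, the underlying model never needs retraining when the document collection changes—the key advantage Nakshatrala highlights.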

Later in the day, the roles reversed, with PNNL researchers presenting on everything from policy and infrastructure for LLMs to specific projects like ChemReasoner—an LLM-driven tool for catalyst discovery. The following day, dozens of PNNL researchers dropped in to speak with experts from the Lab and NVIDIA about their own projects.

“Events like these are absolutely crucial,” said Nancy Washton, a PNNL chemist who has been working with LLMs since late 2022. “We have a large cohort of computer scientists, but there are technical challenges for experimentalists like me who are recognizing that this developing area is going to enhance our ability to deal with a lot of the challenges that humanity is facing.”

“Having NVIDIA come out to the Lab—and having the event geared toward our needs—sifted out all the chaff and allowed us to focus on the most pressing topics for our research,” Washton added. “It was invaluable.”

The recent LLM Day reflects PNNL’s growing emphasis on operationalizing generative AI for science. Last year, the Lab launched the Center for AI to explore the frontiers of AI, and just recently, the Lab provided Laboratory-Directed Research and Development funding for a specific initiative on generative AI for scientific discovery. Projects like ChatGrid (a generative AI tool for power grid visualization) and the AI Incubator (PNNL’s internal general-purpose generative AI tool) have already produced promising results.

“PNNL has done some great work in using LLMs in their workflows to accelerate and augment,” said Yuliana Zamora, a senior solutions architect with NVIDIA.

“We’re interested in early access to NVIDIA Blackwell (and eventually, Rubin) to quantify their performance benefits for new and future generative AI methods,” added James Ang, chief scientist for computing at PNNL.

To stay informed about the Center for AI @ PNNL and the Laboratory's ongoing innovations in artificial intelligence, subscribe to our newsletter.

Published: June 18, 2024