March 26, 2021
Feature

Decoding the “Black Box” of AI to Tackle National Security Concerns

Of wolves and huskies, cats and dogs—and nuclear nonproliferation

Explainable AI illustration

Explainable AI is enabling new ways to support long-term missions in national security, from mitigating biological and chemical threats to detecting and monitoring nuclear explosions around the globe. Understanding and explaining how an AI system parses complex information to make decisions is central to the process.

(Composite image by Timothy Holland | Pacific Northwest National Laboratory)

(Second in a series of four articles about explainable AI at PNNL; read the first article here)

For all the promise artificial intelligence (AI) holds for addressing serious issues, discussion of the topic often starts with talk about animals. Cats and dogs are the most popular, perhaps because nearly everyone knows them, which makes them an easy entry point into heady discussions about neural networks, natural language processing, and the nature of intelligence.

At Pacific Northwest National Laboratory (PNNL), Tom Grimes begins the conversation about explainable AI by talking about wolves and huskies.

A few years back, scientists built a program to sort pictures of huskies from pictures of wolves, and at first it seemed to distinguish the two reliably. But when the scientists tested it on a fresh batch of photos, the program failed miserably. Why?

It turned out that most of the photos of huskies had been taken indoors and most of the photos of wolves had been taken outdoors. The program had not learned to tell huskies from wolves; it had learned to sort images by their backgrounds, separating indoor scenes from outdoor ones.
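A standard way to catch that kind of shortcut is an explainability probe such as occlusion sensitivity: hide one patch of the image at a time and watch how the classifier's score moves. The sketch below is a minimal, self-contained illustration with a deliberately background-biased toy scorer and synthetic data; the image, the scorer, and the patch size are all invented for the example, not drawn from the husky study or from PNNL's tools.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 32x32 "photo": a bright 8x8 "animal" patch on a darker "background".
image = rng.normal(0.2, 0.05, size=(32, 32))
image[8:16, 8:16] = rng.normal(0.8, 0.05, size=(8, 8))

def biased_score(img):
    """Stand-in for a flawed classifier: it scores only the average
    background brightness (the indoors-vs-outdoors shortcut) and
    ignores the animal patch entirely."""
    mask = np.ones_like(img, dtype=bool)
    mask[8:16, 8:16] = False
    return img[mask].mean()

def occlusion_map(img, score_fn, patch=8, fill=0.5):
    """Hide one patch at a time and record how much the score changes."""
    base = score_fn(img)
    heat = np.zeros((img.shape[0] // patch, img.shape[1] // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = img.copy()
            occluded[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = fill
            heat[i, j] = abs(score_fn(occluded) - base)
    return heat

print(np.round(occlusion_map(image, biased_score), 3))
# The cell covering the "animal" (rows/cols 8-15) leaves the score untouched,
# while background cells move it -- a sign the model is keying on the scenery.
```

Run against a real model and real photos, a map like this makes the husky-versus-wolf failure visible at a glance: the attribution piles up on the scenery rather than the animal.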

That’s the type of mistake that cannot be allowed to happen when it comes to national security.

Exploring the basis of AI insights

When the discussion involves the detection of nuclear explosions or the movement of materials that endanger the nation’s security, scientists, policy makers, and others demand to know the basis of AI-based insights. Explainable AI—understanding and explaining the reasoning behind AI decisions—is a growing priority for national security specialists. The U.S. Department of Energy’s National Nuclear Security Administration (NNSA) and its Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D) are supporting a team of PNNL researchers that is developing next-generation AI expertise in this critical space.

Whether in national security, finance, or health, an unexplained decision from a cold silicon box is no more palatable than an unexplained decision from a closed group of executives. The deeper understanding that explainable AI provides is what moves projects forward; it is, for instance, what let researchers pinpoint the problem in the huskies-and-wolves example above.

"Oftentimes, we can’t say exactly why a system makes a certain decision, though we do know that it’s been correct 99 times out of 100. We want to know exactly why it has made all those correct decisions—what were the factors and how they were weighted? That understanding makes the decisions much more trustworthy,” said Mark Greaves, a PNNL scientist involved in the Laboratory’s explainable AI efforts.

Think of PNNL scientist Emily Mace, who spends her days combing through thousands of signals, searching for the critical few that could indicate potential nuclear activity. Hard-wired into her neurons—in a thought process hard to replicate artificially—are the features she uses to prioritize which signals to inspect more closely. Her knowledge of traits like pulse shape, timing, duration, and place of origin equips her to decide whether signals come from cosmic rays, stray electrical noise, radon, or an unknown radioactive source. (Mace has just undertaken a three-year project, also funded by DNN R&D, to enhance that work.)
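Part of what makes that human reasoning explainable is that it runs on named, physically meaningful features, and simple machine-learning models can be built the same way. The sketch below is purely illustrative: the feature names, thresholds, labels, and the use of scikit-learn are assumptions for the example, not a description of Mace's or PNNL's actual analysis. A shallow decision tree trained on such features can print out the thresholds behind every call, one modest form of explainability.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 400

# Hypothetical stand-ins for traits like pulse shape, duration, and origin:
# column 0 = rise time (us), column 1 = pulse duration (us), column 2 = rate (Hz).
X = rng.normal(loc=[2.0, 50.0, 5.0], scale=[0.5, 10.0, 2.0], size=(n, 3))

# Synthetic labels following made-up rules, so the learned tree has real
# structure to recover: 0 = electrical noise, 1 = radon-like, 2 = candidate.
y = np.where(X[:, 1] < 45, 0, np.where(X[:, 0] > 2.2, 1, 2))

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every decision traces back to thresholds on named features, much as an
# analyst would explain why a pulse was dismissed as noise or flagged.
print(export_text(clf, feature_names=["rise_time_us", "duration_us", "rate_hz"]))
```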

Such deep analysis is a routine part of the national security mission at PNNL: the events of interest may be rare, but the understanding behind them runs deep.

“In the national security space, we often find ourselves in situations where we don’t have the data we’d like to have to solve the problem,” said Grimes, who is working with colleagues Greaves, Luke Erikson, and Kate Gibb. “Instead of relying on techniques developed for situations where data are abundant and the training environment is a good match for the test environment, we need to adjust our network designs to entice the network to ignore the background and latch onto the signal. Similar to the wolf and husky example, we have to make sure the network is using the right aspects of the data to make its decisions. Just as important, we then need to verify that it has done so. This is where explainability tools are invaluable.”

"You want to trust this network; you need to trust this network,” said Grimes. “It would be ideal if you could train networks in such a manner that they always predicted correctly and always used the correct criteria to make sound decisions. Unfortunately, we can’t assume that. We need to understand exactly how it arrives at decisions."

Grimes’ work on developing capabilities to explain AI has led to new solutions to a persistent challenge in national security: how to draw reliable conclusions from thin and incomplete data.


(Coming next: Sidestepping the thin-data problem)

###

About PNNL

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://www.energy.gov/science/. For more information on PNNL, visit PNNL's News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.