August 29, 2019
Feature

AI Cuts Response Time When the Grid Goes Down

PNNL researchers develop algorithm to assess problems in real time

[Image: AI for the grid]

Trouble on the electric grid might start with something relatively small: a downed power line, or a lightning strike at a substation. What happens next? The answer can determine the difference between an isolated problem and a regional blackout.

The U.S. grid is a technological marvel that nonetheless relies on human decision-making to run smoothly. Grid operators have developed emergency control procedures to prevent the worst, but humans in control rooms must first decide which measures to apply and when. This takes time—in some cases, too much time. In 2011, for example, a major blackout affecting about 2.7 million customers in the Southwest occurred over the span of 11 minutes, and grid operators weren't fast enough to stop it.

As extreme weather and new variables like rooftop solar make managing a modern grid even more complex, blackouts remain a problem: the average U.S. customer lost power for a total of eight hours in 2017, nearly double the 2016 figure.

To help grid operators improve response time and contain problems before they escalate, a team of researchers at Pacific Northwest National Laboratory (PNNL) is using a type of artificial intelligence called deep reinforcement learning. The key technology behind the computerized board game champion AlphaGo, deep reinforcement learning involves feeding data to a machine and teaching it, through trial and error, to spot patterns and solve problems it has never encountered, using a neural network, a set of algorithms that mimics the brain.
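To make the idea concrete, here is a minimal sketch of the trial-and-error loop at the heart of reinforcement learning. Everything in it is hypothetical: the toy GridEnv environment, its stress levels, and its rewards are invented for illustration and are not PNNL's simulator or code. A deep variant would replace the Q lookup table with a neural network.

```python
import random

class GridEnv:
    """Toy stand-in for a grid simulator (invented for illustration).
    The system sits at one of five stress levels; at each step the
    agent either sheds a block of load (action 1) or waits (action 0)."""
    def reset(self):
        self.stress = 2                  # start at moderate stress
        return self.stress

    def step(self, action):
        if action == 1:                  # shedding relieves stress...
            self.stress = max(0, self.stress - 1)
            reward = -1                  # ...but cutting service costs
        else:                            # waiting risks stress creeping up
            self.stress = min(4, self.stress + random.choice([0, 1]))
            reward = 0
        if self.stress == 4:             # collapse: regional blackout
            return self.stress, -100, True
        if self.stress == 0:             # system stabilized
            return self.stress, 10, True
        return self.stress, reward, False

# Learn the value of each (state, action) pair by trial and error.
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # step size, discount, exploration
env = GridEnv()
for episode in range(2000):
    s, done = env.reset(), False
    while not done:
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < epsilon:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        s_next, r, done = env.step(a)
        best_next = max(Q[(s_next, 0)], Q[(s_next, 1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: preferred action at each stress level.
print({s: max((0, 1), key=lambda act: Q[(s, act)]) for s in range(5)})
```

Training like this happens offline against a simulator; once trained, choosing an action is a single table lookup (or, in the deep version, one forward pass through the network), which is what makes millisecond-scale recommendations plausible.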

Using PNNL's advanced grid modeling and simulation capabilities, the researchers developed an adaptive emergency control algorithm to provide decision support to grid operators under extreme conditions. Initial study results showed the algorithm can reduce the number of affected customers during emergency events by 20 to 65 percent and reduce grid recovery time by 16 percent on average in a test environment. The team, which includes researchers Jie Tan of Google Brain and Weituo Hao of Duke University, published the details of their work in a paper, "Adaptive Power System Emergency Control using Deep Reinforcement Learning," in IEEE Transactions on Smart Grid.

Today, grid operators define emergency control procedures offline and in advance. When a problem occurs, a control room operator must consult a notebook or spreadsheet, find the predefined scenario closest to what is happening, and apply it. Because these controls are predefined, they tend to be either overly conservative or not very effective when applied to the power system as it actually exists during the event.
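As a rough sketch of that practice (the scenario table, measurements, and remedies below are all invented for this example), today's approach amounts to a nearest-match lookup:

```python
# Hypothetical playbook: (frequency dip in Hz, voltage sag in %)
# mapped to a predefined remedy, studied offline in advance.
playbook = {
    (0.2, 5.0):  "shed 5% of load in the affected zone",
    (0.5, 10.0): "shed 15% of load, trip tie line",
    (1.0, 20.0): "island the region, shed 30% of load",
}

def closest_remedy(freq_dip, volt_sag):
    """Pick the pre-studied scenario nearest to what is happening."""
    key = min(playbook, key=lambda k: (k[0] - freq_dip) ** 2
                                      + (k[1] - volt_sag) ** 2)
    return playbook[key]

# A live event rarely matches any scenario exactly, so the chosen
# remedy can be too aggressive or too weak for actual conditions.
print(closest_remedy(0.35, 8.0))  # -> "shed 15% of load, trip tie line"
```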

Instead, artificial intelligence could identify and recommend the best course of action in real time, delivering a solution that is tailored to the actual system conditions.

"In our test environment, the algorithm could generate a countermeasure in milliseconds—much faster than a human operator would come up with a solution." says Qiuhua Huang, a power system research engineer at PNNL who is leading the research together with his PNNL colleague Renke Huang. PNNL funded this work with Laboratory Directed Research and Development funds as part of the Deep Learning for Scientific Discovery Agile investment.

Deep reinforcement learning departs from conventional reinforcement learning, which stores what it learns in discrete lookup tables, by scaling better and by generalizing from patterns it has already seen to new, unforeseen problems. That makes the method particularly useful for assessing large power systems with hundreds or thousands of assets that need to be managed.
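A back-of-the-envelope sketch shows why; the numbers below are illustrative, not from the paper. A lookup table needs one row for every distinct combination of measurements, which becomes astronomical for a real system, while a neural network of fixed size maps any measurement snapshot to action values:

```python
import numpy as np

n_measurements = 300   # e.g., voltages and flows across the system
n_levels = 10          # discretization levels per measurement
n_actions = 20         # candidate emergency controls

# A conventional Q-table needs one row per distinct system state.
table_rows = n_levels ** n_measurements
print(f"table rows needed: {table_rows:.1e}")     # 1.0e+300: hopeless

# A neural network sidesteps the table: a fixed set of weights maps
# any 300-dimensional measurement vector to a value for each action.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(n_measurements, 64))        # input -> hidden
W2 = rng.normal(size=(64, n_actions))             # hidden -> action values
print("network parameters:", W1.size + W2.size)   # ~20k, not 10**300

state = rng.normal(size=n_measurements)           # one grid snapshot
hidden = np.maximum(0.0, state @ W1)              # ReLU hidden layer
q_values = hidden @ W2                            # value of each action
print("recommended action:", int(np.argmax(q_values)))
```

Because nearby snapshots share the same weights rather than separate table rows, the network can generalize to conditions it never saw during training.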

"Our method reduces uncertainties because we shorten the decision-making time," Huang says, similar to how an hour-ahead weather forecast tends to be more reliable than a day-ahead one.

Weather, in fact, is one such uncertainty on the grid. Calibrating controls more closely to weather and other conditions that change from minute to minute can help the grid operate more efficiently, both during an emergency and in the absence of one.

In future work funded by ARPA-E under the OPEN 2018 program, PNNL and project partners Google, V&R Energy, and PacifiCorp plan to expand the current algorithm to account for different uncertainties across days, hours, and minutes, and to adapt it for larger systems such as the U.S. Western Interconnection, which serves all or portions of 14 Western states. The ultimate goal is to safeguard the grid, cut costs, and reduce the number of customers left waiting for the power to come back on.

PNNL Research Team: Qiuhua Huang, Renke Huang, and Rui Fan

###

About PNNL

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL's News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.