If you’ve ever been stuck in traffic during rush hour, you know how important traffic signal control is. Optimizing signals for varying traffic conditions is a tough job; weather and accidents disrupt the usual flow of traffic.
Autonomous vehicles present a similar challenge. A safe and reliable autonomous system would need to be able to instantaneously adapt to changing road, traffic, and weather conditions—much like a human driver would.
According to a new paper led by Pacific Northwest National Laboratory (PNNL) researcher Sai Munikoti, combining two different machine learning techniques may be the key to solving both of these challenges.
“Deep reinforcement learning is great for solving optimization problems,” said Munikoti. “And graph neural networks excel at learning classification and prediction tasks on unstructured data—like traffic flow patterns. Combining the two approaches gives the best of both worlds.”
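To make the combination concrete, here is a minimal NumPy sketch (not code from the paper; the function and variable names are illustrative) of the basic pattern: a graph neural network layer turns a graph and its node features into embeddings, and a policy scores those embeddings to pick an action, as a reinforcement learning agent would.

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing step: average neighbor features, then a linear map + ReLU."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1                   # avoid division by zero for isolated nodes
    msgs = (A @ H) / deg                # mean aggregation over neighbors
    return np.maximum(0, msgs @ W)      # ReLU nonlinearity

def embed(A, X, weights):
    """Stack GNN layers to produce one embedding vector per node."""
    H = X
    for W in weights:
        H = gnn_layer(A, H, W)
    return H

def greedy_policy(A, X, weights, theta):
    """Score each node from its embedding and pick the argmax as the 'action'."""
    H = embed(A, X, weights)
    scores = H @ theta
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
# Toy 4-node path graph 0-1-2-3 (think: a row of intersections in a road network)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                            # one-hot node features
weights = [rng.standard_normal((4, 8)) * 0.1,
           rng.standard_normal((8, 8)) * 0.1]
theta = rng.standard_normal(8) * 0.1

action, scores = greedy_policy(A, X, weights, theta)
print("chosen node:", action)
```

In a real system, the random weights here would be trained with a deep reinforcement learning objective, so that the policy's choices maximize a reward such as reduced congestion.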
Munikoti’s paper, which he co-authored with PNNL Chief Data Scientist Mahantesh Halappanavar and collaborators from Kansas State University and ETH Zürich, is the first to provide an in-depth review of the challenges and opportunities the fusion of these techniques presents.
“From climate science to drug design, many research areas can benefit from algorithms that combine these techniques,” said Munikoti.
“We have traditionally solved hard combinatorial problems through specifically designed algorithms, but they lack the ability to learn and therefore are not flexible,” said Halappanavar. “Although challenges such as correctness and transparency remain, fusing deep reinforcement learning with graph neural networks gives us opportunities to develop computationally efficient approaches for a diverse range of combinatorial problems and applications.”
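To illustrate the kind of combinatorial problem Halappanavar describes, the toy example below (a hypothetical illustration, not from the paper) frames minimum vertex cover as a sequence of decisions on a graph. A hand-coded degree heuristic plays the role of the policy here; in the fused approach, a GNN-based agent trained with deep reinforcement learning would learn that policy instead.

```python
import numpy as np

def vertex_cover_episode(A):
    """Sequentially 'act' by adding the node that covers the most uncovered edges."""
    A = A.copy()
    cover = []
    while A.any():
        node = int(np.argmax(A.sum(axis=1)))  # heuristic policy; a DRL agent would learn this
        cover.append(node)
        A[node, :] = 0                        # state transition: edges at this node
        A[:, node] = 0                        # are now covered
    return cover

# Small undirected graph given as a symmetric adjacency matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print("cover:", vertex_cover_episode(A))
```

The episode structure — observe the remaining graph, pick a node, update the state — is exactly the loop a reinforcement learning agent runs, which is why graph-structured learning and sequential decision-making fit together so naturally.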