May 27, 2021
Research Highlight

Weather Radar Super Resolution with Deep Learning

Deep neural networks can significantly enhance the resolution of observations from weather radars

Photograph of white metal buildings with radar equipment on top

The newly applied neural network is much better at preserving sharp edges, small-scale features, and complex small-scale variability in radar data.

(Photo: U.S. Department of Energy Atmospheric Radiation Measurement user facility | Flickr.com)

The Science

Operational constraints limit the quality of weather radar data; this is a problem when scientists need to compare the data to higher-resolution datasets or need higher resolution for visualization and 3D reconstruction. This work demonstrates that deep convolutional neural networks, a machine learning approach, can enhance the resolution of already captured weather radar data. These networks, inspired by human visual processing, learn to combine small- and large-scale features through a series of multi-scale filters. Researchers artificially degraded radar data and trained a neural network to restore the original high-resolution data. The deep-learning approach performed substantially better than conventional methods in terms of both the accuracy and the image quality of its results. The technique has broad applications in weather radar operations and research.
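To make the training idea concrete, the sketch below shows one way such a setup could look in Python with PyTorch. It is a minimal illustration under assumptions, not the authors' code: high-resolution reflectivity fields are artificially coarsened by block-averaging, and a small placeholder network (far shallower than the multi-scale architecture used in the study) is trained to recover the original detail. The synthetic data, layer sizes, and hyperparameters are all hypothetical.

```python
# Hypothetical sketch (not the published pipeline): build low/high-resolution
# training pairs by block-averaging reflectivity scans, then train a small CNN
# to restore the original high-resolution field.
import torch
import torch.nn as nn
import torch.nn.functional as F

SCALE = 4  # resolution-enhancement factor (the study reports 4x and 8x)

def degrade(high_res, scale=SCALE):
    """Artificially coarsen a batch of radar scans by block-averaging.

    high_res: tensor of shape (batch, 1, H, W) holding reflectivity values.
    """
    return F.avg_pool2d(high_res, kernel_size=scale)

class SimpleSRNet(nn.Module):
    """A minimal super-resolution CNN: two conv layers plus learned upsampling.

    This is a stand-in for the much deeper multi-scale network in the study.
    """
    def __init__(self, scale=SCALE, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Sub-pixel (pixel-shuffle) upsampling back to the original grid.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.upsample(self.features(x))

# Training-loop sketch with synthetic stand-in data; real training would use
# archived high-resolution reflectivity scans (e.g., from KLGX).
model = SimpleSRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    high_res = torch.rand(8, 1, 64, 64) * 60.0 - 10.0  # fake dBZ values
    low_res = degrade(high_res)                         # simulated coarse scan
    prediction = model(low_res)
    loss = F.mse_loss(prediction, high_res)             # penalize lost detail
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design point this sketch illustrates is that the network never needs hand-labeled data: the "labels" are simply the original high-resolution scans, so any archive of radar observations can supply training pairs.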

The Impact

This work has many applications, from direct radar operations to research. It allows for faster, coarser radar scanning without loss of data quality, enables better and easier comparisons to high-resolution model and instrument data, and can potentially counteract the loss of data quality caused by radar beam spreading. It also enables better 3D visualizations of weather radar data. This study lays out a basis for a super-resolution technique that could potentially be applied to a wide variety of instrumentation and weather or climate model data.

Summary

The field of super resolution involves using mathematical techniques, including deep machine learning, to increase the resolution of gridded data beyond their measured resolution. Scientists typically do this with traditional interpolation methods, which estimate pixel values from nearby data by performing the same calculation everywhere without incorporating larger-scale contextual information, or by combining a network of radars with overlapping fields of view. Recently, significant progress has been made using convolutional neural networks for single-image super resolution, bypassing the need for multiple overlapping images. Conceptually, a neural network learns the relationships between large-scale cloud features and their associated sub-pixel-scale image variability, allowing it to outperform previous interpolation schemes. Researchers used a deep convolutional neural network to artificially enhance the resolution of weather radar scans. They trained the model on six months of reflectivity observations from the Langley Hill, Washington, radar (KLGX) and found that, based on objective error and perceptual quality metrics, the neural network substantially outperforms common interpolation schemes for 4× and 8× resolution increases. The new technique is particularly adept at resolving the small-scale features and storm edges that are traditionally difficult for interpolation techniques to recover.
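The comparison against interpolation can be sketched in the same hypothetical terms as the training example above. The snippet below upsamples a coarsened scan both with a conventional bicubic interpolation baseline and with a trained network, then scores each against the original field using peak signal-to-noise ratio (PSNR) as one simple objective error measure; it is not necessarily the study's exact metric set, and the `model` argument is assumed to be something like the `SimpleSRNet` from the earlier sketch.

```python
# Hypothetical evaluation sketch: compare a trained super-resolution model
# against a conventional interpolation baseline on held-out scans.
import torch
import torch.nn.functional as F

def psnr(prediction, target, data_range=70.0):
    """Peak signal-to-noise ratio in dB, assuming a ~70 dBZ reflectivity range."""
    mse = F.mse_loss(prediction, target)
    return 10.0 * torch.log10(data_range ** 2 / mse)

@torch.no_grad()
def evaluate(model, high_res, scale=4):
    """Score bicubic interpolation vs. the CNN on artificially degraded scans."""
    low_res = F.avg_pool2d(high_res, kernel_size=scale)
    # Baseline: bicubic interpolation back to the original grid.
    baseline = F.interpolate(low_res, scale_factor=scale, mode="bicubic",
                             align_corners=False)
    cnn_output = model(low_res)
    return {
        "bicubic_psnr": psnr(baseline, high_res).item(),
        "cnn_psnr": psnr(cnn_output, high_res).item(),
    }

# Example usage with the synthetic scans from the training sketch:
# print(evaluate(model, torch.rand(8, 1, 64, 64) * 60.0 - 10.0))
```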

PNNL Contact

Andrew Geiss, Pacific Northwest National Laboratory, Andrew.Geiss@pnnl.gov

Funding

U.S. Department of Energy Atmospheric Radiation Measurement Program Integrated Cloud, Land-Surface, and Aerosol System Study

Published: May 27, 2021

A. Geiss and J. C. Hardin, “Radar Super Resolution Using a Deep Convolutional Neural Network,” Journal of Atmospheric and Oceanic Technology, 37(12), 2197–2207 (2020). DOI: https://doi.org/10.1175/JTECH-D-20-0074.1