Journal Article

Uncertainty Quantification for Neural Network Potential Foundation Models

Abstract

If neural network potentials (NNPs) are to gain widespread use, researchers must be able to trust model outputs. However, the black-box nature of neural networks and their inherent stochasticity are often deterrents, especially when placing trust in foundation models trained over broad swaths of chemical space. Uncertainty information provided at the time of prediction will help reduce aversion to NNPs and allow propagation of uncertainties to extracted properties. In this work, we detail two uncertainty quantification (UQ) methods that provide complementary information. Readout ensembling, in which only the readout layers of an ensemble of foundation models are finetuned, provides information about model uncertainty. Amending the final readout layer to predict upper and lower quantiles replaces point predictions with distributional predictions, which provide information about uncertainty within the underlying training data. We demonstrate our approach with the MACE-MP-0 model, applying UQ to both the foundation model and a series of finetuned models. The uncertainties produced by the ensemble and quantile methods are demonstrated to be distinct measures by which the quality of the NNP output can be judged.
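The two UQ methods in the abstract can be illustrated in miniature. The sketch below is not the authors' MACE-based implementation; it is a minimal NumPy illustration under simplifying assumptions (constant predictors standing in for readout heads, synthetic ensemble outputs). The ensemble spread across heads serves as a model-uncertainty signal, while minimizing the quantile (pinball) loss yields upper and lower quantile predictions that bracket the spread of the training targets.

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Quantile (pinball) loss: an asymmetric penalty that is minimized
    when y_hat equals the tau-quantile of the target distribution."""
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def fit_constant_quantile(y, tau, lr=0.01, steps=5000):
    """Fit a single constant prediction q by subgradient descent on the
    pinball loss; q converges toward the empirical tau-quantile of y."""
    q = 0.0
    for _ in range(steps):
        # Subgradient of the pinball loss w.r.t. the prediction q:
        # -tau where y > q, (1 - tau) where y <= q.
        grad = np.mean(np.where(y > q, -tau, 1.0 - tau))
        q -= lr * grad
    return q

rng = np.random.default_rng(0)

# Synthetic "training targets" standing in for per-structure energies.
y = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Quantile readout: lower/upper quantile predictions instead of a point.
lower = fit_constant_quantile(y, 0.1)
upper = fit_constant_quantile(y, 0.9)

# Readout ensembling: K heads give K predictions per input; the spread
# (standard deviation) across heads is the model-uncertainty estimate.
# Here the per-head offsets are synthetic stand-ins for finetuned heads.
K = 5
ensemble_preds = y + rng.normal(scale=0.05, size=(K, y.size))
model_uncertainty = ensemble_preds.std(axis=0)
```

The key design point mirrored here is that the two signals are distinct: the ensemble spread reflects disagreement between models, while the quantile interval reflects spread in the data itself, so they can disagree on which predictions to distrust.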

Published: April 30, 2025

Citation

Bilbrey, J. A., J. S. Firoz, M. Lee, and S. Choudhury. 2025. "Uncertainty Quantification for Neural Network Potential Foundation Models." npj Computational Materials 11(1): 109. PNNL-SA-206884. doi:10.1038/s41524-025-01572-y