Probabilistic Model Discovery with Uncertainty Quantification

PI: David Barajas-Solano
Traditional deep learning methods for model discovery lead to high-dimensional representations that are difficult to interpret and sensitive to measurement errors. In addition, the learned models often lack predictive capacity or clear physical meaning. We aim to enable improved understanding of physical systems through the discovery of dynamical models from partial and noisy observations in time.
Our approach is to learn the simplest mathematical model of the dynamics that accurately predicts the observed data. We will learn these representations via hierarchical Bayesian sparse regression, selecting candidate dynamical equations from libraries of possible terms. We will develop probabilistic measures of the credibility of learned models that are difficult to compute with state-of-the-art learning systems, yet easy to interpret in terms of intuitive notions of probability. Our approach, based on parsimonious representations, is interpretable by design and robust to incomplete and noisy data.
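The library-based sparse regression idea can be illustrated with a minimal sketch. The snippet below uses a simple sequentially thresholded least-squares scheme on a hypothetical 1-D system, not the project's hierarchical Bayesian formulation; the library of candidate terms, the synthetic dynamics, and the threshold are all illustrative assumptions.

```python
import numpy as np

def build_library(x):
    """Library of candidate terms for a 1-D state: [1, x, x^2, x^3]."""
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def sparse_regression(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: zero out small
    coefficients, then refit the remaining active terms."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        active = ~small
        if active.any():
            xi[active] = np.linalg.lstsq(theta[:, active], dxdt, rcond=None)[0]
    return xi

# Synthetic, noisy observations of dx/dt = 2x - 0.5x^3 (an assumed toy system).
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
dxdt = 2.0 * x - 0.5 * x**3 + 0.01 * rng.standard_normal(x.size)

xi = sparse_regression(build_library(x), dxdt)
print(xi)  # should recover coefficients close to [0, 2, 0, -0.5]
```

The Bayesian version of this idea replaces the hard threshold with sparsity-promoting priors on the coefficients, so the output is a posterior distribution over models rather than a single coefficient vector, which is what enables the credibility measures described above.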
This model discovery capability will be impactful for systems subject to partially known or unknown dynamics, such as the power grid and weather. By discovering unknown models, modelers will be able to predict these systems more accurately and gain understanding of the underlying physical processes that drive them.