January 30, 2025
Journal Article
Stacked networks improve physics-informed training: applications to neural networks and deep operator networks
Abstract
Physics-informed neural networks and operator networks have shown promise for effectively solving equations modeling physical systems. However, these networks can be difficult or impossible to train accurately. We present a novel multifidelity framework for stacking physics-informed neural networks and operator networks that facilitates training. We successively build a chain of networks, where the output at one step can act as a low-fidelity input for training a longer chain, gradually increasing the expressivity of the learned model. The equations imposed at each step of the iterative process can be the same or different (akin to simulated annealing). The iterative (stacking) nature of the proposed method allows us to progressively learn features of a solution that could have been hard to learn directly. Through benchmark problems including a nonlinear pendulum, the wave equation, and the viscous Burgers equation, we show how stacking can be used to improve the accuracy and reduce the required size of physics-informed neural networks and operator networks.
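To make the stacking idea concrete, the sketch below trains a two-stage chain of physics-informed networks in PyTorch on the nonlinear pendulum benchmark mentioned in the abstract. This is a minimal illustration of the general idea, not the authors' implementation: the network sizes, hyperparameters, and in particular the choice to couple stages by feeding the frozen first-stage prediction as an extra input to the second stage are all assumptions.

```python
# A minimal sketch of stacked physics-informed training on the nonlinear
# pendulum theta'' + omega^2 * sin(theta) = 0. All architecture choices,
# the coupling between stages, and hyperparameters are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn as nn

def make_mlp(in_dim, hidden=20, out_dim=1):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, out_dim),
    )

def predict(model, t, low_fid=None):
    # Feed t alone, or t together with the earlier stage's (frozen) output.
    if low_fid is None:
        return model(t)
    return model(torch.cat([t, low_fid(t)], dim=1))

def physics_loss(model, low_fid, t, omega=1.0):
    # Mean-squared ODE residual at collocation points.
    t = t.clone().requires_grad_(True)
    theta = predict(model, t, low_fid)
    dth = torch.autograd.grad(theta.sum(), t, create_graph=True)[0]
    d2th = torch.autograd.grad(dth.sum(), t, create_graph=True)[0]
    return ((d2th + omega**2 * torch.sin(theta)) ** 2).mean()

def ic_loss(model, low_fid, theta0=1.0):
    # Enforce theta(0) = theta0 and theta'(0) = 0 (illustrative values).
    t0 = torch.zeros(1, 1, requires_grad=True)
    theta = predict(model, t0, low_fid)
    dth = torch.autograd.grad(theta.sum(), t0, create_graph=True)[0]
    return ((theta - theta0) ** 2 + dth ** 2).sum()

def train_stage(model, low_fid=None, steps=5000):
    if low_fid is not None:
        # Freeze earlier stages so they act as fixed low-fidelity inputs,
        # while autograd still differentiates through them w.r.t. t.
        for p in low_fid.parameters():
            p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    t_col = torch.rand(256, 1) * 10.0  # collocation points on [0, 10]
    for _ in range(steps):
        opt.zero_grad()
        loss = physics_loss(model, low_fid, t_col) + ic_loss(model, low_fid)
        loss.backward()
        opt.step()
    return model

# Stage 1: a plain PINN. Stage 2: a new network that also sees stage 1's
# output, extending the chain as described in the abstract.
stage1 = train_stage(make_mlp(in_dim=1))
stage2 = train_stage(make_mlp(in_dim=2), low_fid=stage1)
```

Note the design choice of freezing the earlier network's parameters rather than detaching its output: the physics residual of the composite model requires derivatives of the full prediction with respect to t, including the path through the low-fidelity stage. Longer chains would repeat the second call, passing each trained stage as the low-fidelity input to the next.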