November 15, 2019
Journal Article

Enforcing constraints for interpolation and extrapolation in Generative Adversarial Networks

Abstract

Generative Adversarial Networks (GANs) are becoming popular choices for unsupervised learning. At the same time, there is a concerted effort in the machine learning community to expand the range of tasks to which learning can be applied, as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN, both for interpolation and extrapolation. The two cases need to be treated differently. For interpolation, the incorporation of constraints is built into the training of the GAN. This incorporation respects the primary game-theoretic setup of a GAN, so it can be combined with existing algorithms; however, it can exacerbate the instability during training that is well known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. First, we employ a modified interpolation training process that uses noisy data but does *not* necessarily enforce the constraints during training. Second, the resulting modified interpolator is used for extrapolation, where the constraints are enforced after each step through projection onto the space of constraints.
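The projection step described for extrapolation can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes linear equality constraints A x = b (so the orthogonal projection has a closed form) and a hypothetical `step_fn` standing in for one step of the trained modified interpolator.

```python
import numpy as np

def project_onto_constraints(x, A, b):
    """Orthogonal projection of x onto the affine set {y : A y = b}.

    Uses the closed-form correction A^T (A A^T)^{-1} (A x - b),
    valid when A has full row rank.
    """
    correction = A.T @ np.linalg.solve(A @ A.T, A @ x - b)
    return x - correction

def constrained_extrapolation(x0, step_fn, A, b, n_steps):
    """Advance the state with step_fn, projecting onto the constraint
    set after every step (step_fn is a hypothetical stand-in for the
    trained extrapolator)."""
    x = project_onto_constraints(x0, A, b)
    trajectory = [x]
    for _ in range(n_steps):
        x = step_fn(x)                          # one extrapolation step
        x = project_onto_constraints(x, A, b)   # enforce A x = b
        trajectory.append(x)
    return np.stack(trajectory)
```

Because the projection is applied after every step, each state in the trajectory satisfies the constraints exactly, even though `step_fn` itself is unconstrained.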

Revised: August 27, 2020 | Published: November 15, 2019

Citation

Stinis, P., T.J. Hagge, A.M. Tartakovsky, and E.H. Yeung. 2019. Enforcing constraints for interpolation and extrapolation in Generative Adversarial Networks. Journal of Computational Physics 397. PNNL-SA-133233. doi:10.1016/j.jcp.2019.07.042