Conference Paper

Metric Learning to Accelerate Convergence of Operator Splitting Methods

Abstract

Recent developments in machine learning have enabled promising advances in accelerating the solution of constrained optimization problems. Growing demand for real-time decision-making in applications such as artificial intelligence and optimal control has motivated a variety of strategies for learning to produce fast solutions to optimization problems. For example, recent works have shown that the convergence of optimization algorithms can be accelerated by learning to select their parameters, such as gradient descent stepsizes. This work proposes a new approach, in which the underlying metric spaces of proximal operator splitting algorithms are learned so as to maximize their convergence rate. While prior works in optimization theory have derived optimal metrics for simple cases, no such results exist for many practical problem forms, including general Quadratic Programming (QP). This paper shows how differentiable optimization can enable the end-to-end learning of proximal metrics, enhancing the convergence of proximal algorithms for QP problems beyond what known theory can guarantee. Additionally, the results illustrate a strong connection between the learned proximal metrics and the active constraints at the optima, leading to an interpretation in which the predicted proximal metrics can be viewed as a form of active set prediction.
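
To make the idea concrete, below is a minimal sketch of metric learning for a proximal splitting scheme: a metric-preconditioned projected-gradient iteration for a nonnegatively constrained QP, unrolled for a fixed number of steps so the metric can be trained end-to-end by gradient descent. The diagonal metric, its log-parameterization `log_d`, the unrolling depth `K`, and the fixed-point-residual training loss are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch

# Sketch: learn a diagonal metric D = diag(d) for the iteration
#   x_{k+1} = proj_{x >= 0}( x_k - D^{-1} (Q x_k + p) )
# applied to the QP:  min_x 0.5 x^T Q x + p^T x  s.t.  x >= 0.
# Because D is diagonal, the projection under the metric remains the
# elementwise clamp to the nonnegative orthant.

torch.manual_seed(0)
n, K = 10, 20                          # problem size, unrolled iterations

A = torch.randn(n, n)
Q = A @ A.T + 0.1 * torch.eye(n)       # positive definite quadratic term
p = torch.randn(n)

# Initialize the metric at L * I (L = largest eigenvalue of Q), i.e. the
# classical 1/L stepsize, so the untrained iteration already converges.
L = torch.linalg.eigvalsh(Q)[-1]
log_d = (torch.log(L) * torch.ones(n)).requires_grad_()
opt = torch.optim.Adam([log_d], lr=0.05)

def unrolled_solve(Q, p, log_d, K):
    d = torch.exp(log_d)               # positive diagonal metric entries
    x = torch.zeros(n)
    for _ in range(K):
        grad = Q @ x + p
        x = torch.clamp(x - grad / d, min=0.0)   # metric-scaled prox step
    return x

for step in range(200):
    x = unrolled_solve(Q, p, log_d, K)
    # fixed-point residual of the projected-gradient map as training loss:
    # it vanishes exactly at the QP's KKT point
    residual = x - torch.clamp(x - (Q @ x + p), min=0.0)
    loss = residual.norm()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"fixed-point residual after training: {loss.item():.2e}")
```

In the paper's setting, metrics are predicted across a distribution of problem instances rather than tuned for a single instance as above; the loop only illustrates how the metric parameters receive gradients through the unrolled iterations of the splitting method.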

Published: May 15, 2025

Citation

King, E., J.S. Kotary, F. Fioretto, and J. Drgona. 2024. Metric Learning to Accelerate Convergence of Operator Splitting Methods. In IEEE 63rd Conference on Decision and Control (CDC 2024), December 16-19, 2024, Milan, Italy, 1553-1560. Piscataway, New Jersey: IEEE. PNNL-SA-196845. doi:10.1109/CDC56724.2024.10886873
