May 15, 2025
Conference Paper
Metric Learning to Accelerate Convergence of Operator Splitting Methods
Abstract
Recent developments in machine learning have led to promising advances in accelerating the solution of constrained optimization problems. Increasing demand for real-time decision-making in applications such as artificial intelligence and optimal control has led to a variety of proposed strategies for learning to produce fast solutions to optimization problems. For example, recent works have shown that it is possible to accelerate the convergence of optimization algorithms by learning to select their parameters, such as gradient descent stepsizes. This work proposes a new approach in which the underlying metric spaces of proximal operator splitting algorithms are learned to maximize convergence rate. While prior works in optimization theory have derived optimal metrics in simple cases, no such result exists for many practical problem forms, including general Quadratic Programming (QP). This paper shows how differentiable optimization can enable the end-to-end learning of proximal metrics, enhancing the convergence of proximal algorithms for QP problems beyond what is possible based on known theory. Additionally, the results illustrate a strong connection between the learned proximal metrics and the active constraints at the optima, leading to an interpretation in which the predicted proximal metrics can be viewed as a form of active-set prediction.
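
To make the idea concrete, here is a minimal self-contained sketch of metric learning via unrolled differentiation; it is an illustration under simplifying assumptions, not the paper's method. It learns a positive diagonal metric for a metric-scaled projected-gradient (forward-backward) solver on a family of random nonnegativity-constrained QPs by backpropagating through a fixed number of unrolled iterations. The QP family, the diagonal softplus parameterization, and the objective-value training loss are all illustrative choices; the paper targets more general proximal operator splitting methods and metrics.

```python
# Illustrative sketch (not the paper's method): learn a diagonal proximal
# metric D = diag(softplus(theta)) by differentiating through K unrolled
# iterations of a metric-scaled projected-gradient method on random QPs:
#     min_x 0.5 x^T Q x + p^T x   s.t.  x >= 0.
import torch

torch.manual_seed(0)
n, batch, K = 10, 64, 30          # QP dimension, instances per batch, steps

# Fixed ill-conditioned quadratic term; linear terms p sampled per batch.
A = torch.randn(n, n)
Q = A @ A.T / n + 0.1 * torch.eye(n)   # symmetric positive definite

# Learnable metric parameters; small initialization keeps steps stable.
theta = torch.full((n,), -3.0, requires_grad=True)
opt = torch.optim.Adam([theta], lr=1e-2)

def unrolled_solve(p, d):
    """K metric-scaled projected-gradient steps. For x >= 0, projection is
    coordinate-wise and unchanged by a diagonal metric, so each step is
    x <- max(x - D (Q x + p), 0)."""
    x = torch.zeros(batch, n)
    for _ in range(K):
        x = torch.clamp(x - (x @ Q + p) * d, min=0.0)
    return x

for step in range(500):
    p = torch.randn(batch, n)                  # sample a batch of QP instances
    d = torch.nn.functional.softplus(theta)    # positive diagonal metric
    x_K = unrolled_solve(p, d)
    # Feasible-iterate objective after K steps: a differentiable surrogate
    # for convergence speed (an assumed loss; the paper's may differ).
    loss = (0.5 * ((x_K @ Q) * x_K).sum(dim=1) + (p * x_K).sum(dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step:4d}  objective {loss.item():.4f}")
```

Training drives the per-coordinate step sizes toward values adapted to the curvature of Q, which is the simplest instance of the metric-learning idea the abstract describes.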