In this talk, I will consider a saddle point problem min_x max_y L(x,y), where L is a convex-concave function of the form L(x,y) = f(x) + Φ(x,y) − h(y) with a general coupling term Φ that is not assumed to be bilinear. I will present an accelerated primal-dual (APD) algorithm, which employs a novel momentum term involving the partial gradients of the coupling function. APD can be viewed as a generalization of the method proposed by Chambolle and Pock in 2016, which can only handle bilinear Φ. Under certain smoothness assumptions on Φ, we show that the iterate sequence converges to a saddle point; and for any (x,y), we derive error bounds in terms of L(x_k,y) − L(x,y_k) for the ergodic sequence {(x_k, y_k)}.
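The abstract does not spell out the update rule; for a concrete picture, the following Python sketch shows one way an accelerated primal-dual step with a momentum term built from partial gradients of the coupling function can look for min_x max_y f(x) + Φ(x,y) − h(y). It assumes Euclidean proximal maps and constant step sizes; the names prox_f, prox_h, grad_x_phi, grad_y_phi, tau, sigma, theta are illustrative placeholders, not the speaker's exact algorithm (which also admits a backtracking variant).

import numpy as np

def apd_sketch(prox_f, prox_h, grad_x_phi, grad_y_phi,
               x0, y0, tau, sigma, theta, num_iters):
    # Generic accelerated primal-dual iteration for
    #   min_x max_y f(x) + Phi(x, y) - h(y),
    # with Euclidean proximal maps prox_f(v, step), prox_h(v, step)
    # and partial gradients of the coupling Phi (all placeholders).
    x_prev, x = x0.copy(), x0.copy()
    y_prev, y = y0.copy(), y0.copy()
    x_avg, y_avg = np.zeros_like(x0), np.zeros_like(y0)
    for k in range(1, num_iters + 1):
        # Momentum built from partial gradients of the coupling function:
        # extrapolate grad_y Phi along the last two iterates, instead of
        # extrapolating the iterates themselves.
        s = (1.0 + theta) * grad_y_phi(x, y) - theta * grad_y_phi(x_prev, y_prev)
        y_next = prox_h(y + sigma * s, sigma)                  # dual ascent step
        x_next = prox_f(x - tau * grad_x_phi(x, y_next), tau)  # primal descent step
        x_prev, y_prev, x, y = x, y, x_next, y_next
        # Ergodic (averaged) sequence, for which the stated rate bounds hold.
        x_avg += (x - x_avg) / k
        y_avg += (y - y_avg) / k
    return x_avg, y_avg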
In particular, we show an O(1/k) rate when L is merely convex in x. Furthermore, assuming Φ(x,·) is linear for each fixed x and f is strongly convex, we obtain an ergodic convergence rate of O(1/k^2); to the best of our knowledge, APD is the first single-loop method in the related literature achieving this rate when Φ is not bilinear. Finally, I will discuss a backtracking technique which does not require knowledge of the Lipschitz constants while ensuring the same convergence results.

In the latter part of this talk, I will customize these results to convex optimization problems with nonlinear functional constraints (a standard Lagrangian reformulation of this setting is sketched after the bio) and show that, using the backtracking scheme, the optimal convergence rate can be achieved even when the dual domain is unbounded. We tested our method against other state-of-the-art first-order algorithms for solving quadratically constrained quadratic problems (QCQPs): in the first set of experiments, we considered QCQPs with synthetic data, and in the second set, we focused on QCQPs with real data originating from a variant of the linear regression problem with fairness constraints arising in machine learning. This is joint work with my former PhD student Erfan Yazdandoost Hamedani.

Necdet Serhat Aybat is an Associate Professor in the Department of Industrial and Manufacturing Engineering at Pennsylvania State University. He received his Ph.D. degree in operations research from Columbia University, New York, USA, in 2011, and his M.S. and B.S. degrees in industrial engineering from Bogazici University, Istanbul, Turkey, in 2005 and 2003, respectively. Aybat's research mainly focuses on first-order methods for large-scale constrained optimization problems from diverse application areas, such as machine learning, distributed optimization, robust matrix decomposition, convex regression, and compressed sensing. His current research involves the design and analysis of randomized primal-dual algorithms for solving saddle point problems, and of decentralized methods for multi-agent systems with complicated coupling constraints. His work has been supported by NSF, ARO, and ONR research grants.
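As a generic sketch of the constrained setting mentioned above (a standard reformulation, not necessarily the talk's exact formulation): a convex problem with functional constraints fits the saddle point template with a coupling term that is linear in y,

\min_{x}\; f(x) \quad \text{s.t.} \quad g(x) \le 0
\qquad\Longleftrightarrow\qquad
\min_{x}\, \max_{y \ge 0}\; f(x) + \langle y, g(x) \rangle,

i.e., \Phi(x,y) = \langle y, g(x) \rangle with h the indicator of the nonnegative orthant; when f is strongly convex, this is exactly the linear-in-y regime with the O(1/k^2) rate. In a QCQP, each constraint would be a convex quadratic, e.g. g_i(x) = \tfrac{1}{2} x^\top A_i x + b_i^\top x + c_i with A_i \succeq 0 (assumed form for illustration).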
Seminar Link: Meeting ID: 969 4557 2901; Passcode: 335897