Paper ID sheet UCL-INMA-2016.09

Title

Global rates of convergence for nonconvex optimization on manifolds

Authors
Nicolas Boumal, P.-A. Absil, Coralia Cartis
Abstract
We consider the minimization of a cost function $f$ on a manifold $\mathcal{M}$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $\mathcal{M}$, both of these algorithms produce points with Riemannian gradient norm smaller than $\varepsilon$ in $\mathcal{O}(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where, in addition, the least eigenvalue of the Riemannian Hessian is larger than $-\varepsilon$, in $\mathcal{O}(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular to optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.
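To illustrate the first of the two algorithms, here is a minimal sketch (not taken from the paper) of Riemannian gradient descent on the unit sphere, a simple compact submanifold of $\mathbb{R}^n$. It assumes a user-supplied Euclidean gradient `egrad`, a known Lipschitz-type constant `L` for the pullbacks (so the step size is $1/L$), and a metric-projection retraction; the stopping test matches the approximate first-order condition in the abstract.

```python
import numpy as np

def riemannian_gradient_descent(egrad, x0, L=1.0, eps=1e-6, max_iter=10000):
    """Riemannian gradient descent on the unit sphere S^{n-1} (illustrative sketch).

    egrad(x): Euclidean gradient of the cost f at x (hypothetical user-supplied callable).
    L: assumed Lipschitz-type constant for the pullbacks of f; the step size is 1/L.
    Stops once the Riemannian gradient norm drops below eps; under the Lipschitz-type
    assumption this occurs within O(1/eps^2) iterations.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        g = egrad(x)
        rgrad = g - np.dot(g, x) * x       # project onto the tangent space at x
        if np.linalg.norm(rgrad) < eps:    # approximate first-order point reached
            return x
        y = x - (1.0 / L) * rgrad          # gradient step in the tangent space
        x = y / np.linalg.norm(y)          # metric-projection retraction back to the sphere
    return x

# Example use: leading eigenvector of a symmetric matrix A,
# i.e., minimize f(x) = -x^T A x over the unit sphere.
A = np.diag([3.0, 2.0, 1.0])
x = riemannian_gradient_descent(lambda x: -2 * A @ x, np.ones(3), L=2 * np.linalg.norm(A, 2))
```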
Key words
complexity; gradient descent; trust-region method; Riemannian optimization; optimization on manifolds
Status
IMA Journal of Numerical Analysis, Volume 39, Issue 1, January 2019, Pages 1–33
Errata
A correction to the publisher's version has been published: doi:10.1093/imanum/drz064