Paper ID sheet UCL-INMA-2005.901

Title

Convergence of the Iterates of Descent Methods for Analytic Cost Functions

Authors
P.-A. Absil, R. Mahony, B. Andrews

Abstract
In the early eighties Łojasiewicz proved that a bounded solution of a gradient flow for an analytic cost function converges to a well-defined limit point. In this paper, we show that the iterates of numerical descent algorithms for an analytic cost function share this convergence property provided they satisfy certain natural descent conditions. The results obtained apply to a broad class of optimization schemes and strengthen classical "weak convergence" results for descent methods to "strong limit-point convergence" for a large class of cost functions of practical interest. The result does not require that the cost have isolated critical points, makes no assumptions on the convexity of the cost, and imposes no non-degeneracy conditions on the Hessian of the cost at critical points.
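For orientation, the key tool behind this result is the Łojasiewicz gradient inequality. One common formulation (exponent conventions vary in the literature; the constants c and θ and the neighborhood U below are generic symbols, not the paper's notation) is: if f is real analytic near a critical point x*, there exist c > 0 and θ ∈ (0, 1/2] such that

  \[
    |f(x) - f(x^*)|^{1-\theta} \le c\,\|\nabla f(x)\|
    \qquad \text{for all } x \in U.
  \]

The natural descent conditions mentioned above are, roughly, sufficient-decrease requirements of the form

  \[
    f(x_k) - f(x_{k+1}) \ge \sigma\,\|\nabla f(x_k)\|\,\|x_{k+1} - x_k\|
  \]

for some fixed σ > 0; see the paper for the precise statements.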
Key words
gradient flows; descent methods; real analytic functions; Łojasiewicz gradient inequality; single limit-point convergence; line search; trust region; Mexican hat
Status
SIAM Journal on Optimization, Vol. 16, No. 2, pp. 531-547, 2005
Download
BibTeX entry

  @ARTICLE{AbsMahAnd2005,
    author = "Absil, P.-A. and Mahony, R. and Andrews, B.",
    title = "Convergence of the Iterates of Descent Methods for Analytic Cost Functions",
    journal = "SIAM J. Optim.",
    year = 2005,
    volume = 16,
    number = 2,
    pages = "531--547",
  }
  
Further information
It is natural to combine the results of Section 2 with the property that the limit points of (sufficiently smooth) gradient flows are stationary points; see, for example, Morris W. Hirsch, Stephen Smale, Robert L. Devaney, Differential Equations, Dynamical Systems, and an Introduction to Chaos, 2004, p. 206. For the discrete-time case, the classical weak convergence results alluded to on page 537 include those in Section 1.2.2 of Dimitri P. Bertsekas, Nonlinear Programming, second edition, 1999.
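As a toy illustration of the difference between weak and strong limit-point convergence (this sketch is not taken from the paper; the cost f, the helper names, and the backtracking parameters are illustrative choices), the following Python script runs steepest descent with an Armijo-type line search on the analytic cost f(x) = (||x||^2 - 1)^2, whose critical points are not isolated: every point of the unit circle is a global minimizer. The iterates nonetheless settle at a single point of that circle.

  import numpy as np

  def f(x):
      # Analytic cost whose minimizers are not isolated: every point
      # of the unit circle is a global minimizer.
      return (x @ x - 1.0) ** 2

  def grad_f(x):
      return 4.0 * (x @ x - 1.0) * x

  def steepest_descent(x, tol=1e-12, max_iter=10000):
      # Steepest descent with backtracking to enforce sufficient decrease,
      # one standard way to satisfy descent conditions of the kind studied
      # in the paper (illustrative parameters, not the paper's algorithm).
      for _ in range(max_iter):
          g = grad_f(x)
          if np.linalg.norm(g) < tol:
              break
          t, fx = 1.0, f(x)
          while f(x - t * g) > fx - 1e-4 * t * (g @ g):  # Armijo test
              t *= 0.5
          x = x - t * g
      return x

  x_lim = steepest_descent(np.array([0.3, 0.1]))
  print(x_lim, np.linalg.norm(x_lim))  # a single point on the unit circle

For this radially symmetric cost the single-point convergence can also be seen directly, since the iterates stay on one ray through the origin; the theorems in the paper cover general analytic costs. The "Mexican hat" keyword above refers to a classical merely smooth (non-analytic) cost for which this single limit-point behavior fails.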