Lecturer

Dominique Orban
GERAD and
Department of Mathematics and Industrial Engineering
École Polytechnique
Montréal, Québec, Canada
Web page
Contact

Schedule and place

This course will take place on June 1, 3, 5, 8, 10, 12, 2015 at UCL, Bâtiment Euler, 4 av. Georges Lemaître, 1348 Louvain-la-Neuve (room A002, ground floor).

Schedule:

  • Lectures: June 1, 3, 5, 8, 10, 12, 2015, 9:30-12:30 (6 lectures)
  • Practice sessions (4 sessions):
    • Monday June 1, 2015, 13:30-15:30
    • Wednesday June 3, 2015, 13:30-15:30
    • Monday June 8, 2015, 13:30-15:30
    • Wednesday June 10, 2015, 13:30-15:30

Travel instructions are available here.

Description

Gradient-based methods for smooth optimization are typically variants of Newton's method and need to compute a step at each iteration. With the exception of first-order methods, the step almost invariably requires the solution of a large, sparse linear system. The latter is often regarded as representing the optimality conditions of a well-chosen optimization subproblem. Whether this interpretation is employed or not, most practical methods rely on the factorization of matrices involving the first, and sometimes second, derivatives of the objective and constraints. Unfortunately, in many large-scale applications, assembling those derivatives is either not feasible or not reasonable. Fortunately, the systems to be solved typically have a saddle-point structure. In our opinion, solid matrix-free optimization methods are currently lacking, and this is an area from which numerous fields of application stand to benefit.
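To make the saddle-point structure concrete: a Newton or SQP step on an equality-constrained problem solves a KKT system with coefficient matrix K = [[H, Aᵀ], [A, 0]], where H is a Hessian (approximation) and A a constraint Jacobian. The following is a minimal matrix-free sketch, not course material: the sizes, the random data, and the choice of SciPy's MINRES are our own illustrative assumptions. The point is that only matrix-vector products with K are needed, never K itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

# Small dense data for illustration only; in practice H and A would be
# large, sparse, or available only through matrix-vector products.
n, m = 5, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)      # symmetric positive definite Hessian block
A = rng.standard_normal((m, n))  # constraint Jacobian, full row rank

def kkt_matvec(v):
    """Apply K = [[H, A.T], [A, 0]] to v without assembling K."""
    x, y = v[:n], v[n:]
    return np.concatenate([H @ x + A.T @ y, A @ x])

K = LinearOperator((n + m, n + m), matvec=kkt_matvec, dtype=np.float64)

# Right-hand side of a typical step computation: [-gradient; -residual].
rhs = np.concatenate([rng.standard_normal(n), np.zeros(m)])
sol, info = minres(K, rhs)       # MINRES handles symmetric indefinite K
```

MINRES is one natural choice here because K is symmetric but indefinite, so plain conjugate gradients do not apply.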

In this short course, we:

  • Review families of iterative methods that are appropriate for large saddle-point systems with and without regularization.
  • Examine how an appropriate regularization of the overarching optimization problem naturally yields regularized linear systems.
  • Describe how such iterative methods may be implemented efficiently.
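As a small numerical illustration of the second point above (our own sketch, with illustrative sizes and parameters, not course material): adding primal and dual regularization ρ, δ > 0 turns the KKT matrix into [[H + ρI, Aᵀ], [A, -δI]], which is symmetric quasi-definite. Such matrices have inertia (n, m, 0) and admit an LDLᵀ factorization, which we can check with SciPy's dense `ldl`:

```python
import numpy as np
from scipy.linalg import ldl

# Illustrative data; sizes and regularization parameters are our own choices.
n, m = 5, 2
rho, delta = 1e-2, 1e-2
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))
H = B @ B.T                      # symmetric positive semidefinite
A = rng.standard_normal((m, n))

# Regularized KKT matrix: symmetric quasi-definite whenever rho, delta > 0,
# i.e., its (1,1) block is positive definite and its (2,2) block is
# negative definite.
K = np.block([[H + rho * np.eye(n), A.T],
              [A, -delta * np.eye(m)]])

L, D, perm = ldl(K, lower=True)  # K = L @ D @ L.T
# Sylvester's law of inertia: D inherits the inertia (n, m, 0) of K.
eigs = np.linalg.eigvalsh(D)
```

Counting the signs of `eigs` recovers n positive and m negative eigenvalues, confirming that regularization yields a nonsingular, quasi-definite system.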

We recommend that everyone come equipped with a laptop computer. The lectures will be complemented by practice sessions, and software will be provided in three programming languages.

List of topics

  • Regularization methods for optimization and linear systems
  • Krylov-type methods for linear systems
  • Saddle-point and quasi-definite operators
  • Implementation considerations

Prerequisites

  • Optimization and linear algebra at an undergraduate level (Newton's method, penalty method, augmented Lagrangian, SQP method, interior-point method, eigenvalue decomposition, singular value decomposition)
  • A keen interest in discovering the interplay between optimization and linear algebra
  • Proficiency in at least one programming language used in scientific computing, either interpreted (e.g., Matlab, Python, Julia, Octave, Scilab) or compiled (C or Fortran), is necessary. The lecturer can offer assistance to varying degrees with all of the above, but not with C++ or Java.

Course material

Course notes

Instructions for the practice sessions

List of Modules and Packages to Install
Reading MatrixMarket Files
Using L-BFGS Linear Operators
Setting up a Sparse Cholesky or Sparse LDL Linear Operator

Evaluation

To be discussed with the students at the beginning of the course.