Schedule and place
This 15-hour course will take place in six morning sessions, on Monday October 17, Wednesday October 19, Friday October 21, Monday October 24, Friday October 28 and Monday October 31, 2016 (note there is no course on October 26). Each session is scheduled from 9:30 to 12:30 (including a 30-minute break).
The course will take place in room c.035 at CORE (Center for Operations Research and Econometrics), in Louvain-la-Neuve. CORE is located near the train station, on Voie du Roman Pays 34. Travel instructions.
Optimization is an extremely useful branch of applied mathematics, with applications in virtually all domains of science, engineering, economics and management. Moreover, there is a clear trend toward solving larger and larger models, which take more data into account, offer more accuracy and ultimately prove more useful.
Convexity is a key concept when solving large-scale continuous optimization problems: indeed, very large classes of convex optimization problems can be solved by dedicated algorithms very efficiently, both in theory and in practice.
This course will focus on algorithmic convex optimization. It will give a detailed presentation of a number of methods designed to solve specific classes of convex optimization models. For each class of methods, the lecturers will explain when and why they can be useful, provide example applications, describe their behaviour and characterize their performance. At the end of the course, the students will have a good overview of which algorithms are available to solve convex problems and a good understanding of the strengths and weaknesses of each method.
The course will be organized in six lectures:
- Lecture 1: Introduction to convex optimization (detecting convexity, classes of convex problems, examples)
- Lecture 2: Complexity of black-box optimization (lower bounds and optimal methods)
- Lecture 3: Smoothing technique (accelerating the resolution of non-smooth problems)
- Lecture 4: First-order methods: dealing with inexactness and performance estimation
- Lecture 5: Huge-scale optimization (sparsity, coordinate descent, sublinear-cost iterations)
- Lecture 6: Second-order methods (interior-point methods and self-concordance)
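As a small taste of the first-order methods covered in the course, the sketch below (an illustration, not part of the course materials) runs plain gradient descent with the classical step size 1/L on a smooth convex least-squares problem; the data matrix and the problem sizes are arbitrary choices for the example.

```python
import numpy as np

# Illustration: gradient descent on the smooth convex problem
#   min_x f(x) = 0.5 * ||A x - b||^2,
# with fixed step 1/L, where L (the Lipschitz constant of the
# gradient) is the largest eigenvalue of A^T A.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # arbitrary example data
b = rng.standard_normal(20)

L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of grad f
x = np.zeros(5)
for _ in range(500):
    grad = A.T @ (A @ x - b)            # gradient of f at x
    x -= grad / L                       # fixed step of length 1/L

x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # closed-form minimizer
print(np.linalg.norm(x - x_star))               # small: iterates approach x*
```

On this well-conditioned instance the iterates converge linearly to the least-squares solution; the course discusses how such rates degrade with conditioning and how accelerated methods improve them.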
Slides of the course and pointers to the relevant papers can be found in this shared Dropbox folder.
To be discussed with the students at the beginning of the course.