Course Description
Large scale optimization for imaging. From regularized methods to learning
by Nelly Pustelnik, ENS de Lyon, France
http://perso.ens-lyon.fr/nelly.pustelnik/
Description
During the last 20 years, imaging sciences, including inverse problems, segmentation, and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms, and (ii) deep learning and stochastic optimization. This course illustrates these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and highlights how these objective functions have evolved jointly with advances in optimization.
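Schematically, most of the problems considered can be written in a generic variational form; the notation below is illustrative rather than the course's own:

    \min_{x \in \mathbb{R}^N} \; f(x) + \lambda\, g(Lx)
    % f: data-fidelity term, e.g. f(x) = \tfrac{1}{2}\|Ax - y\|_2^2 for observations y,
    % g: (possibly non-smooth) regularizer, e.g. the \ell_1 norm promoting sparsity,
    % L: linear operator (identity, wavelet transform, discrete gradient), \lambda > 0.

The two revolutions above differ mainly in how f, g, and L are chosen, or learned from data.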
Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing, and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several, possibly non-smooth, convex terms, together with the large size of the problems at hand, make standard optimization methods such as those based on subgradient descent computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they break the problem down into individual components that can be activated separately in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent extensions to nonconvex and stochastic optimization.

Behind non-smooth functions lies the concept of sparsity, which is central to contributions in inverse problems and compressed sensing. This concept will be described, along with the objective functions that rely on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has started to provide a new framework for solving imaging problems, ranging from agnostic techniques to models that combine deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.
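As a minimal illustration of these splitting ideas (a sketch, not course material), the snippet below implements the proximal gradient algorithm (ISTA) for an l1-regularized least-squares problem; the problem sizes, parameters, and function names are illustrative assumptions:

    import numpy as np

    def soft_threshold(v, t):
        # Proximity operator of t * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam, n_iter=200):
        # Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)             # gradient of the smooth data-fidelity term
            x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 term
        return x

    # Toy example: recover a sparse vector from noisy random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = ista(A, y, lam=0.1)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])

The key ingredient is the proximity operator of the non-smooth term, here the soft-thresholding map, which replaces the subgradient step that would otherwise make the method prohibitively slow.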
Place
This course will take place at UCL, EULER building, 4 Avenue Georges Lemaître, 1348 Louvain-la-Neuve (room A002, ground floor).
Travel instructions are available at: https://uclouvain.be/en/research-institutes/icteam/inma/contacts-and-access-map.html
Note that registration is free but mandatory. The number of participants is limited to 40.
Schedule:
Tuesday, September 10th, 2019:
- 09h00-10h30: First part
- 10h30-11h00: Tea/Coffee break
- 11h00-12h30: Python practical session
- 12h30-13h30: Lunch (Sandwiches/drinks provided)
- 13h30-15h00: Second part
- 15h00-15h30: Tea/Coffee break
- 15h30-17h00: Third part
Wednesday, September 11th, 2019:
- 09h00-10h30: Fourth part
- 10h30-11h00: Tea/Coffee break
- 11h00-12h30: Python practical session
- 12h30-13h30: Lunch (Sandwiches/drinks provided)
- 13h30-15h00: Fifth part
- 15h00-15h30: Tea/Coffee break
- 15h30-17h00: Python practical session
Information concerning the Python practical sessions:
The Python practical sessions make use of Jupyter notebooks. Participants must bring their own laptop with a working installation of Python and the standard scientific libraries (e.g., via the Anaconda distribution).
Please download the file below and make sure that 'test.ipynb' runs properly.
File: https://sites.uclouvain.be/ispgroup/uploads/Softwares/TestCode_NPustelnik.zip
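As a rough sanity check ahead of the sessions (the library list below is an assumption; 'test.ipynb' remains the authoritative test), the following snippet verifies that the usual scientific stack is importable:

    # Quick environment check (assumed library list; the authoritative test is test.ipynb).
    import sys
    import numpy, scipy, matplotlib
    print("Python     :", sys.version.split()[0])
    print("numpy      :", numpy.__version__)
    print("scipy      :", scipy.__version__)
    print("matplotlib :", matplotlib.__version__)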