Prof. Laurent Jacques
ICTEAM/ELEN, Image and Signal Processing Group (ISPGroup)
Université catholique de Louvain (UCL)

Schedule and place

This course will take place on March 7, 9, 10, 14, 16 and 18, 2016 at UCL, Bâtiment Euler, 4 av. Georges Lemaître, 1348 Louvain-la-Neuve (room A002, ground floor).

There will be 6 lectures, each scheduled from 9h30 to 12h30.

Travel instructions are available here.


For about ten years, we have observed a radical rethinking of the way “informative” signals can be acquired. In short, a new paradigm, compressed sensing (CS) theory, has been shown to generalize the Shannon-Nyquist theorem: any “signal” can be sampled at a rate depending on its intrinsic “complexity” rather than on its frequency band-limit.

The reason for this success lies in two basic ingredients. First, the sampling must be generalized to any linear, and generally random, observation process, rather than being performed only in a point- or pixel-wise way. Second, the method that reconstructs the signal from these linear observations must be formulated as the (non-linear) resolution of an inverse problem regularized by our knowledge of the low-complexity signal model (or prior).
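As a small illustration of these two ingredients (not part of the course material), the sketch below senses a sparse vector with a random Gaussian matrix and reconstructs it with ISTA, a basic proximal-gradient method for the ℓ1-regularized inverse problem; all names and parameter values are illustrative choices, not a prescribed setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 128, 60, 5                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random linear observation process
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0       # k-sparse "signal"
y = A @ x                                      # m < n compressive observations

# ISTA: proximal-gradient minimization of 0.5*||y - A z||^2 + lam*||z||_1,
# i.e., an inverse problem regularized by the sparsity prior.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(2000):
    w = z - (A.T @ (A @ z - y)) / L        # gradient step on the data term
    z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0)  # soft-thresholding

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
```

Even though the system y = Ax is underdetermined (60 equations, 128 unknowns), the sparsity prior singles out the right solution up to a small ℓ1-induced bias.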

For instance, natural images have been proved to be “sparse” in collections of certain elementary wave-functions: they can be approximated by linearly combining a few basic elements (e.g., wavelets). Their standard pixelization, which often reaches several million pixels in common end-user cameras, can therefore be replaced by a significantly smaller number of observations, representing only a small percentage of the initial number of pixels! This reduction does not prevent us from reconstructing the observed image: the sparsity prior, that is, our low-complexity signal model, allows us to recover or approximate this image among the infinite set of those matching the same observations.
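The same sparse-approximation idea is easy to demonstrate in 1-D with the Fourier basis (a hypothetical toy example, with illustrative sizes): a signal made of a few sinusoids is fully described by a handful of its Fourier coefficients.

```python
import numpy as np

# A signal that is sparse in the Fourier basis: a sum of 3 sinusoids.
n = 256
t = np.arange(n) / n
x = (np.sin(2 * np.pi * 5 * t)
     + 0.5 * np.sin(2 * np.pi * 12 * t)
     + 0.25 * np.sin(2 * np.pi * 40 * t))

X = np.fft.fft(x) / np.sqrt(n)             # orthonormal Fourier coefficients
k = 6                                      # 3 real sinusoids = 6 conjugate bins
idx = np.argsort(np.abs(X))[::-1][:k]      # keep only the k largest coefficients
Xk = np.zeros_like(X)
Xk[idx] = X[idx]
xk = np.real(np.fft.ifft(Xk) * np.sqrt(n))

rel_err = np.linalg.norm(x - xk) / np.linalg.norm(x)
```

Keeping 6 of the 256 coefficients reconstructs the signal essentially exactly; natural images behave similarly in wavelet bases, though only approximately.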

Interestingly, this compressive sampling principle extends to the sensing of 1-D signals or of high-dimensional data (e.g., video, biomedical volumes, astronomical images), as soon as a “computable” low-complexity prior exists for the reconstruction to work properly.

This doctoral course will introduce the intuitive and mathematical concepts that make CS theory possible. We will start by reviewing the foundations of low-complexity signal models, which lie at the frontier of approximation theory and computational harmonic analysis. The Fourier transform, the wavelet and curvelet representations, and low-rank representations of matrix data will thus be explained. In particular, we will see how these methods reach different approximation powers as the representation complexity grows.

Then, we will focus on the concept of incoherent sensing, that is, how certain linear observations preserve the intrinsic structure of low-complexity signals. This part will introduce important aspects of high-dimensional statistics related to random matrix theory and measure concentration. Key results, such as the Johnson-Lindenstrauss lemma, and their connection with CS theory will be explained.
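The Johnson-Lindenstrauss phenomenon can be observed numerically in a few lines (a minimal sketch with arbitrarily chosen dimensions): a random Gaussian projection from 10000 to 400 dimensions nearly preserves all pairwise distances of a point cloud.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, m, N = 10000, 400, 50                   # ambient dim, projected dim, #points
X = rng.standard_normal((N, n))            # a cloud of N random points

# Random Gaussian projection, scaled so norms are preserved in expectation.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Y = X @ Phi.T

# Distortion of every pairwise distance after projection.
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(N), 2)]
max_distortion = max(abs(r - 1) for r in ratios)
```

By measure concentration, every ratio stays within a few percent of 1 even though the dimension dropped by a factor of 25, and the required m grows only logarithmically in N.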

A part of the course will describe the actual optimization programs and greedy algorithms that allow us to efficiently recover signals from their compressive observations, that is, by solving the implicit inverse problem posed by compressive signal sampling. We will review how the convergence, stability and robustness of these methods can be proved from the properties of both the sensing model and the signal prior. We will also briefly explain how the non-smooth convex optimization problems induced by CS theory can be solved by proximal algorithms.
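As a taste of the greedy family, here is a compact Orthogonal Matching Pursuit (OMP) sketch; the sizes and the random sensing matrix are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 40, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x[supp] = rng.standard_normal(k)               # k-sparse ground truth
y = A @ x

# OMP: greedily pick the column most correlated with the residual,
# then refit by least squares on the selected support.
S, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))
    S.append(j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

With 40 random measurements of a 4-sparse, 128-dimensional vector, OMP typically identifies the exact support in k iterations and the least-squares refit makes the recovery exact.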

We will also discover the interplay of CS theory with the unavoidable quantization of any sensing procedure, that is, with the standard analog-to-digital data conversion performed in actual sensing devices in order to efficiently transmit, store or process the recorded samples. This interaction leads to interesting mathematical questions in high-dimensional geometry, for instance the study of certain embedding properties of (1-bit) quantized random projections.
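The extreme 1-bit case already hints at these embedding questions: keeping only the signs of Gaussian random projections, the normalized Hamming distance between two sign patterns concentrates around the angle between the original vectors. A minimal numerical check, with arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 50, 20000                           # ambient dim, number of 1-bit measurements
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

Phi = rng.standard_normal((m, n))
bu, bv = np.sign(Phi @ u), np.sign(Phi @ v)    # 1-bit quantized projections

# Fraction of differing signs vs. (normalized) angle between u and v:
# E[hamming] = angle/pi, with O(1/sqrt(m)) fluctuations.
hamming = np.mean(bu != bv)
angle = np.arccos(np.clip(u @ v, -1.0, 1.0)) / np.pi
err = abs(hamming - angle)
```

This is a quasi-isometric embedding of directions into the Hamming cube: amplitudes are lost (only signs survive), but angular geometry is approximately preserved.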

Finally, we will provide an overview of recent applications of CS theory. In particular, we will see how this theory contributes to the field of “computational sensing” initiated by Fenimore in the 1970s with the design of coded-aperture X-ray imagers, and how the same philosophy extends to recent acquisition improvements in astronomy, biomedical imaging, optics, computer vision and sensor design.

List of topics

  • Sparse signal models: from Fourier to *-lets transforms (wavelets, curvelets, ...)
  • Beyond sparsity: grouped sparsity and low-rank priors
  • Random projections and Johnson-Lindenstrauss lemma
  • General overview of compressed sensing theory (incoherent sensing)
  • Signal recovery in compressed sensing: optimization and greedy methods
  • Quantizing compressed sensing and quasi-isometric embeddings
  • Compressed sensing applications


Prerequisites
Basic understanding of matrix algebra, Fourier transform, convex optimization and statistics.

Course material

The course will be based on recent articles and book chapters related to the topics listed above. An interesting mathematical presentation of the Compressed Sensing theory is available in the following book:

  • Simon Foucart, Holger Rauhut, “A Mathematical Introduction to Compressive Sensing”, ISBN 978-0-8176-4948-7

as well as in these freely available documents:

For a comprehensive overview of many research on compressed sensing and related fields, these websites are also very useful:

Slides of the course

Introduction - Part 1 - Part 2 - Part 3 - Part 4


Evaluation
To be discussed with the students at the beginning of the course.