Fundamentals of Supervised Learning with a Focus on Deep Learning


Massih-Reza Amini, Université Grenoble Alpes, St Martin d'Hères, France

Schedule and place

This 15-hour course will take place in 6 sessions over three days on October 29, 30, 31, 2018, at UCL, Bâtiment Euler, 4 Avenue Georges Lemaître, 1348 Louvain-la-Neuve (room A002, ground floor).

Schedule: 6 hours per day, from 9:30 to 12:30 and from 13:30 to 16:30 (with a 30-minute coffee break in each session).

Travel instructions are available here.


In this lecture, we present the theory of machine learning within the framework developed by Vapnik. In particular, we introduce the notion of consistency, which guarantees that a prediction function can be learned. This study leads to the principle of structural risk minimization, which states that learning is a trade-off between achieving a low empirical error and controlling the capacity of the function class. This first stage will serve as a basis for our description of some classical machine learning algorithms, including Neural Networks, recently referred to as Deep Learning. For this purpose, we will give a historical perspective of the field, introducing its main challenges, concepts, and evolution. We will present what distinguishes these models from other machine learning or statistical techniques, describing some recent advances and highlighting some future challenges.
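The trade-off underlying structural risk minimization can be illustrated by the classical Vapnik-Chervonenkis generalization bound (a standard textbook form, stated here for intuition; the course may use a different variant). With probability at least $1-\delta$ over a sample of size $m$, every function $f$ in a class of VC dimension $h$ satisfies:

```latex
\[
R(f) \;\le\; \underbrace{\hat{R}_m(f)}_{\text{empirical error}}
\;+\;
\underbrace{\sqrt{\frac{h\left(\ln\frac{2m}{h}+1\right)+\ln\frac{4}{\delta}}{m}}}_{\text{capacity term}}
\]
```

Minimizing the empirical error alone favors rich function classes (large $h$), which inflates the capacity term; structural risk minimization selects, over a nested sequence of classes, the one that minimizes the sum of the two terms.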

Course material

A personal laptop with Python and the Jupyter environment installed

Slides for the lectures are available below:



Written report

For more information, please click on the link below: