Machine Learning: classification and evaluation [LINGI2262]
5.0 ECTS credits
30.0 h + 30.0 h
Q1 (first quadrimester)
Teacher(s) |
Dupont Pierre ;
|
Language |
English
|
Place of the course |
Louvain-la-Neuve
|
Online resources |
Compulsory slides, available at:
http://www.icampus.ucl.ac.be/claroline/course/index.php?cid=INGI2262
and, more generally, all the documents (mini-project statements) made available on the same website.
|
Prerequisites |
Basic knowledge of Probability, Statistics and Algorithmics (as provided by the courses BIR1203, BIR1304 and SINF1121)
|
Main themes |
- Learning as search, inductive bias
- Combinations of decisions
- Loss function minimization, gradient descent
- Performance assessment
- Instance-based learning
- Probabilistic learning
- Unsupervised classification
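As an informal illustration of the "Loss function minimization, gradient descent" theme listed above (a sketch for orientation only, not part of the official course material), the following Python/NumPy example minimizes the logistic (cross-entropy) loss of a linear classifier with plain batch gradient descent. The toy data, learning rate and iteration count are arbitrary choices made for this illustration.

    import numpy as np

    # Toy two-class data set: two Gaussian blobs (arbitrary example data).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
                   rng.normal(+1.0, 1.0, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # append a constant bias feature

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Batch gradient descent on the mean logistic (cross-entropy) loss.
    w = np.zeros(X.shape[1])
    learning_rate = 0.1              # step size (arbitrary)
    for _ in range(500):             # number of iterations (arbitrary)
        p = sigmoid(X @ w)           # predicted P(y = 1 | x)
        gradient = X.T @ (p - y) / len(y)
        w -= learning_rate * gradient

    train_accuracy = np.mean((sigmoid(X @ w) >= 0.5) == y)
    print(f"training accuracy: {train_accuracy:.2f}")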
|
Aims |
Students who successfully complete this course will be able to:
- understand and apply standard techniques to build computer programs that automatically improve with experience, especially for classification problems
- assess the quality of a learned model for a given task
- assess the relative performance of several learning algorithms
- justify the use of a particular learning algorithm given the nature of the data, the learning problem and a relevant performance measure
- use, adapt and extend learning software
Students will also have developed skills and an operational methodology. In particular, they will have developed their ability to:
- use the technical documentation to make efficient use of existing packages,
- communicate test results in a short report using graphics.
|
Evaluation methods |
- 25% assignments (the 4 mini-projects)
- 75% final exam
The mini-projects cannot be redone during the second examination session; the 25% assignment grade is therefore fixed at the end of Q1.
|
Teaching methods |
- Lectures
- Written assignments and/or mini-projects (2 students per group, lasting 1 to 3 weeks)
- Assignment feedback
|
Content |
- Decision Tree Learning: ID3, C4.5, CART, Random Forests
- Linear Discriminants: Perceptrons, Gradient-Descent and Least-Squares Procedures
- Maximal Margin Hyperplanes and Support Vector Machines
- Probability and Statistics in Machine Learning
- Performance Assessment: Hypothesis testing, Comparing Learning Algorithms, ROC analysis
- Gaussian Classifiers, Fisher Linear Discriminants
- Bayesian Learning: ML, MAP, Optimal Classifier, Naive Bayes
- Instance-based learning: k-NN, LVQ
- Clustering Techniques
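As an informal pointer to the kind of experiment these topics lead to (a sketch for orientation only, not part of the official course material), the example below trains a decision tree and a k-NN classifier, compares them with 10-fold cross-validation, and reports a ROC AUC on a held-out test set. It assumes the scikit-learn library; the data set, number of folds and hyper-parameters are arbitrary choices made for this illustration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import roc_auc_score

    # Binary classification data set bundled with scikit-learn (arbitrary choice).
    X, y = load_breast_cancer(return_X_y=True)

    # Compare two classifiers by 10-fold cross-validated accuracy.
    models = {
        "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
        "5-NN": KNeighborsClassifier(n_neighbors=5),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)
        print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")

    # ROC analysis of the decision tree on a held-out test set.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
    scores_pos = tree.predict_proba(X_te)[:, 1]   # scores for the positive class
    print(f"decision tree test AUC: {roc_auc_score(y_te, scores_pos):.3f}")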
|
Bibliography |
The mandatory material for this course is defined as the set of documents and slides made available on the icampus website, together with the oral communications and talks given during the weekly lectures. A copy of the lecture slides is the only material that can be consulted during the final examination.
Recommended Books
- Christopher Bishop, Pattern Recognition and Machine Learning
- Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification
- Trevor Hastie, Robert Tibshirani, and Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction
- Thomas Mitchell, Machine Learning
- I.H. Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques
|
Cycle and year of study |
> Master [120] in Computer Science
> Master [120] in Computer Science and Engineering
> Master [120] in Electro-mechanical Engineering
> Master [120] in Mathematical Engineering
> Master [120] in Electrical Engineering
> Master [120] in Statistics: General
> Master [120] in Biomedical Engineering
|
Faculty or entity in charge |
> INFO
|