Joint ISP Group and MLG Group seminar
Title: "Learning Deep Neural Network"
Speaker: Romain Hérault, Associate Professor (Maître de conférences), INSA Rouen, France (invited talk)
Location: "Shannon" Seminar Room, Place du Levant 3, Maxwell Building, 1st floor
Date / Time (duration): Wednesday 29/04/2015, 10h45 (~ 45')
In this talk we will address the problem of learning Deep Neural
Networks (DNN) through the use of smart initializations or
regularizations. Moreover, we will look at recent applications of DNN to
structured output problems (such as image labeling or facial landmark detection).
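To give a flavour of the "smart initialization" idea, here is a minimal NumPy sketch of one greedy layer of denoising-autoencoder pretraining: corrupt the input, learn to reconstruct the clean version, then use the learned weights to initialize the corresponding DNN layer. This is an illustration under simplifying assumptions (single sigmoid layer, tied weights, squared error), not code from the talk; all names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_denoising_ae(X, n_hidden, noise=0.3, lr=0.1, epochs=200):
    """One greedy layer of unsupervised pretraining: corrupt the input
    with masking noise, then learn to reconstruct the clean input
    (tied weights, squared reconstruction error)."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
    b_h = np.zeros(n_hidden)
    b_o = np.zeros(n_in)
    for _ in range(epochs):
        X_noisy = X * (rng.random(X.shape) >= noise)  # masking corruption
        H = sigmoid(X_noisy @ W + b_h)                # encode
        R = sigmoid(H @ W.T + b_o)                    # decode (tied W)
        # Backprop of the squared reconstruction error through both paths
        dR = (R - X) * R * (1.0 - R)                  # decoder pre-activation grad
        dH = (dR @ W) * H * (1.0 - H)                 # encoder pre-activation grad
        W -= lr * (X_noisy.T @ dH + dR.T @ H) / len(X)
        b_h -= lr * dH.mean(axis=0)
        b_o -= lr * dR.mean(axis=0)
    return W, b_h  # use these to initialize the matching DNN layer

X = rng.random((64, 16))
W, b_h = pretrain_denoising_ae(X, n_hidden=8)
```

Stacking several such layers (each trained on the hidden codes of the previous one) gives the stacked-autoencoder initialization mentioned in the outline.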
- Introduction to supervised learning: why use regularization? Why look for sparsity?
- Introduction to perceptron, multilayer perceptron and back-propagation
- Deep Neural Network and the vanishing gradient problem
- Smart initializations and topologies (stacked autoencoders, convolutional neural networks)
- Regularization (denoising and contractive autoencoders, dropout, multi-objective learning)
- Deep architecture for high dimensional output or structured output problems
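Of the regularizers listed above, dropout is the easiest to convey in a few lines. Below is a minimal NumPy sketch of "inverted" dropout (my illustration, not code from the talk; the function name is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, p_drop=0.5, train=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training and rescale survivors by 1/(1 - p_drop), so the expected
    activation matches test time (where the layer is the identity)."""
    if not train or p_drop == 0.0:
        return x
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)

x = np.ones((4, 8))
y = dropout_forward(x, p_drop=0.5)           # entries are either 0.0 or 2.0
print(dropout_forward(x, train=False) is x)  # identity at test time: True
```

Randomly silencing units prevents co-adaptation of feature detectors, which is the motivation given by Hinton et al. in the reference below.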
- Y. Bengio, A. Courville, and P. Vincent, "Representation Learning: A Review and New Perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, Aug. 2013 (arXiv:1206.5538).
- G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," 2012 (arXiv:1207.0580).
- Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional networks and applications in vision," in Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), May 2010, pp. 253-256.
- J. Lerouge, R. Hérault, C. Chatelain, F. Jardin, and R. Modzelewski, "IODA: An input/output deep architecture for image labeling," Pattern Recognition, available online 27 March 2015, ISSN 0031-3203, doi:10.1016/j.patcog.2015.03.017.