Course Outline

The course is divided into three distinct days, with the third day being optional.

Day 1 - Machine Learning & Deep Learning: Theoretical Concepts

1. Introduction to AI, Machine Learning & Deep Learning

- History, fundamental concepts, and common applications of artificial intelligence, distancing ourselves from the myths surrounding the field.

- Collective intelligence: Aggregating shared knowledge among many virtual agents.

- Genetic algorithms: Evolving a population of virtual agents through selection.

- Standard Machine Learning: Definition.

- Types of tasks: Supervised learning, unsupervised learning, reinforcement learning.

- Types of problems: Classification, regression, clustering, density estimation, dimensionality reduction.

- Examples of Machine Learning algorithms: Linear Regression, Naive Bayes, Random Forest.

- Machine Learning vs. Deep Learning: Problems where classic Machine Learning remains the state of the art today (Random Forests and XGBoost).
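To make the classic algorithms above concrete, here is a minimal sketch of linear regression fitted by ordinary least squares in pure NumPy. The synthetic data, coefficients, and noise level are illustrative assumptions, not course material:

```python
import numpy as np

# Illustrative linear regression: recover known coefficients from noisy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                        # 100 samples, 2 features
true_w = np.array([3.0, -1.5])
y = X @ true_w + 0.5 + 0.01 * rng.normal(size=100)   # intercept 0.5, small noise

# Append a bias column and solve the least-squares problem directly.
Xb = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(np.round(w, 2))  # close to [3.0, -1.5, 0.5]
```

In practice a library such as scikit-learn (`LinearRegression`) wraps this same computation; solving the problem by hand shows there is no magic behind the simplest supervised models.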

2. Fundamental Concepts of Neural Networks (Application: Multi-layer Perceptron)

- Review of mathematical basics.

- Definition of a neural network: Classic architecture, activation functions, weighting of previous activations, network depth.

- Definition of neural network learning: Cost functions, back-propagation, stochastic gradient descent, maximum likelihood.

- Modeling a neural network: Modeling input and output data according to the type of problem (regression, classification, etc.). Curse of dimensionality. Distinction between multi-feature data and signal. Choosing a cost function based on the data type.

- Approximating a function with a neural network: Presentation and examples.

- Approximating a distribution with a neural network: Presentation and examples.

- Data Augmentation: How to balance a dataset.

- Generalization of neural network results.

- Initializations and regularizations of a neural network: L1/L2 regularization, Batch Normalization, etc.

- Optimizations and convergence algorithms.
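The ideas in this section (activations, cost functions, back-propagation) can be sketched in a few lines of NumPy. This is a hedged illustration, not the course's own code: a one-hidden-layer perceptron with tanh activation and a mean-squared-error cost, plus a finite-difference check that the back-propagated gradient is correct. All shapes and values are arbitrary choices for the example:

```python
import numpy as np

# One-hidden-layer perceptron: h = tanh(W1 x), y = W2 h, L = 0.5 * ||y - t||^2
rng = np.random.default_rng(42)
x = rng.normal(size=(3,))              # single input with 3 features
t = np.array([1.0])                    # regression target
W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=(1, 4)) * 0.5

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)                # hidden activations
    y = W2 @ h                         # linear output layer
    return h, y

def loss(W1, W2):
    _, y = forward(W1, W2, x)
    return 0.5 * np.sum((y - t) ** 2)

# Analytic gradients via the chain rule (back-propagation).
h, y = forward(W1, W2, x)
dy = y - t                             # dL/dy
gW2 = np.outer(dy, h)                  # dL/dW2
dh = W2.T @ dy                         # gradient flowing back into h
gW1 = np.outer(dh * (1 - h**2), x)     # tanh'(z) = 1 - tanh(z)^2

# Numerical gradient for one entry of W1 as a sanity check.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (loss(W1p, W2) - loss(W1, W2)) / eps
print(abs(num - gW1[0, 0]) < 1e-4)     # True: backprop matches finite differences
```

Gradient checking of this kind is exactly what frameworks like PyTorch automate; stochastic gradient descent then repeatedly applies these gradients over mini-batches of data.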

3. Standard ML/DL Tools

For each tool, a brief overview covers its advantages, disadvantages, position in the ecosystem, and typical usage.

- Data management tools: Apache Spark, Apache Hadoop.

- Standard Machine Learning tools: NumPy, SciPy, scikit-learn.

- High-level DL frameworks: PyTorch, Keras, Lasagne.

- Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow.

Day 2 - Convolutional and Recurrent Networks

4. Convolutional Neural Networks (CNN)

- Presentation of CNNs: Fundamental principles and applications.

- Fundamental operation of a CNN: Convolutional layer, kernel usage, padding & stride, feature map generation, pooling layers. 1D, 2D, and 3D extensions.

- Presentation of various CNN architectures that set the state-of-the-art in image classification: LeNet, VGG Networks, Network in Network, Inception, ResNet. Presentation of the innovations introduced by each architecture and their broader applications (1x1 Convolution or residual connections).

- Use of attention models.

- Application to a standard classification case (text or image).

- CNNs for generation: Super-resolution, pixel-by-pixel segmentation. Presentation of the main feature map augmentation strategies for image generation.
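The convolutional layer described above reduces to a small loop. As a hedged sketch (the kernel and input are made up for the example), here is a 2-D "valid" convolution with stride 1. As in most DL frameworks, this is technically a cross-correlation, since the kernel is not flipped:

```python
import numpy as np

# 2-D "valid" convolution: slide the kernel over the image, no padding.
def conv2d(image, kernel, stride=1):
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1   # output height
    ow = (image.shape[1] - kw) // stride + 1   # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # elementwise product, then sum
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])      # horizontal-difference kernel
fmap = conv2d(image, edge)          # the resulting feature map
print(fmap.shape)                   # (4, 3)
```

Padding would preserve the spatial size, and a stride greater than 1 would subsample the feature map; pooling layers apply a similar sliding window but take a maximum or average instead of a weighted sum.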

5. Recurrent Neural Networks (RNN)

- Presentation of RNNs: Fundamental principles and applications.

- Fundamental operation of the RNN: Hidden activation, backpropagation through time, unrolled version.

- Evolution toward GRUs (Gated Recurrent Units) and LSTMs (Long Short-Term Memory). Presentation of the different states and evolutionary improvements brought by these architectures.

- Convergence problems and vanishing gradient.

- Types of classic architectures: Time series prediction, classification, etc.

- RNN Encoder-Decoder architecture. Use of an attention model.

- NLP applications: Word/character encoding, translation.

- Video applications: Predicting the next image in a video sequence.
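The "unrolled" view of an RNN mentioned above can be sketched directly: the same weight matrices are reused at every time step, and each hidden state depends on the previous one. Dimensions and random weights here are illustrative assumptions:

```python
import numpy as np

# Vanilla RNN unrolled over time: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
rng = np.random.default_rng(1)
Wx = rng.normal(size=(5, 3)) * 0.1   # input-to-hidden weights (shared)
Wh = rng.normal(size=(5, 5)) * 0.1   # hidden-to-hidden (recurrent) weights
b = np.zeros(5)

def rnn_forward(xs):
    h = np.zeros(5)                  # initial hidden state
    states = []
    for x in xs:                     # one iteration per time step
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return states

xs = rng.normal(size=(7, 3))         # a sequence of 7 time steps
states = rnn_forward(xs)
print(len(states), states[-1].shape) # 7 hidden states of dimension 5
```

Back-propagation through time differentiates through this loop; the repeated multiplication by `Wh` is precisely what causes the vanishing-gradient problem that GRUs and LSTMs mitigate with their gating mechanisms.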

Day 3 - Generative Models and Reinforcement Learning

6. Generative Models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN)

- Presentation of generative models, link with CNNs covered on Day 2.

- Autoencoder: Dimensionality reduction and limited generation.

- Variational Autoencoder: Generative model and approximation of data distribution. Definition and use of latent space. Reparameterization trick. Applications and observed limitations.

- Generative Adversarial Networks: Fundamental principles. Two-network architecture (generator and discriminator) with alternating learning, available cost functions.

- GAN convergence and difficulties encountered.

- Improved convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

- Applications in image or photograph generation, text generation, super-resolution.
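The reparameterization trick mentioned for VAEs fits in a few lines. In this hedged sketch, `mu` and `log_var` stand in for encoder outputs (here fixed constants chosen for illustration): instead of sampling z ~ N(mu, sigma²) directly, which is not differentiable with respect to mu and sigma, one samples eps ~ N(0, 1) and computes z deterministically from it:

```python
import numpy as np

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1).
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])            # mean "predicted by the encoder"
log_var = np.array([0.0, 0.5])        # log-variance "predicted by the encoder"
sigma = np.exp(0.5 * log_var)

eps = rng.standard_normal(size=(100_000, 2))   # noise independent of the model
z = mu + sigma * eps                  # differentiable w.r.t. mu and sigma

print(z.mean(axis=0), z.std(axis=0))  # close to mu and sigma respectively
```

Because the randomness is isolated in `eps`, gradients of the VAE cost can flow through `z` back into the encoder parameters, which is what makes end-to-end training of the generative model possible.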

7. Deep Reinforcement Learning

- Presentation of reinforcement learning: Controlling an agent in an environment defined by a state and possible actions.

- Use of a neural network to approximate the value function.

- Deep Q-Learning: Experience replay, and application to video game control.

- Policy optimization. On-policy & off-policy. Actor-Critic architecture. A3C.

- Applications: Control of a simple video game or a digital system.
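As a stepping stone toward Deep Q-Learning, here is a hedged sketch of tabular Q-learning on a toy environment invented for the example: a 5-state corridor where action 0 moves left, action 1 moves right, and reaching state 4 yields reward 1 and ends the episode. Deep Q-Learning replaces the table with a neural network and stabilizes training with experience replay; the hyperparameters below are arbitrary illustrative choices:

```python
import numpy as np

# Tabular Q-learning on a 5-state corridor (terminal state: 4, reward 1).
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s = 0
    while s != 4:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0
        # Off-policy update: bootstrap from the best next action.
        target = r + (0.0 if s2 == 4 else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1)[:4])  # the greedy policy moves right in states 0..3
```

The `np.max(Q[s2])` in the target, independent of the action actually taken, is what makes Q-learning off-policy; an on-policy method such as SARSA would use the next action chosen by the behavior policy instead.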

Requirements

Engineer level

Duration: 21 Hours
