Major: Mathematics, Images, Data

Image processing and data processing are at the core of many applications, including artificial intelligence, medical imaging, aerial and satellite imaging, spatial ecology, robotics, quality control in industry, weather forecasting, computer graphics, and more.

This major aims to introduce key mathematical tools—deterministic, statistical, and probabilistic—fundamental to image and data processing. It is closely linked to the Mathematics, Statistical Learning, and Machine Learning major, with which it shares two courses.

The courses of this major are complemented by
– refresher courses at the beginning of the academic year,
– a core curriculum during the first semester,
– two additional courses to be chosen among those offered in the other majors (see the welcome page for a complete list).

Geometric approaches for images and shapes

Images, shapes, and point clouds: different types, acquisition, acquisition defects.

Types of problems investigated: denoising, segmentation, classification, reconstruction, compression…

Total variation model: mathematical foundations, applications, numerical methods.
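A standard instance of this framework, stated here for concreteness, is the Rudin–Osher–Fatemi (ROF) denoising model, which recovers an image u from a noisy observation f by minimizing

```latex
\min_{u} \; \int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx
```

where the total variation term preserves sharp edges and the parameter λ > 0 weights the fidelity to the data.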

Length, area, curvatures: tools from differential geometry, approximations, applications, numerical methods.
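A typical example of such tools, recalled here for concreteness: when a shape is represented as a level set of a smooth function u, the curvature of its boundary admits the classical expression

```latex
\kappa \;=\; \operatorname{div}\!\left( \frac{\nabla u}{|\nabla u|} \right)
```

which lends itself to the discrete approximations and numerical methods mentioned above.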

Spectral analysis of images and shapes, applications.

Neural networks (deep, hybrid, specialized): structures and learning principles, applications (particularly geometric) to image, shape, and point cloud processing.

Variational methods for inverse problems in medical imaging

Objective

The objective of this course is the numerical solution of inverse problems from medical imaging, with linear examples (X-ray imaging, Radon transform) and nonlinear ones (elastography, thermo-acoustic or photo-acoustic imaging, electrical impedance imaging). The well- or ill-posed and well- or ill-conditioned character of these problems will be studied, and regularization techniques based on variational approaches will be presented. This work will be followed by the study and implementation of algorithms for solving an inverse problem.
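Schematically, these problems consist in recovering an unknown u from measurements f ≈ A(u), where the forward operator A models the acquisition (for instance the Radon transform). A variational regularization replaces the unstable direct inversion by a minimization of the form

```latex
\min_{u} \; \tfrac{1}{2} \, \|A(u) - f\|^2 \;+\; \alpha \, R(u)
```

where R is a regularization functional (Tikhonov, H¹, or total variation, as in the key words below) and the parameter α > 0 balances data fidelity against stability.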

Key words

– Inverse problems in medical imaging
– Variational approach
– Adjoint state method
– Singular value decomposition
– Tikhonov regularization
– Smooth (H1) and non-smooth (total variation) regularization

Competences

– Being familiar with examples of inverse problems encountered in medical imaging
– Mastering different techniques of variational regularization
– Being familiar with standard algorithms for solving linear and nonlinear inverse problems
– Knowing how to implement the different algorithms efficiently (in Matlab)
– Being able to understand a research article in imaging

Program

1. Mathematical modeling in medical imaging
2. Optimization and adjoint state method
3. Singular value decomposition and Tikhonov regularization (see the sketch after this list)
4. H1 regularization and total variation
5. Application to the Radon transform, and to electrical impedance tomography (EIT) and/or thermoacoustics
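As an illustration of item 3, here is a minimal sketch of Tikhonov regularization computed through the singular value decomposition of a linear forward operator. It is written in Python/NumPy rather than the Matlab used in the course, and the operator A, data f, and parameter alpha below are toy placeholders.

```python
import numpy as np

def tikhonov_svd(A, f, alpha):
    """Solve min_u ||A u - f||^2 + alpha ||u||^2 via the SVD of A."""
    # Thin SVD: A = U diag(s) V^T
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Filtered singular values s_i / (s_i^2 + alpha) damp the small
    # singular values responsible for the ill-conditioning.
    filt = s / (s**2 + alpha)
    return Vt.T @ (filt * (U.T @ f))

# Toy ill-conditioned example: a smoothing (convolution-like) operator.
rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.001)  # Gaussian kernel matrix
u_true = np.sin(2 * np.pi * x)
f = A @ u_true + 1e-3 * rng.standard_normal(n)         # noisy measurements
u_rec = tikhonov_svd(A, f, alpha=1e-4)
```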

Independent work

– Work on a research paper: appropriation of the mathematical and numerical model

Sparsity and high dimension

Sparsity and convexity are ubiquitous notions in Machine Learning and Statistics. In this course, we study the mathematical foundations of some powerful methods based on convex relaxation, such as L1-regularisation techniques in Statistics and Signal Processing.

These approaches turn out to be representable as semidefinite programs (SDP), and hence tractable in practice.

The theoretical part of the course will focus on the performance guarantees for these approaches and for the corresponding algorithms under the sparsity assumption. The practical part of the course will present the standard SDP solvers for these learning problems.
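The course relies on SDP solvers; as a simpler, self-contained illustration of L1-regularised least squares, here is a sketch of the proximal-gradient (ISTA) iteration, with toy data A, b and a placeholder parameter lam.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize (1/2)||A x - b||^2 + lam ||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery: few nonzero coefficients, many features.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
```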

Keywords: L1-regularisation; Matrix Completion; K-Means; Graph Clustering; Semi-Definite Programming.

Neural networks

The goal of this course is twofold:

  • Introduce the principles behind deep neural networks, and the associated implementation for addressing classification and regression problems.
  • Propose an overview of mathematical tools associated with modern learning methods based on these networks.

The course will start with the universal approximation property of neural networks. Then we will investigate the improvement brought by the depth of neural networks for accurate function approximation at a given computational cost.
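To make the universal approximation property concrete, here is a minimal sketch, not taken from the course material, that fits a one-hidden-layer ReLU network to a one-dimensional function by full-batch gradient descent; the width, learning rate, and target function are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]   # inputs, shape (200, 1)
y = np.sin(x)                                  # target function

# One hidden layer of ReLU units: f(x) = relu(x W1 + b1) W2 + b2
width, lr = 64, 1e-2
W1 = rng.standard_normal((1, width))
b1 = np.zeros(width)
W2 = rng.standard_normal((width, 1)) / width
b2 = np.zeros(1)

for _ in range(5000):
    h = np.maximum(x @ W1 + b1, 0.0)           # hidden activations
    pred = h @ W2 + b2
    err = pred - y                             # dL/dpred for mean squared error
    n = len(x)
    # Backpropagation by hand for this two-layer network.
    gW2 = h.T @ err / n
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0)                # gradient through the ReLU
    gW1 = x.T @ dh / n
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```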

Tools for dealing with learning issues during the training process on large data sets will be provided, together with some convergence results.

Finally, statistical results on the generalisation guarantees of deep neural networks will be presented, both in the (classical) underfitting scenario and in the overfitting case, leading to the 'double descent' phenomenon.