Major Mathematics, Image, Data

Imaging methods are at the core of many applications, including medical imaging, biology and environmental science. These applications call for the specific mathematical methods presented in this major. Connections with other mathematical domains, including statistics and machine learning, will be highlighted.

The courses of this major are complemented by
– refresher courses at the beginning of the academic year
– a core curriculum during the first semester
– two additional courses to be chosen among those offered in the other majors (see the welcome page for the complete list).

Geometrical approaches for images and shapes

Images, shapes and point clouds: different types, acquisition, acquisition defects.

Different types of problems studied (denoising, segmentation, classification, reconstruction, compression, …)

Total variation model: mathematical foundations, applications, numerical methods
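
As an illustration, a standard instance of this model is the Rudin–Osher–Fatemi (ROF) denoising problem, written here in generic notation (f the observed image, u the restored one; neither symbol is taken from the course material):

    \min_{u}\ \tfrac{1}{2}\,\|u - f\|_{L^2(\Omega)}^2 \;+\; \lambda\,\mathrm{TV}(u),
    \qquad \mathrm{TV}(u) = \int_\Omega |\nabla u|\,dx,

where the parameter \lambda > 0 balances fidelity to the data against the total variation of the reconstruction.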

Length, area, curvature: definitions, applications, numerical methods

Spectral analysis of images and shapes, applications.
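
As a minimal sketch of the idea (Python, all choices illustrative): the eigenvectors of the Laplacian of a path graph form a discrete cosine basis, which is the elementary fact behind Fourier-type spectral analysis of shapes; projecting a signal onto the low-frequency eigenvectors smooths it.

    import numpy as np

    # Combinatorial Laplacian of a path graph with n vertices (a 1-D "shape").
    n = 64
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1                  # endpoint vertices have degree 1

    evals, evecs = np.linalg.eigh(L)         # eigenvalues in increasing order

    # Low-frequency eigenvectors oscillate slowly (discrete cosines); projecting
    # a noisy signal onto the first k of them acts as a smoothing filter.
    t = np.linspace(0, 3 * np.pi, n)
    signal = np.sin(t) + 0.3 * np.random.default_rng(2).standard_normal(n)
    k = 10
    smoothed = evecs[:, :k] @ (evecs[:, :k].T @ signal)
    print("first eigenvalues:", np.round(evals[:4], 4))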

Neural networks (deep, hybrid, specialized): structures and learning principles, geometric applications to image, shape and dataset processing.

Variational methods for inverse problems in medical imaging

Objective

The objective of this course is the numerical solution of inverse problems arising in medical imaging, with both linear (X-ray tomography, Radon transform) and nonlinear examples (elastography, thermo-acoustic or photo-acoustic imaging, electrical impedance imaging). The well- or ill-posed and well- or ill-conditioned character of these problems will be studied, and regularization techniques based on variational approaches will be presented. This will be followed by the study and implementation of algorithms for solving an inverse problem.
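
Schematically, and in generic notation not taken from the course material, the variational approach replaces the (possibly ill-posed) equation A(u) = f by a penalized least-squares problem

    \min_{u}\ \tfrac{1}{2}\,\|A(u) - f\|^2 \;+\; \alpha\,R(u),

where A is the (linear or nonlinear) forward operator, f the measured data, \alpha > 0 the regularization parameter, and R the regularization functional: R(u) = \|u\|^2 (Tikhonov), R(u) = \|\nabla u\|_{L^2}^2 (smooth, H1), or R(u) = \mathrm{TV}(u) (non-smooth total variation).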

Key words

– Inverse problems in medical imaging
– Variational approach
– Adjoint state method
– Singular value decomposition
– Tikhonov regularization
– Smooth (H1) and non-smooth (total variation) regularization

Competences

– Know examples of inverse problems encountered in medical imaging
– Know how to propose an algorithm to solve linear and nonlinear inverse problems
– Master different techniques of variational regularization
– Know how to implement the different algorithms efficiently (in Matlab)
– Know how to assimilate a research article in imaging

Program

1. Mathematical modeling in medical imaging
2. Optimization and adjoint state method
3. Singular value decomposition and Tikhonov regularization (see the sketch after this list)
4. H1 regularization and total variation
5. Application to the Radon transform, and to electrical impedance tomography (EIT) and/or thermoacoustics
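
As a minimal sketch of item 3 (in Python rather than Matlab, with a synthetic ill-conditioned operator; every name and parameter is illustrative, not course material), the SVD A = U diag(s) V^T exposes the ill-conditioning, and Tikhonov regularization replaces the unstable factors 1/s_i by the filters s_i/(s_i^2 + \alpha):

    import numpy as np

    # Synthetic ill-conditioned linear problem A u = f (all values illustrative).
    rng = np.random.default_rng(0)
    n = 50
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = 10.0 ** np.linspace(0, -8, n)               # decaying singular values
    A = U @ np.diag(s) @ V.T

    u_true = np.sin(np.linspace(0, np.pi, n))       # "ground truth"
    f = A @ u_true + 1e-6 * rng.standard_normal(n)  # noisy measurements

    # Naive inversion amplifies the noise by 1/s_i on small singular values.
    u_naive = V @ ((U.T @ f) / s)

    # Tikhonov regularization: filter factors s_i / (s_i^2 + alpha).
    alpha = 1e-10
    u_tikh = V @ ((s * (U.T @ f)) / (s**2 + alpha))

    print("naive error   :", np.linalg.norm(u_naive - u_true))
    print("Tikhonov error:", np.linalg.norm(u_tikh - u_true))

With singular values decaying to 10^{-8} and noise of size 10^{-6}, the naive inversion is dominated by amplified noise, while the regularized solution should stay close to the ground truth.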

Independent work

– Work on a research paper: assimilation of its mathematical and numerical model

Sparsity and high dimension

Sparsity and convexity are ubiquitous notions in Machine Learning and Statistics. In this course, we study the mathematical foundations of some powerful methods based on convex relaxation, such as L1-regularisation techniques in Statistics and Signal Processing.

These approaches turn out to be Semi-Definite Programming (SDP) representable, and hence tractable in practice.
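
In generic form (notation illustrative), an SDP-representable problem can be cast as a linear objective over the intersection of an affine subspace with the cone of positive semidefinite matrices:

    \min_{X \in \mathbb{S}^n}\ \langle C, X\rangle
    \quad \text{s.t.}\quad \langle A_i, X\rangle = b_i,\ i = 1,\dots,m, \qquad X \succeq 0 .

The convex relaxations of K-means, graph clustering and matrix completion listed in the keywords below all fit this template.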

The theoretical part of the course will focus on performance guarantees for these approaches and for the corresponding algorithms under the sparsity assumption. The practical part will present the standard SDP solvers for these learning problems.
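
As a concrete example of the L1-regularisation family, here is a sketch (illustrative Python, not course material) of the iterative soft-thresholding algorithm (ISTA) for the problem \min_x \tfrac{1}{2}\|Ax - y\|^2 + \lambda\|x\|_1:

    import numpy as np

    def ista(A, y, lam, n_iter=500):
        """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L    # gradient step on the smooth part
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
        return x

    # Toy sparse recovery: 40 random measurements of a 5-sparse signal in R^100.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)
    x_true = np.zeros(100)
    x_true[[3, 17, 42, 60, 88]] = rng.standard_normal(5)
    y = A @ x_true
    x_hat = ista(A, y, lam=0.01)
    print("estimated support:", np.nonzero(np.abs(x_hat) > 0.05)[0])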

Keywords: L1-regularisation; Matrix Completion; K-Means; Graph Clustering; Semi-Definite Programming.

Neural networks

The goal of this course is twofold:

  • Introduce the principles behind deep neural networks and their implementation for addressing classification and regression problems.
  • Give an overview of the mathematical tools associated with modern learning methods based on these networks.

The course will start with the universal approximation property of neural networks. We will then investigate the improvement brought by the depth of neural networks for accurate function approximation at a given computational cost.
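
For reference, a standard form of this property (stated here in generic notation, for any continuous non-polynomial activation \sigma, e.g. sigmoid or ReLU) says that one hidden layer already gives density in C([0,1]^d):

    \forall f \in C([0,1]^d),\ \forall \varepsilon > 0,\ \exists N,\ (a_i, b_i, w_i)_{i=1}^{N}:\quad
    \sup_{x \in [0,1]^d}\Big| f(x) - \sum_{i=1}^{N} a_i\,\sigma(w_i^{\top}x + b_i) \Big| < \varepsilon .

Depth then affects how large N must be to reach a given accuracy \varepsilon, which is the object of this part of the course.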

Tools for handling the optimization issues that arise when training on large datasets will be provided, together with some convergence results.

Finally, statistical results on the generalisation guarantees of deep neural networks will be presented, both in the (classical) underfitting regime and in the overfitting regime leading to the ‘double descent’ phenomenon.