Since 1 December 2024, I have been a postdoc at the University of Montpellier, France. I work on new efficient techniques to visualize and analyze spatio-temporal data with the help of deep learning and explainability approaches, at the LIRMM (Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier). I am a member of the ADVANSE (``ADVanced Analytics for data SciencE'') team.
I obtained my PhD in computer science in 2024 at the LaBRI (Laboratoire Bordelais de Recherche en Informatique) in Bordeaux (supervisor: David AUBER). My PhD thesis was entitled ``Visualization for the Explanation of Deep Neural Networks''.
I obtained my Master's degree in Computer Science (Image and Sound Processing) in 2020 at the University of Bordeaux.
I was awarded the Isabelle Attali prize in 2018.
Information Visualization, Deep Learning, XAI (eXplainable Artificial Intelligence), Explainable pruning, Computer Vision
Understanding the world around us requires observing and analyzing the phenomena that
govern it.
Observing these phenomena requires the systematic collection of large quantities of data to build
models representative of reality.
These models are then used to extract knowledge and properties about the object of study.
The more complex the data, the more difficult it becomes for a human to find processing rules or concepts to
extract.
This is where AI algorithms (e.g. machine and deep learning) come in to help.
As the superior performance of these algorithms is no longer in question, they are widely used wherever
decision making is required.
In addition to the worrying ecological footprint of these tools, the ethical aspect is also at stake.
Indeed, these algorithms are often considered ``black boxes'', making arbitrary decisions without
justification.
The question of responsibility in case of a bad decision then arises.
It becomes necessary to build systems that help us understand the behavior (good or bad) of such
algorithms, in order to improve the trust we can place in their decisions.
This problem has given rise to a field of research called XAI (eXplainable AI), which is currently
expanding rapidly.
Within this research field, I am particularly interested in building new visualizations that highlight the
behavior of AI models and provide justifications for their decisions.
On the other hand, I am interested in tackling the over-parameterization problem of neural networks.
Considering a neural network as a directed graph (where nodes are parameters/filters/neurons, and
edges carry input/output data), my work consists of studying the role of graph nodes and removing those
that do not contribute significantly to the decisions.
The induced sub-graphs represent sub-networks that achieve comparable performance on a task with significantly
fewer parameters, fewer operations to process input data, fewer computation resources, and thus significantly
lower energy consumption.
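As an illustration of this graph view of pruning (not my actual method), the sketch below removes the hidden neurons of a toy two-layer network whose incoming weights have the lowest L1 norm, a common importance proxy. The function name `prune_neurons` and the importance metric are assumptions for the example; removing a node deletes its column in the incoming weight matrix and its row in the outgoing one, yielding the induced sub-network.

```python
import numpy as np

def prune_neurons(w_in, w_out, keep_ratio=0.5):
    """Structured pruning sketch: each hidden neuron is a graph node.

    w_in  : (n_inputs, n_hidden) weights feeding the hidden layer
    w_out : (n_hidden, n_outputs) weights leaving the hidden layer
    Neurons with the lowest incoming-weight L1 norm are removed; the
    surviving rows/columns form the induced sub-network.
    """
    importance = np.abs(w_in).sum(axis=0)            # one score per hidden neuron
    n_keep = max(1, int(keep_ratio * w_in.shape[1])) # number of nodes to retain
    keep = np.sort(np.argsort(importance)[-n_keep:]) # indices of the kept nodes
    return w_in[:, keep], w_out[keep, :]

rng = np.random.default_rng(0)
w_in = rng.normal(size=(4, 8))    # 8 hidden neurons
w_out = rng.normal(size=(8, 3))
w_in_p, w_out_p = prune_neurons(w_in, w_out, keep_ratio=0.25)
print(w_in_p.shape, w_out_p.shape)  # (4, 2) (2, 3): 2 of 8 neurons survive
```

The pruned matrices process an input with a quarter of the hidden units, so the forward pass needs proportionally fewer multiply-adds, which is the source of the computation and energy savings mentioned above.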