[Webinar] Neural Operators: a Framework for Scalable Scientific Computing

Registration dates: 17 October 2025 – 23 October 2025
Course date: 23 October 2025
Registration is now closed

About the webinar

Traditional deep learning typically involves learning mappings between finite-dimensional vector spaces. By contrast, scientific and engineering applications such as weather forecasting and aerodynamics involve modelling complex spatiotemporal processes governed by partial differential equations (PDEs) defined on continuous domains and at multiple scales. In other words, they require learning mappings between infinite-dimensional function spaces.

Neural operators enable this by generalising deep learning to learn mappings directly between function spaces, while offering substantial speedups over traditional PDE solvers, often by several orders of magnitude.
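To make the idea concrete, the sketch below shows a minimal spectral convolution layer of the kind used in Fourier-type neural operators. This is an illustrative example written for this summary, not code from the talk: the learned weights act on a fixed number of low Fourier modes, so the same layer can be applied to the input function sampled at different resolutions, which is one sense in which the mapping is between functions rather than fixed-size vectors.

```python
# Minimal sketch of a 1D spectral convolution layer (illustrative, not the speaker's code).
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / (in_channels * out_channels)
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, n_points) -- a function sampled on a 1D grid
        x_ft = torch.fft.rfft(x)                      # to Fourier space
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # multiply the retained low-frequency modes by learned complex weights
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space


# The same trained layer accepts 256- or 1024-point discretisations of the input function.
layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
for n in (256, 1024):
    print(layer(torch.randn(4, 1, n)).shape)
```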

In this talk, the speaker will introduce the fundamental concepts behind neural operators and illustrate their effectiveness on practical problems such as weather forecasting. Finally, the presentation will address computational efficiency and practical implementation in Python, demonstrating how these concepts can be applied using open-source software.
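As a hedged illustration of the kind of open-source tooling referred to here, the snippet below instantiates a Fourier Neural Operator from the NeuralOperator library mentioned in the speaker bio; the exact constructor arguments may differ between library versions, and this example is not taken from the talk itself.

```python
# Hedged sketch of using the open-source NeuralOperator library
# (argument names may vary by version; check the library documentation).
import torch
from neuralop.models import FNO

# a 2D Fourier Neural Operator mapping a 3-channel input function to a 1-channel output
model = FNO(n_modes=(16, 16), hidden_channels=64, in_channels=3, out_channels=1)

x = torch.randn(8, 3, 64, 64)   # batch of input functions sampled on a 64x64 grid
y = model(x)                    # output functions on the same grid: (8, 1, 64, 64)
```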


This session is part of the VOILA! seminars, organised by EFELIA Côte d’Azur – French School of Artificial Intelligence. These seminars aim to explore the frontiers of AI in an inclusive and open manner, welcoming everyone. The goal is to provide insights and answers to major societal and academic questions on topics such as AI & Environment, AI & Work, AI & Education, AI & Media, AI & Law, AI & Creativity, AI & Health, and much more.

About the speaker

Dr. Jean Kossaifi

Dr. Jean Kossaifi is a Senior Research Scientist at NVIDIA, where he focuses on AI for scientific applications and efficient learning via tensor methods. His research centers on developing foundational algorithms for learning on function spaces using neural operators, as well as integrating tensor algebra with deep learning to create scalable models that are more memory and computation efficient.

Jean is actively working to democratize the state-of-the-art and make it accessible through his open-source software work. He leads several efforts, including TensorLy, a high-level Python library for tensor methods, and NeuralOperator, which implements state-of-the-art algorithms for neural operator learning in PyTorch.

Prior to joining NVIDIA, Jean was a founding Research Scientist at the Samsung AI Center in Cambridge. He earned his PhD and MSc from Imperial College London under the supervision of Prof. Maja Pantic, and holds a French engineering diploma (MEng) in Applied Mathematics, Computing, and Finance, as well as a parallel BSc in Advanced Mathematics.

More information