Antoine Moulin

Hi, I am a PhD student in reinforcement learning (RL). My aim is to better understand the challenges posed by large-scale RL by identifying and exploiting structural properties of Markov decision processes (MDPs) that make learning at scale statistically and computationally feasible.

More specifically, I am interested in efficiently balancing the exploration-exploitation tradeoff in infinite-horizon MDPs under (linear) function approximation. My research interests also include online learning, offline reinforcement learning, representation learning, and, more broadly, learning theory.

I am co-advised by Gergely Neu and Arthur Gretton. You can reach out to me at: firstname [dot] lastname [at] upf [dot] edu.

CV  /  Google Scholar  /  Twitter  /  GitHub

News
  • 10/2024 - The virtual RL theory seminars are back; join us for the new season!
  • 07/2024 - We are organizing an ICML workshop on RL; check it out :)
  • 06/2024 - I am moving to London to intern and work on uncertainty quantification. Specifically, I'll focus on computing finite-time predictive confidence intervals for general function classes.
  • 08/2023 - I'm going to the ELLIS symposium later this month. See you in Helsinki!
  • 07/2023 - I'll present our ICML paper at EWRL16, in Brussels.
  • 04/2023 - Our paper "Optimistic Planning by Regularized Dynamic Programming" has been accepted at ICML 2023. See you in Hawaii!
Publications & Preprints

2025


When Lower-Order Terms Dominate: Improved Loss-Range Adaptivity for Experts Algorithms
Antoine Moulin, Emmanuel Esposito, Dirk van der Hoeven
preprint
arxiv [soon]

Spectral Representation for Causal Estimation with Hidden Confounders
Haotian Sun, Antoine Moulin, Tongzheng Ren, Arthur Gretton, Bo Dai
(AISTATS 2025) 28th International Conference on Artificial Intelligence and Statistics.
arxiv

2023

Optimistic Planning by Regularized Dynamic Programming
Antoine Moulin, Gergely Neu
(ICML 2023) 40th International Conference on Machine Learning.
arxiv

Talks

Learning in Adversarial Linear MDPs
04/2024 - University of Tokyo. Tokyo, Japan.

Optimistic Planning by Regularized Dynamic Programming
07/2023 - Stanford University. Stanford, CA.
08/2023 - Princeton University. Princeton, NJ.

Infinite Horizon MDPs under Function Approximation
03/2023 - Universitat Pompeu Fabra. Barcelona, Spain.

Primal-Dual Methods for Reinforcement Learning
09/2022 - Gatsby Unit, UCL. London, UK.

Introduction to JAX
09/2021 - ELLIS Doctoral Symposium 2021. Tübingen, Germany.

Virtual Sculpture
06/2018 - Journée de l'innovation (finalist). Paris, France.



Thanks to Jon Barron for this nice template.