I have been a PhD student at Mila Québec & University of Montréal since Fall 2024, working with Professors Pablo Samuel Castro and Glen Berseth. I obtained my MSc from Mila Québec & University of Montréal in Montréal, Canada, and my BSc in Data Science and Engineering from Universitat Politècnica de Catalunya (UPC) in Barcelona, Spain.
From Barcelona, Spain
Currently in Montreal, Canada
Research
My research centers on general, autonomous agents built on Deep Reinforcement Learning (RL) and Foundation Models (LLMs, VLMs). I explore how to integrate the structured learning and adaptability of RL with the broad priors and reasoning abilities of foundation models, using them to improve exploration, credit assignment, and skill discovery. I'm particularly interested in how RL can make foundation models more agentic, unifying reasoning and control for general-purpose AI agents.
Interests
Experience
- Research Intern @ Vmax AI
- Research Intern @ Ubisoft LaForge
- Teaching Assistant @ University of Montreal
- Junior Data Scientist @ HP Inc
- Research Assistant @ UPC
- Basketball Coach @ Sagrada Familia Claror
News
Publications
ICLR 2026
Abstract
We present ARM-FM, a framework for automated, compositional reward design in reinforcement learning that uses foundation models to generate reward machines (formal automata for specifying objectives) directly from natural language. By pairing the high-level reasoning of foundation models with the structured formalism of reward machines, ARM-FM produces robust, generalizable RL agents and demonstrates its effectiveness, including zero-shot generalization, on diverse, challenging environments.
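A reward machine can be sketched as a small finite-state automaton whose transitions fire on high-level events and emit rewards along the way. The example below is a hypothetical minimal illustration; the states, events, and reward values are made up and are not taken from ARM-FM.

```python
# Minimal reward-machine sketch (illustrative; not the ARM-FM implementation).
# A reward machine is a finite automaton: transitions are triggered by
# high-level events and each transition emits a scalar reward.

class RewardMachine:
    def __init__(self, transitions, initial_state, terminal_states):
        # transitions: {(machine_state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial_state
        self.terminal_states = terminal_states

    def step(self, event):
        """Advance the machine on an observed event; return the reward."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0  # unmatched event: stay in place, no reward

    @property
    def done(self):
        return self.state in self.terminal_states

# "Fetch the key, then open the door": reward only for the full sequence.
rm = RewardMachine(
    transitions={
        ("u0", "got_key"): ("u1", 0.0),
        ("u1", "opened_door"): ("u2", 1.0),
    },
    initial_state="u0",
    terminal_states={"u2"},
)

rewards = [rm.step(e) for e in ["opened_door", "got_key", "opened_door"]]
print(rewards, rm.done)  # [0.0, 0.0, 1.0] True
```

Note that opening the door before fetching the key yields nothing, which is exactly the kind of temporally extended objective that is awkward to express as a plain per-state reward function.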
NeurIPS 2025 (Spotlight)
Abstract
This work investigates why scaling deep reinforcement learning networks often degrades performance, identifying the interplay of non-stationarity and gradient pathologies from suboptimal architectures as key causes. Through empirical analysis, we propose simple, easily integrated interventions that stabilize gradient flow, enabling robust performance across varying depths and widths. Our approach is compatible with standard algorithms and achieves strong results across diverse agents and environments, offering a practical path toward scaling deep RL effectively.
ICLR 2024 · ALEO @ NeurIPS 2023
Abstract
We identify that any intrinsic reward function derived from count-based methods is non-stationary and hence induces a difficult objective to optimize for the agent. The key contribution of our work lies in transforming the original non-stationary rewards into stationary rewards through an augmented state representation. We introduce the Stationary Objectives For Exploration (SOFE) framework. Our experiments show that SOFE improves the agents' performance in challenging exploration problems, including sparse-reward tasks, pixel-based observations, 3D navigation, and procedurally generated environments.
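The augmentation idea can be illustrated in a toy tabular setting. This is a minimal sketch under assumed specifics (a 4-state space and a 1/sqrt(N) count bonus), not the paper's implementation: the count-based bonus changes as counts grow, so it is non-stationary in the state alone, but it is a fixed function of the state paired with the counts.

```python
import numpy as np

# Sketch of stationarity-via-augmentation (illustrative tabular example):
# a count bonus r(s) = 1/sqrt(N(s) + 1) drifts as N grows, so it is
# non-stationary in s alone -- but it IS a fixed function of (s, N).

n_states = 4
counts = np.zeros(n_states)

def intrinsic_reward(state):
    # Non-stationary when viewed as a function of `state` only.
    return 1.0 / np.sqrt(counts[state] + 1.0)

def augmented_observation(state):
    # Stationary view: the reward is a deterministic function of this vector.
    one_hot = np.eye(n_states)[state]
    return np.concatenate([one_hot, counts.copy()])

# Visiting state 0 twice: the reward as a function of the state changes...
r1 = intrinsic_reward(0); counts[0] += 1
r2 = intrinsic_reward(0); counts[0] += 1
print(r1, r2)  # 1.0, then ~0.707

# ...but each reward is fully determined by the augmented observation,
# so the optimization objective over augmented states is stationary.
```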
RLC 2024 · Oral @ IMOL, NeurIPS 2023
Abstract
Both surprise-minimizing and surprise-maximizing (curiosity) objectives for unsupervised reinforcement learning (RL) have been shown to be effective in different environments, depending on the environment's level of natural entropy. However, neither method performs well across all entropy regimes. In an effort to find a single surprise-based method that encourages emergent behaviors in any environment, we propose an agent that adapts its objective to the entropy conditions it faces by framing the choice as a multi-armed bandit problem. We devise a novel intrinsic feedback signal for the bandit that captures the agent's ability to control the entropy in its environment.
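The adaptive choice can be sketched as a two-armed bandit. The snippet below is an illustrative epsilon-greedy toy, not the paper's algorithm: the feedback values are fabricated stand-ins for the entropy-control signal.

```python
import random

# Hypothetical sketch of objective selection as a bandit problem
# (illustrative only; feedback numbers below are made up).
ARMS = ["minimize_surprise", "maximize_surprise"]

class EpsilonGreedyBandit:
    def __init__(self, eps=0.1, lr=0.1):
        self.values = {arm: 0.0 for arm in ARMS}
        self.eps, self.lr = eps, lr

    def select(self):
        if random.random() < self.eps:
            return random.choice(ARMS)  # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, feedback):
        # feedback stands in for a signal measuring how well the agent
        # controls the entropy of its observations under the chosen arm.
        self.values[arm] += self.lr * (feedback - self.values[arm])

random.seed(0)
bandit = EpsilonGreedyBandit()
# Toy regime where maximizing surprise gives more entropy control:
for _ in range(200):
    arm = bandit.select()
    feedback = 1.0 if arm == "maximize_surprise" else 0.2
    bandit.update(arm, feedback)
print(bandit.values)  # value of "maximize_surprise" dominates
```

In a real agent the feedback would be computed online from the environment rather than hard-coded, and the arms would correspond to the two surprise objectives driving the policy.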
TMLR 2024
Abstract
Extrinsic rewards can effectively guide reinforcement learning (RL) agents in specific tasks. However, extrinsic rewards frequently fall short in complex environments due to the significant human effort needed for their design and annotation. This limitation underscores the necessity for intrinsic rewards, which offer auxiliary and dense signals and can enable agents to learn in an unsupervised manner. We introduce RLeXplore, a unified, highly modularized, and plug-and-play framework offering reliable implementations of eight state-of-the-art intrinsic reward algorithms.
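The plug-and-play pattern can be illustrated with a hypothetical interface; this is not RLeXplore's actual API, and the class names and count-based bonus here are stand-ins for the real algorithms.

```python
import numpy as np

# Hypothetical plug-and-play interface (illustrative; not RLeXplore's API).
class IntrinsicReward:
    """Base class: any intrinsic reward module exposes a compute() hook."""
    def compute(self, obs, next_obs, action):
        raise NotImplementedError

class CountBonus(IntrinsicReward):
    """Toy count-based bonus standing in for a real intrinsic reward."""
    def __init__(self):
        self.counts = {}
    def compute(self, obs, next_obs, action):
        key = tuple(np.asarray(next_obs).ravel().tolist())
        self.counts[key] = self.counts.get(key, 0) + 1
        return 1.0 / np.sqrt(self.counts[key])

# Drop into any RL loop: total reward = extrinsic + beta * intrinsic.
bonus = CountBonus()
beta = 0.05
r_int = bonus.compute(obs=[0], next_obs=[1], action=0)
total_reward = 0.0 + beta * r_int
```

The point of such an interface is that swapping one intrinsic reward for another changes a single line in the training loop, which is what makes side-by-side comparisons of exploration methods reliable.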
URL Workshop @ ICML 2021
Abstract
Pre-training Reinforcement Learning agents in a task-agnostic manner has shown promising results. However, previous works still struggle to learn and discover meaningful skills in high-dimensional state spaces, such as pixel space. We approach the problem by leveraging unsupervised skill discovery and self-supervised learning of state representations.
Embodied AI Workshop @ CVPR 2021
Abstract
We tackle embodied visual navigation in a task-agnostic set-up by putting the focus on the unsupervised discovery of skills that provide a good coverage of states. Our approach intersects with empowerment: we address the reward-free skill discovery and learning tasks to discover what can be done in an environment and how.
Embodied AI Workshop @ CVPR 2021
Abstract
Defining a reward function in Reinforcement Learning (RL) is not always possible, or can be very costly. For this reason, there is great interest in training agents in a task-agnostic manner using intrinsic motivation and unsupervised techniques. We hypothesize that RL agents will also benefit from unsupervised pre-training with no extrinsic rewards, analogous to how humans mostly learn, especially in the early stages of life.
Empirical Software Engineering (EMSE) 2022
Abstract
The construction, evolution, and usage of complex artificial intelligence (AI) models demand expensive computational resources. While currently available high-performance computing environments support this complexity well, the deployment of AI models on mobile devices, an increasing trend, is challenging. Our objective is to systematically assess the trade-off between accuracy and complexity when deploying complex AI models to mobile devices, which have inherent resource limitations.
AI Engineering Workshop @ ICSE 2021
Abstract
When building Deep Learning (DL) models, data scientists and software engineers manage the trade-off between their accuracy, or any other suitable success criterion, and their complexity. In an environment with high computational power, a common practice is to make models deeper by designing more sophisticated architectures. However, in the context of mobile devices, which possess less computational power, keeping complexity under control is a must.
WebNLG Workshop @ EMNLP 2020
Abstract
This work establishes key guidelines on how, when, and which Machine Translation (MT) techniques are worth applying to the RDF-to-Text task. Not only do we apply and compare the most prominent MT architecture, the Transformer, but we also analyze state-of-the-art techniques such as Byte Pair Encoding and Back Translation to demonstrate improved generalization.
Abstract
In this report we apply the standardized documentation model proposed by Gebru et al. (2018) to the popular machine translation datasets EuroParl and News-Commentary. Within this documentation process, we adapt the original datasheet to the particular case of data consumers in the Machine Translation area. We also propose a repository for collecting the adapted datasheets in this research area.
MSc Thesis
Abstract
This thesis advances intrinsic motivation in reinforcement learning by tackling the instability of non-stationary rewards with SOFE, an approach that stabilizes exploration through augmented states; introducing S-Adapt, an adaptive entropy-based mechanism enabling emergent behaviors without extrinsic rewards; and developing RLeXplore, a standardized framework for consistent implementation of intrinsic reward methods.
BSc Thesis
Abstract
This work focuses on the self-acquirement of the fundamental task-agnostic knowledge available within an environment. The aim is to discover and learn baseline representations and behaviors that can later be useful for solving embodied visual navigation downstream tasks.
Projects
Centralized Control for Multi-Agent RL
Centralized control for multi-agent RL in a complex Real-Time-Strategy game. Final project for COMP579 — Reinforcement Learning at McGill (Prof. Doina Precup, Winter 2023).
xgenius
A command-line tool for managing remote jobs and containerized experiments across multiple clusters. Simplifies Docker/Singularity builds and SLURM job submission.
RLLTE
Long-Term Evolution Project of Reinforcement Learning.
Blokus RL Environment
An implementation of the Blokus board game environment using the Gymnasium framework, designed for training AI agents.
Ball Sort RL Environment
A Reinforcement Learning environment for the Ball Sort Color Puzzle Game based on OpenAI Gym, with baseline Deep RL models.
Wave Defense RL Environment
A Reinforcement Learning environment for a custom Wave Defense game based on OpenAI Gym, with baseline Deep RL models.
Genetic Neuroevolution
An implementation of a genetic neuroevolution algorithm in MATLAB that learns to play a custom game.
Demo Videos