Installation | Quick Start | Implementation Details | Add Dataset/Environment | Debug & Known Issues | License | Acknowledgement
This is the official implementation of "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning".
ReinFlow is a flexible policy gradient framework for fine-tuning flow matching policies at any denoising step.
How does it work?
- First, train flow policies using imitation learning (behavior cloning).
- Then, fine-tune them with online reinforcement learning using ReinFlow!
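At a high level, the recipe looks like the following sketch. Every name in it is a hypothetical placeholder for illustration, not the repository's actual API:

```python
# Hypothetical two-stage pipeline; all names below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class FlowPolicyStub:
    num_denoising_steps: int = 4  # ReinFlow works even with very few steps

def pretrain_bc(policy: FlowPolicyStub, demos: list) -> FlowPolicyStub:
    """Stage 1: behavior cloning -- fit the flow matching policy to demonstrations."""
    # ... flow matching regression on (observation, action) pairs ...
    return policy

def finetune_reinflow(policy: FlowPolicyStub, env_name: str) -> FlowPolicyStub:
    """Stage 2: online RL -- inject learned noise at each denoising step and
    optimize the resulting tractable log-likelihood with policy gradients."""
    # ... rollout, compute advantages, policy-gradient update ...
    return policy

policy = pretrain_bc(FlowPolicyStub(), demos=[])
policy = finetune_reinflow(policy, env_name="FrankaKitchen-v1")
```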
Supports:
- 1-Rectified Flow
- Shortcut Models
- Any other policy defined by ODEs (in principle)
Empirical Results: ReinFlow achieves strong performance across a variety of robotic tasks:
- Legged locomotion (OpenAI Gym)
- State-based manipulation (Franka Kitchen)
- Visual manipulation (Robomimic)
Key Innovation: ReinFlow trains a noise injection network end-to-end:
- Makes policy probabilities tractable, even with very few denoising steps (e.g., 4, 2, or 1)
- Robust to discretization and Monte Carlo approximation errors
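Conceptually, the injected noise turns each deterministic ODE (Euler) denoising step into a Gaussian transition whose log-probability has a closed form. Below is a minimal, self-contained PyTorch sketch of that idea; the network shapes are illustrative and observation conditioning is omitted for brevity, so this is not the repository's actual implementation:

```python
# Minimal sketch (illustrative, not ReinFlow's actual code): one noise-injected
# denoising step, where a learned network sets the noise scale and the resulting
# Gaussian transition has a tractable log-probability.
import torch
import torch.nn as nn

act_dim, hidden = 7, 64  # illustrative sizes
velocity_net = nn.Sequential(nn.Linear(act_dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, act_dim))
noise_net = nn.Sequential(nn.Linear(act_dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, act_dim))  # outputs log-std

def denoise_step(a_t: torch.Tensor, t: torch.Tensor, dt: float):
    """One stochastic Euler step: mean from the flow's velocity, std from the noise net."""
    inp = torch.cat([a_t, t], dim=-1)
    mean = a_t + velocity_net(inp) * dt           # deterministic ODE (Euler) update
    std = noise_net(inp).exp()                    # learned noise scale (injected end-to-end)
    a_next = mean + std * torch.randn_like(mean)  # sampled next iterate
    # Gaussian log-prob of this transition -- this is what makes the policy's
    # likelihood tractable for policy-gradient RL.
    log_prob = torch.distributions.Normal(mean, std).log_prob(a_next).sum(-1)
    return a_next, log_prob

# Chaining K steps and summing the per-step log-probs gives the log-likelihood
# of the whole denoising chain, even for very few steps (e.g., K = 1, 2, or 4).
K = 4
a, total_logp = torch.randn(1, act_dim), 0.0
for k in range(K):
    t = torch.full((1, 1), k / K)
    a, logp = denoise_step(a, t, dt=1 / K)
    total_logp = total_logp + logp
```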
Learn more on our project website or check out the arXiv paper.
- [2025/07/30] Fixed the rendering bug in Robomimic. Now supports rendering at 1080p resolution.
- [2025/07/29] Added a tutorial to the docs on how to record videos during evaluation.
- [2025/06/14] Updated the webpage with a detailed explanation of the algorithm design.
- [2025/05/28] Paper is posted on arXiv!
Please follow the steps in installation/reinflow-setup.md.
To fully reproduce our experiments, please refer to ReproduceExps.md.
To download our training data and reproduce the plots in the paper, please refer to ReproduceFigs.md.
Please refer to Implement.md for descriptions of key hyperparameters of FQL, DPPO, and ReinFlow.
Please refer to Custom.md.
Please refer to KnownIssues.md for solutions to errors you may encounter.
- Support fine-tuning Mean Flow with online RL
- Possibly open-source the WandB projects via a corporate account (the data is currently released in .csv format).
- Replace figures with videos in the drop-down menus for specific tasks on the webpage.
This repository is released under the MIT license. See LICENSE. If you use our code, we would appreciate it if you include the license at the top of your scripts.
This repository was developed from multiple open-source projects. Major references include:
- TorchCFM, Tong et al.: Conditional flow-matching repository.
- Shortcut Models, Frans et al.: One Step Diffusion via Shortcut Models.
- DPPO, Ren et al.: DPPO official implementation.
For more references, please refer to Acknowledgement.md.