Simulation plays a crucial role in assessing autonomous driving systems, where generating realistic multi-agent behaviors is a key requirement. The primary challenges in multi-agent simulation are behavioral multimodality and closed-loop distributional shift. In this study, we revisit mixture models for generating multimodal agent behaviors, a formulation that covers the mainstream approaches, including continuous mixture models and GPT-like discrete models. Furthermore, we introduce a closed-loop sample generation approach tailored for mixture models to mitigate distributional shift. Within the unified mixture model (UniMM) framework, we identify critical configurations from both the model and data perspectives. We conduct a systematic examination of various model configurations, including positive component matching, continuous regression, prediction horizon, and the number of components. Moreover, our investigation into the data configuration highlights the pivotal role of closed-loop samples in achieving realistic simulations. To extend the benefits of closed-loop samples to a broader range of mixture models, we further address the shortcut learning and off-policy learning issues. Leveraging insights from this exploration, the distinct variants proposed within the UniMM framework, including discrete, anchor-free, and anchor-based models, all achieve state-of-the-art performance on the WOSAC benchmark.
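To make the mixture-model formulation concrete, here is a minimal sketch of sampling one future trajectory from a mixture head's output. This is purely illustrative and not the UniMM implementation; all names, shapes, and the Gaussian parameterization are assumptions for the example.

```python
import numpy as np

def sample_trajectory(logits, means, scales, rng=None):
    """Sample one future trajectory from a mixture model's output.

    logits: (K,) unnormalized mixture weights over K components (modes)
    means:  (K, T, 2) per-component mean trajectory over T steps (x, y)
    scales: (K, T, 2) per-component standard deviations
    """
    rng = rng or np.random.default_rng()
    # Softmax over components: the categorical choice of a component
    # captures behavioral multimodality (e.g., yield vs. pass).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    k = rng.choice(len(logits), p=probs)
    # Continuous regression around the chosen component's mean trajectory.
    return means[k] + scales[k] * rng.standard_normal(means[k].shape)

# Toy example: K = 3 components, T = 5 future steps
K, T = 3, 5
logits = np.array([2.0, 0.5, -1.0])
means = np.zeros((K, T, 2))
scales = np.full((K, T, 2), 0.1)
traj = sample_trajectory(logits, means, scales)
print(traj.shape)  # (5, 2)
```

In a closed-loop rollout, a sampled trajectory segment would be executed before re-querying the model, which is where the distributional-shift issue the paper targets arises.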
[TBA] Code release is coming soon. Stay tuned!
[2025-06] We're thrilled to share that UniMM has received an Honorable Mention in the Waymo Open Sim Agents Challenge (WOSAC) 2025! Huge thanks to the organizers and congratulations to all the amazing teams!
[2025-01] The paper has been released on arXiv.
If you find this work useful in your research, please consider citing us:
```bibtex
@misc{lin2025revisitmixturemodelsmultiagent,
    title={Revisit Mixture Models for Multi-Agent Simulation: Experimental Study within a Unified Framework},
    author={Longzhong Lin and Xuewu Lin and Kechun Xu and Haojian Lu and Lichao Huang and Rong Xiong and Yue Wang},
    year={2025},
    eprint={2501.17015},
    archivePrefix={arXiv},
    primaryClass={cs.AI},
    url={https://arxiv.org/abs/2501.17015},
}
```