This repository contains the official PyTorch implementations of training and testing of:
- Multi-V2X: A large-scale, multi-modal, multi-penetration-rate dataset for cooperative perception. Learn more here.
- CoRTSG: The first driving-safety-oriented testing scenario generation framework for cooperative perception in V2X environments. The results cover 11 risky functional scenarios and 17,490 concrete scenarios. Learn more here.
Dataset Support
- OPV2V
- V2XSet
- Multi-V2X
- CoRTSG
- V2V4Real
- DAIR-V2X
Supported SOTA Cooperative Perception Methods
- Where2comm [NeurIPS2022]
- V2X-ViT [ECCV2022]
- Late Fusion
- Early Fusion
Intensity Simulation
- CARLA's default point cloud intensity simulation (so that models trained on xyzi-channel point clouds can be directly applied to xyz-channel point clouds); a sketch follows below.
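As a rough illustration, the sketch below appends a simulated intensity channel to an xyz point cloud following CARLA's documented default model, where intensity decays exponentially with range (I = e^(-a·d), with `atmosphere_attenuation_rate` a defaulting to 0.004). The function name and exact usage are illustrative, not this repository's implementation:

```python
import numpy as np

def simulate_intensity(points_xyz: np.ndarray,
                       attenuation_rate: float = 0.004) -> np.ndarray:
    """Append a simulated intensity channel to an (N, 3) point cloud.

    Mirrors CARLA's default model: I = exp(-a * d), where d is the
    distance from the sensor origin and a is the attenuation rate
    (CARLA's atmosphere_attenuation_rate defaults to 0.004).
    """
    dist = np.linalg.norm(points_xyz[:, :3], axis=1)  # per-point range
    intensity = np.exp(-attenuation_rate * dist)
    # Stack into an (N, 4) xyzi cloud usable by xyzi-trained models.
    return np.hstack([points_xyz, intensity[:, None]])
```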
Please refer to `installation.md` for detailed documentation.
Download one or more of the following datasets:
- OPV2V on Google Drive
- V2XSet on Google Drive
- Multi-V2X on OpenDataLab (search "Multi-V2X" on the datasets page)
- CoRTSG on OpenDataLab (search "CoRTSG" on the datasets page)
We adopt a setting similar to OpenCOOD, which uses YAML files to configure all training parameters. To train your own model from scratch or continue training from a checkpoint, run the following commands:
```bash
cd OpenCOOD
python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
```
Arguments explanation:
- `hypes_yaml`: the path of the training configuration file, e.g., `opencood/hypes_yaml/early_fusion.yaml`.
  - To train models on OPV2V, V2XSet and V2V4Real, see Tutorial 1: Config System to learn more.
  - To train models on Multi-V2X, see Tutorial 1: Config System (Multi-V2X) to learn more.
- `model_dir` (optional): the path of the checkpoints. This is used for fine-tuning trained models. When `model_dir` is given, the trainer will discard `hypes_yaml` and load the `config.yaml` in the checkpoint folder (see the sketch after this list).
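The precedence between `hypes_yaml` and `model_dir` can be pictured with a minimal sketch (not the exact OpenCOOD code; `load_hypes` is a hypothetical helper):

```python
import os
from typing import Optional

import yaml

def load_hypes(hypes_yaml: str, model_dir: Optional[str] = None) -> dict:
    """Resolve the training configuration as described above."""
    if model_dir is not None:
        # Resuming or fine-tuning: the config.yaml saved in the
        # checkpoint folder takes precedence over --hypes_yaml.
        hypes_yaml = os.path.join(model_dir, "config.yaml")
    with open(hypes_yaml, "r") as f:
        return yaml.safe_load(f)
```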
To train on multiple GPUs, run:
```bash
cd OpenCOOD
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
```
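For reference, `--use_env` tells the launcher to pass each process's rank through environment variables (`LOCAL_RANK`, `RANK`, `WORLD_SIZE`) instead of a `--local_rank` argument; a distributed training script then typically initializes itself roughly like this (a generic PyTorch sketch, not the repository's exact code):

```python
import os

import torch
import torch.distributed as dist

# With --use_env, torch.distributed.launch exports LOCAL_RANK instead of
# passing --local_rank, so the script reads it from the environment.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
```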
To test a trained model, run:
```bash
cd OpenCOOD
python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} --dataset_format ${DATASET_FORMAT} [--dataset_root ${DATASET_ROOT}]
```
Arguments explanation:
- `model_dir`: the path of the checkpoints.
- `fusion_method`: `"no"`, `"late"`, `"early"` and `"intermediate"` supported.
- `dataset_format`: `"test"`, `"opv2v"` and `"multi-v2x"` supported.
  - `"opv2v"`: used for OPV2V, V2XSet and V2V4Real.
  - `"multi-v2x"`: used for Multi-V2X.
  - `"test"`: used for CoRTSG.
- `dataset_root` (optional): the folder of your dataset. If set, `root_dir` in `config.yaml` will be overwritten. For testing on CoRTSG, you should specify the directory of a functional scenario as `dataset_root`; see the example script after this list.
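Since each CoRTSG test run takes a single functional scenario directory as `dataset_root`, sweeping all scenarios can be scripted. A hypothetical driver, run from the `OpenCOOD` directory (the dataset root, checkpoint path and fusion method below are placeholders, and it assumes one subdirectory per functional scenario):

```python
import subprocess
from pathlib import Path

CORTSG_ROOT = Path("datasets/CoRTSG")   # assumption: your CoRTSG download
CHECKPOINT = "opencood/logs/my_model"   # assumption: your checkpoint folder

for scenario_dir in sorted(p for p in CORTSG_ROOT.iterdir() if p.is_dir()):
    # One inference run per functional scenario, as described above.
    subprocess.run(
        ["python", "opencood/tools/inference.py",
         "--model_dir", CHECKPOINT,
         "--fusion_method", "intermediate",
         "--dataset_format", "test",
         "--dataset_root", str(scenario_dir)],
        check=True,
    )
```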
Thanks to the excellent cooperative perception codebase OpenCOOD.
If you have any problem with this code, feel free to open an issue.
If you find the Multi-V2X dataset useful in your research, feel free to cite:
```bibtex
@article{rongsong2024multiv2x,
  title={Multi-V2X: A Large Scale Multi-modal Multi-penetration-rate Dataset for Cooperative Perception},
  author={Rongsong Li and Xin Pei},
  year={2024},
  eprint={2409.04980},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2409.04980},
}
```
If you find CoRTSG useful in your research, feel free to cite:
```bibtex
@article{rongsong2024cortsg,
  title={CoRTSG: A general and effective framework of risky testing scenario generation for cooperative perception in mixed traffic},
  author={Rongsong Li and Xin Pei and Lu Xing},
  year={2024}
}
```