This repository contains the download link and usage instructions for the OVT-B dataset.
🔥 [26/09/24] We are pleased to announce that our paper has been accepted to NeurIPS 2024 Datasets and Benchmarks Track.
🔥 [02/09/24] We have uploaded the source code of several trackers implemented on our benchmark.
You can download the dataset from the following links: BaiDuYun (code: 8yy3) and Google Drive.
The OVT-B dataset can be used as a new benchmark for research on open-vocabulary multi-object tracking (OVMOT).
Instructions on how to use the dataset:
- Download the dataset and annotations.
- Extract the files.
- Copy the `CLASS`, `base_id`, and `novel_id` from ovtb_classname.py and add them to the classname.py file under the roi_head folder of the OV detector.
- Modify `data_root` in the configs to the path where the OVT-B folder is located. Change `ann_file` to the path of ovtb_ann.json, `img_prefix` to `data_root + 'OVT-B'`, and `prompt_path` to the path of ovtb_class.pth (see the config sketch below).
- Then test/evaluate with TAO-type/COCO-type dataset evaluation tools/code (a sanity-check sketch follows the directory layout below).
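For reference, here is a minimal sketch of what the dataset portion of such a config could look like. It assumes an MMDetection-style Python config; only `data_root`, `ann_file`, `img_prefix`, and `prompt_path` come from the steps above, while the dataset type and the surrounding structure are placeholders that must match your own detector/tracker config.

```python
# Minimal sketch of the dataset portion of an MMDetection-style config.
# Only data_root, ann_file, img_prefix, and prompt_path follow the
# instructions above; the dataset type and other fields are placeholders.
data_root = '/path/to/parent/of/OVT-B/'  # directory that contains the OVT-B folder

data = dict(
    test=dict(
        type='TaoDataset',  # placeholder: use your tracker's dataset class
        ann_file=data_root + 'ovtb_ann.json',
        img_prefix=data_root + 'OVT-B',
        prompt_path=data_root + 'ovtb_class.pth',
    )
)
```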
├── OVT-B
│ ├── AnimalTrack
│ │ ├── subdir
│ │ │ ├── img.jpg
│ │ │ └── ...
│ ├── GMOT-40
│ ├── ImageNet-VID
│ ├── LVVIS
│ ├── OVIS
│ ├── UVO
│ ├── YouTube-VIS-2021
├── ovtb_ann.json
├── ovtb_class.pth
├── ovtb_classname.py
├── ovtb_prompt.pth
└── OVTB-format.txt
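Before running a TAO-type/COCO-type evaluation, a quick sanity check of ovtb_ann.json can catch path or download issues early. This is only a sketch: the exact schema is described in OVTB-format.txt, and no particular keys are assumed here.

```python
import json

# Quick sanity check of the annotation file before evaluation.
# Only the top-level structure is inspected, so no specific keys are assumed.
with open('ovtb_ann.json', 'r') as f:
    ann = json.load(f)

if isinstance(ann, dict):
    for key, value in ann.items():
        size = len(value) if hasattr(value, '__len__') else value
        print(f'{key}: {size}')
else:
    print(type(ann), len(ann))
```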
For detailed information on the baseline method, please refer to OVTrack.
If you use this dataset in your research, please cite it as follows:
@article{haiji2024ovtb,
  title={OVT-B: A New Large-Scale Benchmark for Open-Vocabulary Multi-Object Tracking},
  author={Liang, Haiji and Han, Ruize},
  journal={arXiv preprint arXiv:2410.17534},
  year={2024}
}
- Thanks to TETA for providing the evaluation code.
- Thanks to DetPro for providing the PyTorch reimplementation of ViLD.
- Thanks to OVTrack for providing the baseline of OVMOT.
- Thanks to MMTracking for providing the code for OC-SORT, ByteTrack, and StrongSORT.