An official implementation of "HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection" (CVPR 2021) in PyTorch.

PyTorch implementation of "HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection"


This is the official implementation of the paper "HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection" (CVPR 2021).

Our code is mainly based on OpenPCDet. We also plan to release a version based on PointPillars. For more information, check out the project site [website] and the paper [PDF].

Dependencies

  • Python >= 3.6
  • PyTorch >= 1.4.0
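
A quick way to confirm your environment meets these requirements before installing (a minimal sketch; the version floors come from the list above, and the `torch` import is skipped gracefully if PyTorch is not installed yet):

```python
import sys

def meets_minimum(version, floor):
    """Numeric comparison of dotted version strings, e.g. '1.10.2' >= '1.4.0'."""
    parse = lambda v: tuple(int(x) for x in v.split("+")[0].split(".")[:3])
    return parse(version) >= parse(floor)

# Python check (>= 3.6, per the dependency list above)
assert sys.version_info >= (3, 6), "Python >= 3.6 required"

# PyTorch check (>= 1.4.0); skipped if torch is not installed yet
try:
    import torch
    assert meets_minimum(torch.__version__, "1.4.0"), "PyTorch >= 1.4.0 required"
except ImportError:
    print("torch not installed yet")
```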

Update

  • 20/06/21: First update

Installation

  • Clone this repo and follow the steps below (or follow the installation steps in OpenPCDet).
  1. Clone this repository:

    git clone https://github.com/cvlab-yonsei/HVPR.git
  2. Install the dependent libraries:

    pip install -r requirements.txt
  3. Install the SparseConv library from spconv.

  4. Install pcdet library:

    python setup.py develop
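
After completing the steps above, you can verify that the key libraries are importable (a minimal sketch; the package names `torch`, `spconv`, and `pcdet` follow the installation steps above):

```python
import importlib.util

def check_installed(packages):
    """Map each package name to whether it is importable in this environment."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

# Package names assumed from the installation steps above.
status = check_installed(["torch", "spconv", "pcdet"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```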

Datasets

  • KITTI 3D Object Detection
  1. Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows (the road planes can be downloaded from [road plane]; they are optional and used for data augmentation during training):
    HVPR
    ├── data
    │   ├── kitti
    │   │   │── ImageSets
    │   │   │── training
    │   │   │   ├──calib & velodyne & label_2 & image_2 & (optional: planes)
    │   │   │── testing
    │   │   │   ├──calib & velodyne & image_2
    ├── pcdet
    ├── tools
  2. Generate the data infos by running the following command:
    python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
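
Before generating the infos, you can sanity-check that your directory tree matches the layout above (a minimal sketch; run it from the HVPR root, and note that the optional `planes` directory is deliberately not checked):

```python
from pathlib import Path

# Required subdirectories, taken from the layout above (`planes` is optional).
REQUIRED = [
    "data/kitti/ImageSets",
    "data/kitti/training/calib",
    "data/kitti/training/velodyne",
    "data/kitti/training/label_2",
    "data/kitti/training/image_2",
    "data/kitti/testing/calib",
    "data/kitti/testing/velodyne",
    "data/kitti/testing/image_2",
]

def missing_dirs(root):
    """Return the required KITTI subdirectories missing under `root`."""
    return [p for p in REQUIRED if not (Path(root) / p).is_dir()]
```

Calling `missing_dirs(".")` from the HVPR root should return an empty list when the dataset is organized correctly.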

Training

  • The config files are in tools/cfgs/kitti_models, and you can easily train your own model like:
    cd tools
    sh scripts/train_hvpr.sh 
  • You can freely define parameters with your own settings like:
    cd tools
    sh scripts/train_hvpr.sh --gpus 1 --result_path 'your_dataset_directory' --exp_dir 'your_log_directory'

Evaluation

  • Test your own model:
    cd tools
    sh scripts/eval_hvpr.sh

Pre-trained model

Bibtex

@article{noh2021hvpr,
  title={HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection},
  author={Noh, Jongyoun and Lee, Sanghoon and Ham, Bumsub},
  journal={arXiv preprint arXiv:2104.00902},
  year={2021}
}

References

Our work is mainly built on OpenPCDet codebase. Portions of our code are also borrowed from spconv, MemAE, and CBAM. Thanks to the authors!
