A PyTorch pose estimation framework that I use for my own research. The main part of it focuses on reproducing OpenPose, with mAP similar to the results reported in their paper. You can also find the code for my HSI paper in `train_offset` and `train_mask`.
- PyTorch 1.2.0
- Torchvision 0.4.0
- TensorboardX

This repo uses an environment created with Anaconda; the CUDA toolkit and cuDNN are installed automatically by conda.
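If the environment is set up correctly, a quick sanity check like the following should pass (a minimal sketch; the version pins simply mirror the requirements listed above):

```python
# Quick environment sanity check (version pins follow the requirements above).
import torch
import torchvision

assert torch.__version__.startswith("1.2"), "expected PyTorch 1.2.x"
assert torchvision.__version__.startswith("0.4"), "expected Torchvision 0.4.x"
print("CUDA available:", torch.cuda.is_available())
print("cuDNN enabled:", torch.backends.cudnn.enabled)
```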
- For testing, download our pretrained model from Dropbox. You also need to prepare the data with `bash get_data.sh`.
- Open `evaluate.py`; all parameters related to evaluation are listed there. `1160` means the official OpenPose small validation set, which was used in the OpenPose paper. `others` means COCO val 2017, which contains 2693 or 5000 images, chosen by yourself. `scale` should be `[0.5,1.0,1.5,2.0]` to get the maximum accuracy, the same as OpenPose (a sketch of this multi-scale averaging is shown after this list).
- Run `python -m Pytorch_Pose_Estimation_Framework.evaluate --val_type=1160 --network=CMU_old --scale=0.5,1.0,1.5,2.0` to get the mAP result.
- If you want to run on your own images, just modify the `main` function in `evaluate.py`; only a few paths need to be changed.
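The exact multi-scale logic lives in `evaluate.py`; the sketch below only illustrates the general idea behind the `--scale` option (resize the input, run the network, resize the output back, average over scales). The `model` and `image` names are placeholders, not the repo's actual API.

```python
# Illustrative multi-scale inference: average network outputs over several
# input scales, resampled back to the original resolution.
import torch
import torch.nn.functional as F

def multi_scale_heatmaps(model, image, scales=(0.5, 1.0, 1.5, 2.0)):
    """image: (1, 3, H, W) float tensor; model returns a (1, C, h, w) map."""
    _, _, H, W = image.shape
    accumulated = None
    with torch.no_grad():
        for s in scales:
            resized = F.interpolate(image, scale_factor=s, mode="bilinear",
                                    align_corners=False)
            out = model(resized)
            # Bring the prediction back to the original image resolution.
            out = F.interpolate(out, size=(H, W), mode="bilinear",
                                align_corners=False)
            accumulated = out if accumulated is None else accumulated + out
    return accumulated / len(scales)
```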
- All parameters related to training are in `train_op_baseline.py`.
- Prepare the data with `bash get_data.sh`.
- Generate the mask files by running `generate_mask.py`.
- Generate the HDF5 file for training by running `generate_hdf5.py`; about 200 GB of disk space is needed.
- Check the data path in `datasets/dataloader/cmu_h5_mainloader` (an illustrative loader sketch follows this list).
- Run `python -m Pytorch_Pose_Estimation_Framework.train_op_baseline --network=CMU_old`.
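For reference, an HDF5-backed loader typically looks like the sketch below. The real implementation is `datasets/dataloader/cmu_h5_mainloader`; the dataset names inside the file (`images`, `heatmaps`, `pafs`) are assumptions for illustration, not the repo's actual schema.

```python
# Illustrative sketch of an HDF5-backed training dataset. The dataset keys
# used here ("images", "heatmaps", "pafs") are assumed for illustration.
import h5py
import torch
from torch.utils.data import Dataset

class H5PoseDataset(Dataset):
    def __init__(self, h5_path):
        self.h5_path = h5_path
        self.h5 = None
        with h5py.File(h5_path, "r") as f:
            self.length = len(f["images"])

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # Open lazily so each DataLoader worker gets its own file handle.
        if self.h5 is None:
            self.h5 = h5py.File(self.h5_path, "r")
        image = torch.from_numpy(self.h5["images"][idx]).float()
        heatmap = torch.from_numpy(self.h5["heatmaps"][idx]).float()
        paf = torch.from_numpy(self.h5["pafs"][idx]).float()
        return image, heatmap, paf
```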
We empirically trained the model for 55 epochs
and achieved comparable performance to the results reported in the original paper.
We also compared against the officially released Caffe model by Zhe Cao.
| Method | Validation | AP |
|---|---|---|
| OpenPose paper | COCO2014-Val-1k | 58.4 |
| OpenPose model | COCO2014-Val-1k | 56.3 |
| This repo | COCO2014-Val-1k | 58.4 |
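The AP numbers above are standard COCO keypoint metrics. For reference, they can be computed with `pycocotools` roughly as follows (`evaluate.py` presumably does something equivalent internally; the file paths here are placeholders, not paths used by this repo):

```python
# Standard COCO keypoint evaluation with pycocotools; file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/person_keypoints_val2014.json")
coco_dt = coco_gt.loadRes("predictions.json")  # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="keypoints")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP / AP50 / AP75 ...
```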
This repo is based upon the work of @kevinlin311tw and @tensorboy. Thanks to @kevinlin311tw, who was very nice to communicate with.
Please cite these papers in your publications if this repo helps your research:
@inproceedings{cao2017realtime,
author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
booktitle = {CVPR},
title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
year = {2017}
}
@inproceedings{liu2020resolution,
title={Resolution Irrelevant Encoding and Difficulty Balanced Loss Based Network Independent Supervision for Multi-Person Pose Estimation},
author={Liu, Haiyang and Luo, Dingli and Du, Songlin and Ikenaga, Takeshi},
booktitle={2020 13th International Conference on Human System Interaction (HSI)},
pages={112--117},
year={2020},
organization={IEEE}
}