**Update (15/10/2020):** Please check out our recent work published in IEEE TII (paper, code), which yields better results than the MsvNet.
Created by Peizhi Shi at the University of Huddersfield.
Acknowledgements: We would like to thank Zhibo Zhang for providing the dataset and source code of FeatureNet on GitHub.
Please note that the code is NOT intended for use in military, nuclear, missile, or weaponry applications, or in activities involving animal slaughter, meat production, or any other scenario where human or animal life, or property, could be at risk. We kindly ask you to refrain from applying the code in such contexts.
The MsvNet is a novel learning-based feature recognition method that uses a multiple sectional view representation. At the time of its release, the MsvNet achieved state-of-the-art single feature recognition results using only a few training samples, and outperformed the state-of-the-art learning-based multi-feature recognition method in terms of recognition performance.

This repository provides the source code of the MsvNet for both single and multi-feature recognition, a reimplemented version of the FeatureNet for multi-feature recognition, and a benchmark dataset containing 1,000 3D models with multiple features.
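To give a rough sense of what a multiple sectional view representation involves, the sketch below slices a voxelised model at a few evenly spaced positions along each axis and stacks the resulting 2D sections. It is only an illustration of the idea under our own simplifying assumptions; the view sampling and rendering strategy actually used by the MsvNet is described in the paper, and the function name `sectional_views` is ours.

```python
import numpy as np

def sectional_views(voxels, sections_per_axis=4):
    """Cut a cubic occupancy grid along x, y and z at evenly spaced positions.

    Illustrative only -- the MsvNet paper defines its own view-sampling scheme.
    """
    d = voxels.shape[0]
    # Interior cutting positions (endpoints excluded so sections are non-trivial).
    positions = np.linspace(0, d - 1, sections_per_axis + 2, dtype=int)[1:-1]
    views = []
    for axis in range(3):
        for p in positions:
            views.append(np.take(voxels, p, axis=axis).astype(np.float32))
    return np.stack(views)              # shape: (3 * sections_per_axis, d, d)

# Example: 12 sections of a random 64^3 occupancy grid.
views = sectional_views(np.random.rand(64, 64, 64) > 0.5)
print(views.shape)                      # (12, 64, 64)
```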
If this project is useful to you, please consider citing our paper:
```bibtex
@article{shi2020novel,
  title={A novel learning-based feature recognition method using multiple sectional view representation},
  author={Shi, Peizhi and Qi, Qunfen and Qin, Yuchu and Scott, Paul J and Jiang, Xiangqian},
  journal={Journal of Intelligent Manufacturing},
  volume={31},
  number={5},
  pages={1291--1309},
  year={2020},
  publisher={Springer}
}
```
This is a peer-reviewed paper, which is available online.

The code was developed and tested with the following packages:
- CUDA (10.0.130)
- cupy-cuda100 (6.2.0)
- numpy (1.17.4)
- Pillow (6.2.1)
- python (3.6.8)
- pyvista (0.22.4)
- scikit-image (0.16.2)
- scipy (1.3.3)
- selectivesearch (0.4)
- tensorflow-estimator (1.14.0)
- tensorflow-gpu (1.14.0)
- torch (1.1.0)
- torchvision (0.3.0)
All the experiments mentioned in our paper were conducted on Ubuntu 18.04 under the above configuration. If you run the code on Windows or under a different configuration, slightly different results might be obtained.
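If you want to confirm that your environment matches the pinned versions before running anything, a quick check along the following lines (our own helper, not part of the repository) prints the installed versions of the key packages:

```python
import importlib

# Packages pinned above (module names differ from the pip package names
# for scikit-image and Pillow).
pinned = [("numpy", "1.17.4"), ("torch", "1.1.0"), ("torchvision", "0.3.0"),
          ("tensorflow", "1.14.0"), ("pyvista", "0.22.4"),
          ("skimage", "0.16.2"), ("PIL", "6.2.1")]

for module_name, expected in pinned:
    module = importlib.import_module(module_name)
    installed = getattr(module, "__version__", "unknown")
    status = "ok" if installed == expected else "differs"
    print(f"{module_name}: installed {installed}, expected {expected} ({status})")
```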
To train the MsvNet for single feature recognition:

- Get the MsvNet source code by cloning the repository: `git clone https://github.com/PeizhiShi/MsvNet.git`.
- Download the FeatureNet dataset and convert the models into voxel models via binvox. The filename format is `label_index.binvox`. Put all the `*.binvox` files in the same folder `data/64/`, where `64` refers to the resolution of the voxel models. This folder should contain 24,000 `*.binvox` files. Please note that there are some unlabelled/mislabelled files in category 8 (rectangular_blind_slot) and category 12 (triangular_blind_step); correct these filenames before moving the files into the folder. A simple sanity check is sketched after this list.
- Run `python single_train.py` to train the neural network. Please note that data augmentation is employed in this experiment, so the training accuracy is lower than the validation/test accuracy.
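The following is a minimal sketch of a `.binvox` reader together with a quick sanity check of the `data/64/` folder. It assumes the standard 24-class FeatureNet split (24,000 files, i.e. 1,000 per label); the helper `read_binvox` and the per-class count are ours and are not part of the repository.

```python
import glob
import os
from collections import Counter

import numpy as np

def read_binvox(path):
    """Minimal reader for the run-length-encoded .binvox format."""
    with open(path, "rb") as f:
        assert f.readline().strip().startswith(b"#binvox")
        dims = [int(v) for v in f.readline().split()[1:]]    # e.g. [64, 64, 64]
        f.readline()                                          # translate (unused)
        f.readline()                                          # scale (unused)
        f.readline()                                          # "data"
        raw = np.frombuffer(f.read(), dtype=np.uint8)
    values, counts = raw[::2], raw[1::2]                      # (value, run length) pairs
    voxels = np.repeat(values, counts).astype(bool).reshape(dims)
    return np.transpose(voxels, (0, 2, 1))                    # binvox stores x-z-y; return x-y-z

# Sanity check: 24 classes x 1,000 models = 24,000 files named label_index.binvox.
files = glob.glob("data/64/*.binvox")
print("total files:", len(files))
labels = Counter(os.path.basename(p).split("_")[0] for p in files)
for label in sorted(labels, key=int):
    note = "" if labels[label] == 1000 else "  <-- check labelling"
    print(f"class {label}: {labels[label]} files{note}")
```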
To visualize a 3D model in the benchmark multi-feature dataset:

- Get the MsvNet source code by cloning the repository: `git clone https://github.com/PeizhiShi/MsvNet.git`.
- Download the benchmark multi-feature dataset and put it in the folder `data/`.
- Run `python visualize.py` to visualize a 3D model in this dataset. An alternative rendering sketch is given after this list.
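As a lightweight alternative to `visualize.py`, the sketch below renders a single model with pyvista, assuming the downloaded models are mesh files that `pyvista.read` understands (e.g. STL); the filename is hypothetical.

```python
import pyvista as pv

# Hypothetical path -- replace with an actual file from the downloaded dataset.
mesh = pv.read("data/some_model.stl")
mesh.plot(show_edges=True)
```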
To test the MsvNet and FeatureNet for multi-feature recognition:

- Get the MsvNet source code by cloning the repository: `git clone https://github.com/PeizhiShi/MsvNet.git`.
- Download the benchmark multi-feature dataset and put it in the folder `data/`.
- Download the pretrained optimal MsvNet and FeatureNet models, and put them in the folder `models/`. These models were trained under the optimal settings (instead of the near-optimal settings) mentioned in our paper, and can reproduce the multi-feature recognition results reported there.
- Run `python multi_test.py` to test the performance of the MsvNet and FeatureNet for multi-feature recognition. Please note that the multi-feature recognition part of the FeatureNet is a reimplementation; the watershed algorithm with its default settings is employed (a generic sketch follows this list). Detailed information about the FeatureNet can be found in its original paper.
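For reference, a generic 3D watershed segmentation with default parameters can be sketched as follows. It is not the repository's exact implementation; the `solid` occupancy grid is a placeholder input, and scikit-image 0.16 still exposes `watershed` under `skimage.morphology`.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.morphology import watershed   # newer releases: skimage.segmentation.watershed

# Placeholder: a binary occupancy grid, e.g. loaded with a .binvox reader.
solid = np.zeros((64, 64, 64), dtype=bool)
solid[10:30, 10:30, 10:30] = True
solid[35:55, 35:55, 35:55] = True

# Distance transform inside the solid region, peaks as markers, then watershed.
distance = ndi.distance_transform_edt(solid)
peaks = peak_local_max(distance, indices=False, labels=solid)
markers, _ = ndi.label(peaks)
segments = watershed(-distance, markers, mask=solid)
print("segments found:", segments.max())
```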
If you have any questions about the code, please feel free to contact me (p.shi@leeds.ac.uk).