YOLOv5 version: tag v2.0; Python version: 3.7.5
```
coco_data                               # root directory
├── train2017                           # training-set images, about 118287
├── val2017                             # validation-set images, about 5000
└── annotations                         # annotation directory
    ├── instances_train2017.json        # training-set annotations for detection and segmentation
    ├── instances_val2017.json          # validation-set annotations for detection and segmentation
    ├── captions_train2017.json
    ├── captions_val2017.json
    ├── person_keypoints_train2017.json
    └── person_keypoints_val2017.json
```
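Before training, it can save time to verify that a dataset root actually matches the layout above. The `check_coco_layout` helper below is a small illustrative sketch (not part of the repository); it only checks the entries that detection training needs.

```python
import os

# Entries expected under the coco_data root (detection/segmentation only).
EXPECTED = [
    "train2017",
    "val2017",
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
]

def check_coco_layout(root):
    """Return the list of expected paths that are missing under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

if __name__ == "__main__":
    missing = check_coco_layout("/data/coco_data")
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("dataset layout looks complete")
```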
(1) Copy coco/coco2yolo.py and coco/coco_class.txt from the code repository into the coco_data root directory.
(2) Run coco2yolo.py:
```
python3 coco2yolo.py
```
(3) After the script finishes, train2017.txt and val2017.txt will be generated in the coco_data root directory.
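The core of this conversion is a coordinate transform: COCO annotations store boxes as absolute `[x_min, y_min, width, height]` in pixels, while YOLO labels use a class index plus `[x_center, y_center, width, height]` normalized to [0, 1]. A minimal sketch of that transform is shown below (illustrative only; the repository's coco2yolo.py also handles class-ID remapping via coco_class.txt and writes one label file per image):

```python
def coco_box_to_yolo(box, img_w, img_h):
    """Convert a COCO [x_min, y_min, w, h] box (pixels) to a
    YOLO [x_center, y_center, w, h] box normalized to [0, 1]."""
    x, y, w, h = box
    return [
        (x + w / 2) / img_w,  # normalized box-center x
        (y + h / 2) / img_h,  # normalized box-center y
        w / img_w,            # normalized width
        h / img_h,            # normalized height
    ]

# Example: a 100x50 box at (50, 100) in a 640x480 image.
print(coco_box_to_yolo([50, 100, 100, 50], 640, 480))
```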
Modify the train and val fields in the data/coco.yaml file to point to the train2017.txt and val2017.txt generated in the previous step, for example:
```
train: /data/coco_data/train2017.txt
val: /data/coco_data/val2017.txt
```
For GPU training, install the Python dependencies listed in requirements-GPU.txt.
For NPU training, install the Python dependencies listed in requirements.txt, and additionally install NPU-driver.run, NPU-firmware.run, NPU-toolkit.run, torch-ascend.whl, and apex.whl.
To get the best image-processing performance, compile and install opencv-python from source instead of installing it directly. The build steps are as follows:
```
export GIT_SSL_NO_VERIFY=true
git clone https://github.com/opencv/opencv.git
cd opencv
mkdir -p build
cd build
cmake -D BUILD_opencv_python3=yes \
      -D BUILD_opencv_python2=no \
      -D PYTHON3_EXECUTABLE=/usr/local/python3.7.5/bin/python3.7m \
      -D PYTHON3_INCLUDE_DIR=/usr/local/python3.7.5/include/python3.7m \
      -D PYTHON3_LIBRARY=/usr/local/python3.7.5/lib/libpython3.7m.so \
      -D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/python3.7.5/lib/python3.7/site-packages/numpy/core/include \
      -D PYTHON3_PACKAGES_PATH=/usr/local/python3.7.5/lib/python3.7/site-packages \
      -D PYTHON3_DEFAULT_EXECUTABLE=/usr/local/python3.7.5/bin/python3.7m ..
make -j$(nproc)
make install
```
Single-device (1P) training:
```
bash train_npu_1p.sh
```
8-device (8P) training:
```
bash train_npu_8p_mp.sh
```
(1) Modify the --coco_instance_path parameter in evaluation_npu_1p.sh to the actual annotation path in the dataset. For example, change the script to:
```
python3.7 test.py --data /data/coco.yaml --coco_instance_path /data/coco/annotations/instances_val2017.json --img-size 672 --weight 'yolov5_0.pt' --batch-size 32 --device npu --npu 0
```
(2) Start the evaluation:
```
bash evaluation_npu_1p.sh
```
Single-GPU training:
```
python train.py --data coco.yaml --cfg yolov5x.yaml --weights '' --batch-size 32 --device 0
```
Multi-GPU (8-card) training:
```
python -m torch.distributed.launch --nproc_per_node 8 train.py --data coco.yaml --cfg yolov5x.yaml --weights '' --batch-size 256
```
CPU training:
```
python train.py --data coco.yaml --cfg yolov5x.yaml --weights '' --batch-size 32 --device cpu
```
ONNX export:
```
python export_onnx.py --weights ./xxx.pt --img-size 640 --batch-size 1
```
| Model | Size (pixels) | NPU Nums | Dataset | Training Data Nums | Validation Data Nums | Batch Size | Epochs | FPS | Total Training Time (h) |
|---|---|---|---|---|---|---|---|---|---|
| yolo5x | 640 | 1 | COCO-2017 | 118287 | 5000 | 32 | 300 | 51.8 | 188.528 |
| yolo5x | 640 | 8 | COCO-2017 | 118287 | 5000 | 256 | 300 | 400.5 | 27.287 |
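The total-training-time column can be roughly cross-checked from the table's own numbers: wall-clock time ≈ (training images × epochs) / FPS. The small computation below (an approximation that ignores per-epoch validation and checkpointing overhead) gives estimates within a few percent of the reported totals:

```python
def training_hours(num_images, epochs, fps):
    """Estimated wall-clock hours: total images processed / throughput."""
    return num_images * epochs / fps / 3600

# Values taken from the table above.
print(round(training_hours(118287, 300, 51.8), 1))   # 1 NPU, FPS 51.8
print(round(training_hours(118287, 300, 400.5), 1))  # 8 NPUs, FPS 400.5
```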
| Model | Size (pixels) | mAPval 0.0:1.0 | Speed Atlas 300T (ms) |
|---|---|---|---|
| yolo5x-1p | 640 | 48.5 | - |
| yolo5x-8p | 640 | 50.3 | - |