- R11922029 吳泓毅
- R10922026 吳勝濬
- R10922102 林正偉
conda create -n yolov7 python=3.9
conda activate yolov7
pip install -r requirements.txt
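To confirm the environment is set up correctly (an optional check, assuming a CUDA-capable GPU is intended), print the installed PyTorch version and whether a GPU is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"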
You need to create the datasets folder yourself:
|----code
|----datasets----DIP (extract the dataset under the datasets folder)
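The training command below reads the dataset splits from DIP.yaml. A minimal sketch of that file, assuming the standard YOLOv7 data-config format, could look like the following (the paths, class count, and class names are placeholders that depend on the extracted DIP dataset):

train: ../datasets/DIP/train  # training images (placeholder path)
val: ../datasets/DIP/val  # validation images (placeholder path)
nc: 1  # number of classes (placeholder)
names: ['object']  # class names (placeholder)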
python train.py --data DIP.yaml --weights yolov7_training.pt --cfg ./cfg/training/<model>.yaml --img <img size> --epoch <epoch_num> --freeze <freeze layers> --batch-size <batch_size>
<model>.yaml: choose a model structure from the cfg/training folder
<img size>: training and validation image size
<epoch_num>: number of training epochs; 100 for image size 640, 50 for image size 1280
<freeze layers>: freeze the weights of the first given number of layers; use 50 to freeze the backbone, 52 to freeze the backbone and neck. Omit this argument if you do not want to freeze any weights.
<batch_size>: choose the batch size depending on GPU memory
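For example, a possible invocation (the values here are illustrative, not necessarily the settings used for the reported results):

python train.py --data DIP.yaml --weights yolov7_training.pt --cfg ./cfg/training/yolov7.yaml --img 640 --epoch 100 --freeze 50 --batch-size 8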
python detect.py --source ../data/ --weights <weights path> --img <img size>
<weights path>: path to the trained weights
<img size>: inference size, 640 or 1280, depending on the training size
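For example (the weights path is illustrative; YOLOv7 typically saves the best checkpoint under runs/train/<run_name>/weights/best.pt):

python detect.py --source ../data/ --weights ./runs/train/exp/weights/best.pt --img 640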
The detection results on the test images are shown below.
- https://github.com/WongKinYiu/yolov7
- https://github.com/AlexeyAB/darknet
- https://github.com/WongKinYiu/yolor
- https://github.com/WongKinYiu/PyTorch_YOLOv4
- https://github.com/WongKinYiu/ScaledYOLOv4
- https://github.com/Megvii-BaseDetection/YOLOX
- https://github.com/ultralytics/yolov3
- https://github.com/ultralytics/yolov5
- https://github.com/DingXiaoH/RepVGG
- https://github.com/JUGGHM/OREPA_CVPR2022
- https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose