Main repository of the project: https://github.com/bobarna/bme-image-processing
- Download some dataset of cars into the `number-plates-hun/images` folder.
- Download yolov7 weights trained for number plate recognition:
- Run inference with the trained yolov7 weights:
  ```
  python detect.py --weights yolov7-number-plates-trained.pt --img-size 448 --source number-plates-hun/images --name number-plates-recognition --save-txt --save-conf --nosave --project inference --exist-ok
  ```
- Move the detected `*.txt` labels into the `inference/labels` folder:
  ```
  mv number-plates-hun/number-plates/recognition/labels number-plates-hun/labels
  ```
- Cut out all detected objects (see the sketch after this list):
  ```
  python cutout.py number-plates-hun
  ```
- Results are in the `number-plates-hun/found-classes` folder.
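As a rough illustration of what the cut-out step does (this is a minimal sketch, not the repository's actual `cutout.py`), the snippet below assumes label files in the image-space `object_id x_min x_max y_min y_max confidence` format described further down, images stored as `.jpg` files in `<dir>/images`, labels in `<dir>/labels`, and crops written to `<dir>/found-classes`:

```python
# Hypothetical sketch of the cutout step -- not the repository's actual cutout.py
import sys
from pathlib import Path

from PIL import Image


def cut_out_detections(root_dir):
    root = Path(root_dir)
    out_dir = root / "found-classes"  # assumed output folder
    out_dir.mkdir(exist_ok=True)

    for label_file in sorted((root / "labels").glob("*.txt")):
        image_path = root / "images" / (label_file.stem + ".jpg")  # assumed extension
        if not image_path.exists():
            continue
        image = Image.open(image_path)

        # One detection per line: object_id x_min x_max y_min y_max confidence
        for i, line in enumerate(label_file.read_text().splitlines()):
            object_id, x_min, x_max, y_min, y_max, confidence = line.split()
            box = (int(float(x_min)), int(float(y_min)),
                   int(float(x_max)), int(float(y_max)))  # (left, upper, right, lower)
            image.crop(box).save(out_dir / f"{label_file.stem}_{object_id}_{i}.png")


if __name__ == "__main__":
    cut_out_detections(sys.argv[1])
```

The sketch uses Pillow to keep the dependencies small; the real script may organize its output differently.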
```
python detect.py --weights weights-number-plates.pt --img-size 448 --source number-plates-hun/ --name test-number-plates --save-txt --save-conf
```

- `--weights`: pretrained weights (result of the transfer learning)
- `--img-size`: size used for the inference
- `--source`: folder containing the images
- `--name`: name for this inference
- `--save-txt`: also saves the labels as `*.txt` files
- `--save-conf`: also saves the confidence in the `*.txt` files
(`detect.py` could also take single images instead of a whole directory.)
Each line of a detection (`image_name.txt`) takes the following form:

```
object_id x_min x_max y_min y_max confidence
```
- `object_id`: describes which object is detected (in our case, this is always 0 for the number plate class)
- `x_min`, `x_max`, `y_min`, `y_max`: describe the dimensions of the bounding box
- `confidence`: 0..1 value for the confidence of the given detection.
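For example, such a file can be parsed with a few lines of Python; the file name and confidence threshold below are only illustrative:

```python
# Illustrative parsing of one detection file in the format above
from pathlib import Path

CONF_THRESHOLD = 0.5  # example value, not from the repository

detections = []
for line in Path("image_name.txt").read_text().splitlines():
    object_id, x_min, x_max, y_min, y_max, confidence = line.split()
    if float(confidence) >= CONF_THRESHOLD:
        detections.append({
            "object_id": int(object_id),
            "box": tuple(float(v) for v in (x_min, x_max, y_min, y_max)),
            "confidence": float(confidence),
        })

print(f"kept {len(detections)} detections above {CONF_THRESHOLD}")
```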
(We modified `detect.py` to output detections in image-space, instead of the original relative dimensions.)
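The stock YOLOv7 `detect.py` saves labels as normalized `x_center y_center width height` values; converting those to the image-space `x_min x_max y_min y_max` form used here boils down to the following (a sketch of the idea, not the actual patch):

```python
# Sketch of the relative -> image-space conversion (not the actual patch to detect.py)
def to_image_space(x_center, y_center, width, height, img_w, img_h):
    """Convert normalized YOLO xywh (0..1) to pixel x_min, x_max, y_min, y_max."""
    x_min = (x_center - width / 2.0) * img_w
    x_max = (x_center + width / 2.0) * img_w
    y_min = (y_center - height / 2.0) * img_h
    y_max = (y_center + height / 2.0) * img_h
    return x_min, x_max, y_min, y_max


# Example: a centered box covering half the width and height of a 448x448 image
print(to_image_space(0.5, 0.5, 0.5, 0.5, 448, 448))  # (112.0, 336.0, 112.0, 336.0)
```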
```
python3 train.py --workers 8 --device 0 --batch-size 8 --data data/number-plates.yaml --img 420 --cfg cfg/training/yolov7-number-plates.yaml --weights yolov7_training.pt --name yolov7-custom --hyp data/hyp.scratch.custom.yaml
```
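The command references `data/number-plates.yaml`; for a single number-plate class, a YOLOv7 dataset file of this kind typically looks like the sketch below (the train/val paths and class name are placeholders, not necessarily the repository's actual values):

```yaml
# Hypothetical data/number-plates.yaml -- paths and names are placeholders
train: ./number-plates-hun/train/images
val: ./number-plates-hun/val/images

nc: 1                    # number of classes
names: ['number-plate']  # class 0, matching object_id above
```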
For more details, see: