Calculated anchors work worse on custom small-object dataset #2960
Comments
@lizyn Hi, the reason is that each [yolo] layer should only be assigned anchors of a suitable size (roughly: 1st yolo layer > 60x60, 2nd yolo layer 30x30 - 60x60, 3rd yolo layer < 30x30), and the calculated anchors don't follow this.
So in your case it is better to use the default anchors, or these anchors, masks and filters:
https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
I think your objects are small enough, but the closer the mAP is to 99%, the more difficult it is to improve further. Also you can try to use
Sorry, I didn't read the README carefully enough! I'll try your suggested anchors. But may I ask why the rule is like this (>60, 30-60, ...)? I mean, if I want to explain this rule in a report, how do I give the reasons? A brief explanation would be really helpful, thanks!
Just tested experimentally. It is related to the following calculation: count the subsampling layers (maxpool or convolutional layers with stride=2) in front of each [yolo] layer. With these changes the 3rd yolo layer has 2 subsampling layers, 2^2 = 4, so the size of objects should be > 8x8.
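To make the counting rule above concrete, here is a minimal Python sketch (the helper names are mine, not from darknet; the "objects should be at least twice the downsampling factor" multiplier is inferred from the 2^2 = 4 -> >8x8 example in this thread, not an official formula):

```python
# Hedged sketch: derive the effective downsampling in front of a yolo layer
# by counting subsampling layers, as described above. The x2 rule for the
# minimum object size is inferred from the "2^2 = 4 -> objects > 8x8" example.

def downsample_factor(num_stride2_layers: int) -> int:
    """Each maxpool or stride=2 convolution halves the feature-map resolution."""
    return 2 ** num_stride2_layers

def min_object_size(num_stride2_layers: int) -> int:
    """Rough minimum object size (pixels at network resolution) a yolo layer can handle."""
    return 2 * downsample_factor(num_stride2_layers)

# Example from the thread: the modified 3rd yolo layer sits behind 2 subsampling layers.
print(downsample_factor(2))  # 4
print(min_object_size(2))    # 8  -> objects should be > 8x8
```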
@lizyn After training with new anchors - show the Loss & mAP chart.
@AlexeyAB, I have posted the charts from training with the new anchors.
@lizyn
Yes, I think so. Thanks for all your help! May I mention you in the acknowledgements of my report? Well, I'll do it as long as it doesn't bother you.
@lizyn Yes, you can. In your own way, or something like this: #2782 (comment)
After I calculated anchors with -num_of_clusters 9 -width 1024 -height 1024, in yolov3-spp.cfg I modified the "stride stuff" as described in #how-to-improve-object-detection for small objects (stride=4, layers = -1, 11).
Yes.
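As a side note on the two calc_anchors runs in this thread (one at 416x416, one at 1024x1024): assuming calc_anchors reports anchor widths/heights in pixels of the -width/-height passed on the command line (an assumption I have not verified against this exact repo version), anchors computed for one network size scale linearly to another. A minimal sketch, with a hypothetical anchor list:

```python
# Hedged sketch: rescale anchors between network input sizes, ASSUMING
# calc_anchors outputs anchor sizes in pixels of the requested -width/-height.

def rescale_anchors(anchors, old_size, new_size):
    """anchors: flat list [w1, h1, w2, h2, ...]; sizes are square network inputs."""
    scale = new_size / old_size
    return [round(a * scale) for a in anchors]

# Hypothetical anchors computed for a 416x416 network (illustrative values only):
anchors_416 = [4, 12, 5, 23, 7, 33]
print(rescale_anchors(anchors_416, 416, 1024))
```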
Hi @AlexeyAB,
In the beginning I was just using recalculated anchors for yolov3 for small-object detection, following the rule:
"2nd yolo layer should be used for anchors (and objects) with 60x60 > sizes > 30x30; 3rd yolo layer should be used for anchors (and objects) with sizes < 30x30."
The following is the change: filters = 6 (I have one class)
[yolo]
filters = 6
[yolo]
filters = 48
[yolo]
However, after training I got the same or worse results than just using 4, 12, 5, 23, 7, 33, 11, 22, 10, 50, 19, 34, 14, 61, 26, 65, 50, 69. I was wondering whether I have done it right.
Secondly, if I want to use this guidance for tiny-yolo v3 with 6 anchors, does it still apply?
[yolo]
filters = 36
[yolo]
I would be really grateful if you could assist me with this. Many thanks in advance.
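For reference on the filters values quoted above: in YOLOv3-style cfgs the convolutional layer directly before each [yolo] layer needs filters = (classes + 5) * number of masks in that [yolo] layer. A minimal sketch (the helper name and the example mask counts are illustrative, not the exact cfg from this thread):

```python
# Hedged sketch: filters = (classes + 5) * num_masks for the convolutional
# layer directly before each [yolo] layer. Mask counts below are illustrative.

def yolo_conv_filters(num_classes: int, num_masks: int) -> int:
    """Each mask (anchor) predicts x, y, w, h, objectness and per-class scores."""
    return (num_classes + 5) * num_masks

classes = 1  # one object class, as in the comment above
print(yolo_conv_filters(classes, 1))  # 6  -> matches "filters = 6"
print(yolo_conv_filters(classes, 8))  # 48 -> matches "filters = 48"
print(yolo_conv_filters(classes, 6))  # 36 -> matches "filters = 36" (if all 6 tiny-yolo anchors sit in one [yolo] layer)
```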
Hi again @AlexeyAB, would mask = 0,1,2,3,5 be right for the 3rd yolo layer (anchors smaller than 30x30) if we have an anchor recalculation of 8, 27, 12, 52, 22, 43, 17, 79, 38, 66, 25,112, 36,129, 61,133, 109,130? I truly appreciate all your guidance.
There is no one correct answer; just try both cases and take the one with higher mAP.
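To illustrate why the assignment above is ambiguous, here is a minimal Python sketch (the bucketing and flagging logic are mine; only the thresholds and anchor values come from this thread) that groups the recalculated anchors by the >60x60 / 30x30-60x60 / <30x30 rule of thumb:

```python
# Hedged sketch: bucket the recalculated anchors from this thread by the size
# rule of thumb (1st yolo layer > 60x60, 2nd layer 30x30-60x60, 3rd layer < 30x30).
# Anchors whose width and height fall in different buckets are flagged as
# ambiguous -- they can reasonably go to either layer, which is why trying both
# assignments and comparing mAP is suggested.

ANCHORS = [(8, 27), (12, 52), (22, 43), (17, 79), (38, 66),
           (25, 112), (36, 129), (61, 133), (109, 130)]

def bucket(side: int) -> str:
    if side < 30:
        return "3rd yolo layer (<30)"
    elif side <= 60:
        return "2nd yolo layer (30-60)"
    return "1st yolo layer (>60)"

for i, (w, h) in enumerate(ANCHORS):
    bw, bh = bucket(w), bucket(h)
    note = "" if bw == bh else "  <- ambiguous, try both"
    print(f"mask index {i}: {w}x{h} -> width: {bw}, height: {bh}{note}")
```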
@AlexeyAB "In the beginning ,I was just using recalculations of the anchors for yolov3 for small object detection: 2nd yolo layer should be used for anchors (and objects) with: 60x60 > sizes > 30x30 3rd yolo layer should be used for anchors (and objects) with sizes < 30x30" The following is the change : filters =6 (I have one class) [yolo] filters = 6 [yolo] filters = 48 [yolo] However, after training, I got same or worse results than just using 4, 12, 5, 23, 7, 33, 11, 22, 10, 50, 19, 34, 14, 61, 26, 65, 50, 69 I was wondering whether I have done it right. Secondly ,if I want to use the guidance for tiny-yolo v3 with 6 anchors .Does the guidance apply . [yolo] filters = 36 [yolo] |
Try to use [yolo] |
Thank you so much. I will try that.
@ggolkar Also try to train this yolov3-tiny-pan-3l cfg-file with default anchors: This cfg-file is much less demanding about the correct placement of the masks for anchors. And show the Loss & mAP chart. You should use exactly this repo https://github.com/AlexeyAB/darknet since there is fixed
Thanks a lot. I truly appreciate your help.
Hello, could you explain this in more detail?
@AlexeyAB This is in the condition with 3. But what about the condition with 4?
@nyj-ocean Yes, if you use a network width/height of ~416x416 - ~608x608 in the cfg.
Problem
My dataset consists of images with a consistent size of 1024x1024, and the objects are small (mainly between 5 and 30 pixels), so I cropped the original images into 416x416 tiles with overlaps. There is only one class of objects.
With default anchors, I achieve mAP@0.5 around 99% (well, because this is a relatively easy task). But with calculated anchors, I get mAP of only about 98%. What could be the reason?
There is an issue with a similar problem, #1200, but the reason given there is not really relevant to my situation.
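A minimal sketch of the overlapping-crop preprocessing described above (the tile size comes from the issue; the stride/overlap value, file paths, and helper name are illustrative assumptions, since the issue does not state how much overlap was used):

```python
# Hedged sketch: crop 1024x1024 images into overlapping 416x416 tiles, as the
# issue describes. Assumes the image is at least tile x tile in size; the
# stride (and thus the overlap of 416 - 304 = 112 px) is an assumption.
from PIL import Image

def crop_tiles(image_path, tile=416, stride=304):
    """Yield (left, top, tile_image) for overlapping square tiles covering the image."""
    img = Image.open(image_path)
    w, h = img.size
    xs = list(range(0, w - tile + 1, stride))
    ys = list(range(0, h - tile + 1, stride))
    # Make sure the right and bottom edges are covered.
    if xs[-1] != w - tile:
        xs.append(w - tile)
    if ys[-1] != h - tile:
        ys.append(h - tile)
    for top in ys:
        for left in xs:
            yield left, top, img.crop((left, top, left + tile, top + tile))

# Example (hypothetical path):
# for left, top, tile_img in crop_tiles("images/sample_1024.png"):
#     tile_img.save(f"crops/sample_{left}_{top}.png")
```

Note that the YOLO label files would also have to be regenerated per tile, clipping boxes to each crop; that step is omitted here.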
Steps to reproduce
I used the following command to calculate anchors:
./darknet detector calc_anchors data/obj_1/obj.data -num_of_clusters 9 -width 416 -height 416 -show
The output is:
The generated cloud.png:
And here is the config file. I copied it from yolov3.cfg and changed nearly nothing except the filters and anchors, as well as the "stride stuff" described in #how-to-improve-object-detection for small objects. Sorry to bother you, but I really need your help, @AlexeyAB.
You may wonder why I'm so obsessed with the 1% mAP difference. That's because I'm writing a project report and it's important that I have a better understanding of it.