Fix mAP calculations #7714
Conversation
This commit fixes mAP calculations. Originally, there were extrapolations in the recall, precision, and PR curve generation.
👋 Hello @comlhj1114, thank you for submitting a YOLOv5 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is up-to-date with upstream/master. If your PR is behind upstream/master an automatic GitHub Actions merge may be attempted by writing /rebase in a new comment, or by running the following code, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
# git checkout feature # <--- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f
- ✅ Verify all Continuous Integration (CI) checks are passing.
- ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee
@comlhj1114 I appreciate the effort here, but the existing implementation is in place because there is no such thing as a valid mAP calculation that does not extend all the way from near zero to 1.0 confidence. It's simply a user error to assume you can compute mAP metrics at anything other than near zero.
@glenn-jocher Thank you for your quick reply!
@comlhj1114 SAHI is a derivative work of YOLOv5, there's no reason for us to modify our behavior to theirs. If anything it should be the other way around.
@comlhj1114 also what's missing here is any before-and-after change results. This modification will clearly affect metrics, yet I see no testing to quantify the before and after of what you've proposed.
@comlhj1114 just to be perfectly clear, this thing you have written "mAP@0.50 (conf-thres=0.25) : 0.687" is not a valid metric. By definition it cannot exist. It's like me saying "the temperature on May negative-3 is 25C." It is impossible to compute mAP as anything other than the integral of the PR curve that extends from confidence=0 to confidence=1.
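The "integral of the PR curve" point can be illustrated with a minimal sketch (this is an illustrative toy, not the YOLOv5 implementation; `average_precision` is a hypothetical helper): AP is the area under the precision-recall curve built from all predictions sorted by confidence, so truncating at a high conf-thres discards the low-confidence tail of the curve and caps the reachable recall.

```python
import numpy as np

def average_precision(tp, conf, n_gt):
    """Toy AP: tp is 1/0 per prediction, conf is confidence per
    prediction, n_gt is the number of ground-truth objects."""
    order = np.argsort(-np.asarray(conf))      # sort by descending confidence
    tp = np.asarray(tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # trapezoidal area under the PR curve (cocoapi uses 101-point interp)
    return float(np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2.0))

tp = [1, 1, 0, 1, 0]
conf = [0.9, 0.8, 0.7, 0.3, 0.1]               # includes low-confidence detections
full_ap = average_precision(tp, conf, n_gt=4)
# keeping only predictions above conf=0.25 cuts off part of the curve
trunc_ap = average_precision(tp[:3], conf[:3], n_gt=4)
```

Here `full_ap` exceeds `trunc_ap` because the low-confidence true positive extends the recall axis before the integral is taken.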
@glenn-jocher My PR intends to follow cocoapi's mAP calculations.
@comlhj1114 pycocotools is already applied automatically to compute COCO mAP, and if you have a valid COCO-format JSON for a custom dataset then you can also supply it to val.py. That being said, pycocotools is about 100 times slower than the YOLOv5 mAP code, so if you like waiting around a hundred times longer than you need to then use pycocotools as much as you'd like.
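For reference, COCO validation is invoked roughly like this (a hedged usage sketch assuming a standard YOLOv5 checkout with the COCO dataset available; the pycocotools mAP is reported automatically when validating coco.yaml with pycocotools installed):

```shell
# run YOLOv5 validation on COCO; with coco.yaml and pycocotools
# installed, the official COCO mAP is also printed after the YOLO mAP
python val.py --data coco.yaml --weights yolov5s.pt --img 640
```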
@glenn-jocher Speed optimization is always right. |
After downloading the COCO dataset, I'll test and share the results.
@comlhj1114 ok! You can also validate COCO using these two cells in the Colab notebook (after running Setup cell): |
@glenn-jocher Thank you for your support! |
Hi @comlhj1114 and @glenn-jocher , If you want to easily use coco128 as a test, you can use this https://github.com/zhiqwang/yolov5-rt-stack/releases/download/v0.3.0/coco128.zip , here I converted the yolov5 coco128 txt format to coco json for testing on cocoapi. |
@zhiqwang Thanks for your support :) |
@glenn-jocher these are my results.
@glenn-jocher |
@glenn-jocher |
@comlhj1114 ok thanks for the results! It looks like under the default scenario the PR is reducing the YOLO mAP further away from pycocotools. We really need solutions that go in the opposite direction to close the gap. We even have an open competition on the topic here: #2258 If we are using pycocotools as the standard then ideally the YOLO mAP calculation should also be 0.506, yet for some reason it is coming in lower on COCO for us, i.e. 0.496 in master and 0.493 in this PR. |
#7732 is a much better solution.
@comlhj1114 got it! Thanks for your update. If you have any other questions or need further assistance, feel free to ask. Good luck with your future evaluations! |
Fix mAP calculations.
Originally, there were extrapolations used to generate the precision curve, recall curve, and PR curve.
These extrapolations produce large differences relative to cocoapi.
This commit removes the extrapolations to calculate mAP accurately, especially false-negative-related mAP errors when conf-thres is high.
related issue : #1466
Dataset: coco128
Model: yolov5s.pt
Original
mAP@0.50 (conf-thres=0.001) : 0.719
mAP@0.50 (conf-thres=0.25) : 0.687
Fixed
mAP@0.50 (conf-thres=0.001) : 0.712
mAP@0.50 (conf-thres=0.25) : 0.594
Original Curves (conf-thres=0.25)
Fixed Curves (conf-thres=0.25)
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Improved precision-recall curve interpolation and AP calculation for object detection metrics.
📊 Key Changes
- Set `right=0` for both recall and precision interpolation in the `ap_per_class` function.
- Adjusted the `compute_ap` function for precision-recall curves.
🎯 Purpose & Impact
- Using `right=0` ensures that extrapolation for values outside the data range is consistent, avoiding assumed default values.
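The effect of `right=0` can be seen directly in NumPy's `np.interp` (a small sketch with made-up values, not code from the PR): by default, queries beyond the last data point are clamped to the last known value, while `right=0` returns 0 instead, so metrics past the highest observed confidence are not assumed.

```python
import numpy as np

conf = np.array([0.1, 0.5, 0.8])       # x-coords (must be increasing)
recall = np.array([0.9, 0.6, 0.2])     # recall observed at each confidence
query = np.array([0.95])               # confidence beyond the data range

clamped = np.interp(query, conf, recall)          # default: extends last value (0.2)
zeroed = np.interp(query, conf, recall, right=0)  # with right=0: returns 0.0
```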