
4D Evaluation with "ignore" semantic predictions #17

Closed
YilmazKadir opened this issue Sep 11, 2024 · 1 comment

@YilmazKadir

Hi,
I know it has been quite some time since you published the code, but this is also a warning for anyone who wants to submit to the SemanticKITTI 4D Panoptic Segmentation benchmark. If even one point is predicted as "ignore", i.e. a semantic label of 0, the evaluation script divides the summed IoU by 20 instead of 19 when computing the mIoU. To get correct results on the submission benchmark, you should not predict any ignore labels. The culprit is this line in the evaluation script:

```python
num_present_classes = np.count_nonzero(union)
```
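To illustrate the effect, here is a minimal sketch of the denominator logic described above. The `miou` helper and the per-class `intersection`/`union` arrays are hypothetical stand-ins for the benchmark's internals; only the `np.count_nonzero(union)` line is taken from the actual script. The assumption is that class 0 is the ignore class and classes 1–19 are the evaluated ones:

```python
import numpy as np

NUM_CLASSES = 20  # class 0 = "ignore", classes 1..19 are evaluated

def miou(intersection, union):
    # Every class with a nonzero union counts as "present" --
    # including class 0 if even a single point predicted it.
    num_present_classes = np.count_nonzero(union)
    iou = np.zeros(NUM_CLASSES)
    mask = union > 0
    iou[mask] = intersection[mask] / union[mask]
    return iou.sum() / num_present_classes

# All 19 real classes predicted perfectly, no "ignore" predictions:
inter = np.array([0] + [100] * 19, dtype=float)
union = np.array([0] + [100] * 19, dtype=float)
print(miou(inter, union))  # 1.0 -- summed IoU divided by 19

# Identical predictions except one stray point labeled "ignore":
union_with_ignore = union.copy()
union_with_ignore[0] = 1  # class 0 now has a nonzero union
print(miou(inter, union_with_ignore))  # 0.95 -- divided by 20 instead
```

A single ignore prediction therefore silently costs you a factor of 19/20 on your reported mIoU, even if every real class is segmented perfectly.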

@MehmetAygun
Owner

MehmetAygun commented Sep 11, 2024

Thanks for reporting this, @YilmazKadir! I've added a note to the README to warn people.
