add metric mFscore #509
Conversation
Codecov Report
@@ Coverage Diff @@
## master #509 +/- ##
==========================================
+ Coverage 86.48% 86.69% +0.20%
==========================================
Files 97 99 +2
Lines 4974 5192 +218
Branches 807 838 +31
==========================================
+ Hits 4302 4501 +199
- Misses 519 533 +14
- Partials 153 158 +5
Nice PR! Thx!

Hi @sshuair
mmseg/core/evaluation/metrics.py
Outdated
@@ -146,7 +166,7 @@ def mean_iou(results,
        nan_to_num=nan_to_num,
        label_map=label_map,
        reduce_zero_label=reduce_zero_label)
-    return all_acc, acc, iou
+    return mIoU_result
Suggested change:
-    return mIoU_result
+    return iou_result
We may use snake case for all variables.
mmseg/core/evaluation/metrics.py
Outdated
@@ -185,7 +206,52 @@ def mean_dice(results,
        nan_to_num=nan_to_num,
        label_map=label_map,
        reduce_zero_label=reduce_zero_label)
-    return all_acc, acc, dice
+    return mDice_result
Suggested change:
-    return mDice_result
+    return dice_result
mmseg/datasets/custom.py
Outdated
summary_table_data = PrettyTable()
for key, val in ret_metrics_summary.items():
    summary_table_data.add_column(key, [val])
We may also use the term mIoU in the table.
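For context, here is a minimal sketch, under assumed dict contents and an assumed 'm'-prefix rule (not the PR's final code), of how the PrettyTable summary could use the mIoU-style names as column headers:

```python
from prettytable import PrettyTable

# Hypothetical summary values; in the PR they come from averaging the
# per-class metrics (names and numbers here are illustrative only).
ret_metrics_summary = {'aAcc': 32.62, 'IoU': 5.76, 'Acc': 16.05, 'Dice': 9.69}

summary_table_data = PrettyTable()
for key, val in ret_metrics_summary.items():
    # 'aAcc' is already an aggregate name; prefix the rest with 'm' (IoU -> mIoU).
    col_name = key if key == 'aAcc' else 'm' + key
    summary_table_data.add_column(col_name, [val])

print(summary_table_data)
```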
mmseg/core/evaluation/metrics.py
Outdated
<aAcc> float: Overall accuracy on all images.
<Acc> ndarray: Per category accuracy, shape (num_classes, ).
<Dice> ndarray: Per category dice, shape (num_classes, ).
We may indent here.
mmseg/core/evaluation/metrics.py
Outdated
<aAcc> float: Overall accuracy on all images.
<Fscore> ndarray: Per category recall, shape (num_classes, ).
<Precision> ndarray: Per category precision, shape (num_classes, ).
<Recall> ndarray: Per category f-score, shape (num_classes, ).
We may use a 4-space indent here.
@sshuair Very nice PR. Just a few comments.

@xvjiarui The comments have been fixed. Please check it out.
.....
| person | 24.65 | 63.46 | 15.3 | 14.06 | 15.3 | 24.65 |
| pottedplant | 0.23 | 30.68 | 0.12 | 0.12 | 0.12 | 0.23 |
| sheep | 0.81 | 17.06 | 0.41 | 0.4 | 0.41 | 0.81 |
| sofa | 15.42 | 12.16 | 21.04 | 8.35 | 21.04 | 15.42 |
| train | 9.86 | 5.48 | 48.95 | 5.19 | 48.95 | 9.86 |
| tvmonitor | 3.23 | 5.39 | 2.31 | 1.64 | 2.31 | 3.23 |
+-------------+--------+-----------+--------+-------+-------+-------+
2021-04-30 13:55:06,861 - mmseg - INFO - Summary:
2021-04-30 13:55:06,861 - mmseg - INFO -
+-------+---------+------------+---------+------+-------+-------+
| aAcc | mFscore | mPrecision | mRecall | mIoU | mAcc | mDice |
+-------+---------+------------+---------+------+-------+-------+
| 32.62 | 10.72 | 20.85 | 16.05 | 5.76 | 16.05 | 9.69 |
+-------+---------+------------+---------+------+-------+-------+
2021-04-30 13:55:06,864 - mmseg - INFO - Iter(val) [10] aAcc: 0.3262....
.....
* add mFscore and refactor the metrics return value
* fix linting
* some docstring and name fix
Hi! Isn't mFscore in the current implementation the same as the mDice score, since the default beta=1 is used? Am I mistaken?
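For reference, the general F-beta score is (1 + beta^2) * precision * recall / (beta^2 * precision + recall); with beta = 1 it reduces algebraically to 2TP / (2TP + FP + FN), which is the Dice coefficient, so per-class F1 and Dice are expected to coincide. A quick numeric check with hypothetical confusion counts (not taken from this PR):

```python
# Hypothetical per-class confusion counts, for illustration only.
tp, fp, fn = 30, 10, 20

precision = tp / (tp + fp)
recall = tp / (tp + fn)
beta = 1.0

# General F-beta score.
f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Dice coefficient on the same counts.
dice = 2 * tp / (2 * tp + fp + fn)

assert abs(f_beta - dice) < 1e-12  # with beta == 1 they coincide
print(f_beta, dice)
```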
* resolve comments
* update changelog
* add class_weight in loss arguments
* switch to mmcv 1.2.4
* use v1.1.1 as mmcv version lower bound
* reorganize code
* resolve comments
This PR contributes a new feature: support for f-score, recall and precision evaluation metrics. Issued by #420.

There are three main modifications:

1. Add the mFscore metric; it contains three sub-metrics: f-score, recall and precision.
2. Change the metrics.py return value from a tuple to a dict.
3. Change the custom.py evaluate method to log using the prettytable package instead of terminaltables, because terminaltables is archived and no longer maintained.

And the logs look like this (see the evaluation output shown above).
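As a rough illustration of the first two modifications, here is a minimal sketch, with assumed array names and simplified logic (not the PR's exact implementation), of computing per-class precision, recall and f-score from accumulated per-class areas and returning them as a dict rather than a tuple:

```python
import numpy as np


def fscore_metrics(total_area_intersect, total_area_pred_label,
                   total_area_label, beta=1):
    """Sketch: per-class precision/recall/f-score from accumulated areas.

    total_area_intersect: true-positive pixels per class, shape (num_classes,).
    total_area_pred_label: predicted pixels per class (TP + FP).
    total_area_label: ground-truth pixels per class (TP + FN).
    """
    precision = total_area_intersect / total_area_pred_label
    recall = total_area_intersect / total_area_label
    fscore = (1 + beta**2) * precision * recall / (
        beta**2 * precision + recall)
    # Return a dict keyed by metric name instead of a positional tuple,
    # so callers can look results up by name.
    return {'Fscore': fscore, 'Precision': precision, 'Recall': recall}


# Hypothetical accumulated areas for three classes (illustrative values only).
intersect = np.array([30., 50., 10.])
pred_label = np.array([40., 60., 25.])
label = np.array([50., 55., 20.])
print(fscore_metrics(intersect, pred_label, label))
```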