# Adding image tasks evaluation examples (#2566)

## Multiclass Classification

### List of supported keyword arguments:

| Keyword Argument | Description | Type | Sample |
|:------------------------:|:-------------------------------------------------------------------------------------|------------------|-----------------------------------------------------------------|
| metrics | List of metrics to compute; must be a subset of the supported metrics listed below | list<str> | ["accuracy", "f1_score_micro", "average_precision_score_macro"] |
| class_labels | Superset of all labels that can occur in the dataset | list, np.ndarray | [0, 1, 2, 3], ["cat", "dog", "panda"] |
| train_labels | Labels on which the model was trained | list, np.ndarray | [0, 1, 2, 3], ["cat", "dog", "panda"] |
| sample_weights | Weight associated with each data sample | list, np.ndarray | [1, 2, 3, 4, 5, 6] |
| y_transformer | Transformer object to be applied to y_pred | | |
| use_binary | Compute metrics only on the true class for binary classification | boolean | true, false |
| enable_metric_confidence | Compute confidence intervals for supported metrics | boolean | true, false |
| positive_label | Label to be treated as the positive label | int/str | 0, "cat" |
| confidence_metrics | List of metrics for which to compute confidence intervals | list<str> | ["accuracy", "f1_score_micro"] |
| custom_dimensions | Dictionary used to report telemetry data (can later be used to perform PII scrubbing) | dict | |
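
These keyword arguments are passed alongside the ground-truth and predicted labels when metrics are computed. Below is a minimal sketch of such a call, assuming the `compute_metrics` entry point and `constants.Tasks` enum of the `azureml-metrics` package; those names, and the exact task constant, are assumptions for illustration and are not confirmed by this change.

```python
# Minimal usage sketch -- the import path, entry point, and task constant
# are assumptions for illustration, not confirmed by this PR.
from azureml.metrics import compute_metrics, constants  # assumed API

# Toy ground-truth and predicted labels for a four-class problem.
y_test = ["can", "carton", "milk_bottle", "water_bottle", "can"]
y_pred = ["can", "carton", "milk_bottle", "carton", "can"]

result = compute_metrics(
    task_type=constants.Tasks.IMAGE_CLASSIFICATION,  # assumed task constant
    y_test=y_test,
    y_pred=y_pred,
    metrics=["accuracy", "f1_score_micro"],
    class_labels=["can", "carton", "milk_bottle", "water_bottle"],
    train_labels=["can", "carton", "milk_bottle", "water_bottle"],
    enable_metric_confidence=True,
    confidence_metrics=["accuracy", "f1_score_micro"],
    use_binary=False,
)
print(result)
```

Note that label-based metrics such as `accuracy` only need predicted labels, while probability-based metrics such as `log_loss` and the `AUC_*` family additionally require per-class prediction scores.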

### List of supported metrics:

* log_loss
* average_precision_score_binary
* weighted_accuracy
* AUC_weighted
* f1_score_micro
* f1_score_binary
* precision_score_micro
* precision_score_binary
* recall_score_weighted
* f1_score_weighted
* confusion_matrix
* average_precision_score_micro
* recall_score_binary
* recall_score_macro
* average_precision_score_weighted
* AUC_binary
* matthews_correlation
* precision_score_macro
* accuracy
* average_precision_score_macro
* AUC_macro
* recall_score_micro
* balanced_accuracy
* f1_score_macro
* precision_score_weighted
* accuracy_table
* AUC_micro
* norm_macro_recall
### Sample settings file

The change also adds a sample settings file for multiclass image classification:
```json
{
    "metrics": [
        "average_precision_score_macro", "AUC_macro", "recall_score_macro",
        "average_precision_score_binary", "average_precision_score_micro",
        "AUC_binary", "recall_score_micro", "AUC_micro", "norm_macro_recall",
        "average_precision_score_weighted", "weighted_accuracy",
        "precision_score_micro", "f1_score_binary", "accuracy_table",
        "precision_score_macro", "f1_score_micro", "precision_score_weighted",
        "f1_score_weighted", "confusion_matrix", "recall_score_binary",
        "matthews_correlation", "log_loss", "accuracy",
        "precision_score_binary", "balanced_accuracy", "AUC_weighted",
        "f1_score_macro", "recall_score_weighted"
    ],
    "class_labels": ["can", "carton", "milk_bottle", "water_bottle"],
    "train_labels": ["can", "carton", "milk_bottle", "water_bottle"],
    "enable_metric_confidence": true,
    "confidence_metrics": ["accuracy", "f1_score_micro"],
    "use_binary": false
}
```
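
A settings file like this can be loaded and forwarded to the metrics call as keyword arguments. A short sketch, where the file name is hypothetical:

```python
import json

# Load the evaluation settings shown above; the file name is hypothetical.
with open("multiclass_classification_settings.json") as f:
    settings = json.load(f)

# Forward the settings as keyword arguments, e.g.
#   compute_metrics(task_type=..., y_test=y_test, y_pred=y_pred, **settings)
print(sorted(settings))
```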