create model_selection_score cmd line parameter with a few common available options #96

Open
tmills opened this issue Sep 26, 2022 · 2 comments

tmills commented Sep 26, 2022

Potential enhancements:

  • additional parameter to subset labels to average over (e.g., ignore majority label with very high score)
  • allow metric-name strings that can be passed through to HF evaluate functions (see the sketch below)
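
A minimal sketch of what passing a metric-name string through to HF evaluate could look like. The `build_selection_metric` helper and its wiring into the trainer are assumptions for illustration, not the project's current API; only `evaluate.load` and `compute` are real library calls.

```python
import evaluate
import numpy as np

def build_selection_metric(metric_name: str):
    """Load a Hugging Face `evaluate` metric from a plain string,
    e.g. the value of a hypothetical --model_selection_score flag."""
    metric = evaluate.load(metric_name)  # e.g. "accuracy" or "f1"

    def score(predictions, references):
        kwargs = {}
        if metric_name == "f1":
            kwargs["average"] = "macro"  # match the current macro-F1 default
        return metric.compute(predictions=predictions,
                              references=references, **kwargs)

    return score

# Example: select checkpoints on accuracy instead of macro F1
selector = build_selection_metric("accuracy")
print(selector(predictions=np.array([0, 1, 1]), references=np.array([0, 1, 0])))
```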

tmills commented Aug 10, 2023

Proposed solution: two new training arguments, --model_selection_score and --model_selection_label (e.g. 'acc' or 'f1' as the model selection score). In compute_metrics_fn, check these arguments and set one_score based on their values:

  • no arguments: macro F1 (current default)
  • 'f1' with no label provided: macro F1
  • 'acc' with no label provided: accuracy
  • 'acc' with a label provided: error?
  • 'f1' with a label provided: look up the label's index in the dataset and use that label's F1
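
A sketch of how that dispatch could look inside compute_metrics_fn. The argument names, one_score semantics, and the label lookup follow the list above; the function name `select_one_score`, the `label_list` parameter, and the sklearn calls are assumptions, not existing code.

```python
from sklearn.metrics import accuracy_score, f1_score

def select_one_score(preds, labels, label_list,
                     model_selection_score=None, model_selection_label=None):
    """Pick the single value used for checkpoint selection (the proposal's
    one_score). preds and labels are class ids; label_list is the dataset's
    label vocabulary. Names here are illustrative."""
    if model_selection_score is None:
        # No argument: keep the current default, macro F1
        return f1_score(labels, preds, average="macro")
    if model_selection_score == "acc":
        if model_selection_label is not None:
            # 'acc' with a label provided: error, per the proposal
            raise ValueError("--model_selection_label cannot be combined with 'acc'")
        return accuracy_score(labels, preds)
    if model_selection_score == "f1":
        if model_selection_label is None:
            return f1_score(labels, preds, average="macro")
        # 'f1' with a label: look up the label's index and use that label's F1
        label_index = label_list.index(model_selection_label)
        return f1_score(labels, preds, labels=[label_index], average=None)[0]
    raise ValueError(f"Unsupported model selection score: {model_selection_score}")
```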


tmills commented Aug 10, 2023

For cases where you may want the average F1 across multiple labels, --model_selection_label could take a list.
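
Under the same illustrative assumptions as above, averaging over a named subset of labels could lean on sklearn's labels= argument; `subset_macro_f1` and `label_list` are hypothetical names.

```python
from sklearn.metrics import f1_score

def subset_macro_f1(preds, labels, label_list, selection_labels):
    """Macro F1 restricted to the labels named in a hypothetical
    --model_selection_label list (e.g. ignoring a high-scoring majority label)."""
    label_indices = [label_list.index(name) for name in selection_labels]
    return f1_score(labels, preds, labels=label_indices, average="macro")
```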
