benchmarks.yml: Run on addition of label instead of comment #2002
Conversation
Instead of triggering on a specific comment, which caused issues with running in a different scope (issue_comment instead of pull_request_target), the workflow can now be triggered by adding the label "trigger-benchmarks" to the PR. The label can be repeatedly added and removed to rerun the benchmarks.
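A minimal sketch of what such a label-based trigger could look like in benchmarks.yml; the actual workflow file is not shown in this thread, so the job name, checkout step, and benchmark steps below are placeholders:

```yaml
# Hedged sketch only; the real benchmarks.yml in this PR is not reproduced here.
on:
  pull_request_target:
    types: [labeled]          # fire when a label is added to the PR

jobs:
  benchmarks:
    # Only run when the specific "trigger-benchmarks" label was added
    if: github.event.label.name == 'trigger-benchmarks'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # pull_request_target runs in the base-repo context, so check out
          # the PR head explicitly to benchmark the proposed changes
          ref: ${{ github.event.pull_request.head.sha }}
      # ... install dependencies and run the benchmark script here ...
```

The key point is that the labeled event fires in the pull request scope, so the workflow has the PR context available, which the issue_comment event does not provide.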
Performance benchmarks:
If this works, this is probably a good solution. While you're changing the trigger, maybe also deactivate the trigger on opened pull requests? I think a lot of PRs are not related to performance, and the performance benchmark is a bit distracting there. Or maybe run on every PR, but don't comment without the label?
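A hedged sketch of that second option (run on every PR, but only post the comment when the label is present); the step names, the github-script comment body, and the benchmark steps are placeholders rather than the repository's actual workflow:

```yaml
# Hedged sketch: always run the benchmarks, only comment when labeled.
on:
  pull_request_target:
    types: [opened, synchronize, labeled]

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # ... run the benchmarks and collect results here ...
      - name: Comment results on the PR
        # Post the comment only when the PR carries the trigger label
        if: contains(github.event.pull_request.labels.*.name, 'trigger-benchmarks')
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: 'Performance benchmarks: ...',  // placeholder body
            });
```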
What about reacting to the existing benchmark comment with a rocket emoji to retrigger the benchmark? That would be a more convenient way to retrigger than removing and re-adding the label.
@rht Good idea, but unfortunately this only works for events that are specifically in the PR scope, and everything related to comments isn't. That's just a limitation of events in GitHub Actions. A label is tied to the PR scope, so it works. @Corvince I like the explicit confirmation that a PR doesn't change performance. But I also recognize it can add noise. Let me think about it a bit.
I also like the explicit performance check. Another hidden benefit is that it is a form of integration testing, because we run three full models through the benchmark. Some of the weird backward compatibility issues we have seen would have been caught if this machinery had been in place.
I can't think of a more elegant solution than using the label, considering the restrictions of GitHub Actions. In #1999 I ensured the benchmarks don't change if there weren't any Python code changes. That should help a little against the noise. I would be in favor of merging this.
Looks good to me.