Support to configure executor for benchmark #634
Conversation
Seems good! Only tests are missing. I think the next step after this PR would be to use the executor to parallelize the execution of the experiments, instead of parallelizing single experiments sequentially. (All experiments at the same time vs one experiment with parallel workers)
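The two strategies contrasted above can be sketched with plain `concurrent.futures`; `run_trial` and the experiment names are hypothetical stand-ins, not Orion's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for running one trial of one experiment.
def run_trial(experiment, trial):
    return f"{experiment}-trial{trial}"

# Strategy 1 (current): one experiment at a time, its trials in parallel.
def run_experiments_sequentially(experiments, n_trials=3, max_workers=2):
    results = []
    for exp in experiments:
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            results.extend(
                executor.map(run_trial, [exp] * n_trials, range(n_trials))
            )
    return results

# Strategy 2 (proposed): all experiments submitted to one shared executor.
def run_experiments_in_parallel(experiments, n_trials=3, max_workers=2):
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [
            executor.submit(run_trial, exp, trial)
            for exp in experiments
            for trial in range(n_trials)
        ]
        return [f.result() for f in futures]
```

Both produce the same set of results; the second keeps all workers busy across experiment boundaries instead of draining the pool between experiments.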
```python
config["executor"] = executor
bm2 = get_or_create_benchmark(**config)

assert bm2.configuration == benchmark_config
```
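The assertion above checks that the executor does not leak into the persisted configuration. A minimal sketch of that behaviour, with hypothetical names rather than Orion's actual implementation:

```python
# Hypothetical sketch (not Orion's actual implementation) of a benchmark
# whose `configuration` property excludes the runtime executor.
class Benchmark:
    def __init__(self, name, algorithms, executor=None):
        self.name = name
        self.algorithms = algorithms
        self.executor = executor  # runtime-only, not persisted

    @property
    def configuration(self):
        # Only persistable fields; `executor` is deliberately left out.
        return {"name": self.name, "algorithms": self.algorithms}


bm1 = Benchmark("bench", ["random"])
bm2 = Benchmark("bench", ["random"], executor=object())
assert bm1.configuration == bm2.configuration
```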
Do you think executor should be part of the config? If this affects parallelism and results it may be better to include it.
I thought about this a little. I took it out for two reasons:
- This configuration is mainly useful when a user runs one benchmark on multiple workers (manually creating the benchmark separately and launching it). It should be fine for different workers to use different executor configurations.
- This configuration would be persisted, but we now pass in an executor object directly instead of the executor configuration. To persist it, we would need to change that, or ask the executor to return a configuration for persistence.
Do you think we would need to persist the configuration in the case of the parallelization assessment? It seems to me it would be better to. Maybe that can be another PR, since it would require supporting configuration of the executor through dict arguments instead of simply an instantiated object.
For the parallelization assessment, yes, I think we need to pass in the executor name and configuration instead of an instance, for the sake of persistence. This is because it is part of the benchmark and should not change when we launch the same benchmark on different nodes.
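Passing a name plus configuration instead of a live instance could look like the following sketch; `ExecutorSpec` and its fields are hypothetical, not part of Orion:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: store the executor's name and arguments rather than
# a live executor object, so the spec can be persisted with the benchmark
# and rebuilt identically on every node.
@dataclass
class ExecutorSpec:
    name: str                       # e.g. "joblib" or "dask" backend name
    config: dict = field(default_factory=dict)

    def to_dict(self):
        # Plain dict, safe to write to the database.
        return {"name": self.name, "config": dict(self.config)}

    @classmethod
    def from_dict(cls, data):
        return cls(name=data["name"], config=data.get("config", {}))


spec = ExecutorSpec("joblib", {"n_workers": 4})
restored = ExecutorSpec.from_dict(spec.to_dict())
assert restored == spec
```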
LGTM! :) Just a few minor comments to address/discuss before ready for merge.
Description
Instead of a global executor configuration, support setting the executor at the benchmark level.

Fixes issue #624
Changes
Adding a parameter `executor` of type `orion.executor.base.Executor` to run the benchmark experiments.

Checklist
Tests
- `$ tox -e py38` (replace `38` by your Python version if necessary)

Documentation

Quality
- `$ tox -e lint`