
Support to configure executor for benchmark #634

Merged 4 commits on Sep 1, 2021

Conversation

donglinjy (Collaborator)

Description

Instead of relying on the global executor configuration, support setting the executor at the benchmark level.
Fixes issue #624

Changes

Add a parameter executor of type orion.executor.base.Executor that is used to run the benchmark experiments.

def get_or_create_benchmark(
    name, algorithms=None, targets=None, storage=None, executor=None, debug=False
):
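
For illustration, a minimal usage sketch of the new parameter. The "joblib" backend name, the keyword arguments, and the benchmark arguments below are assumptions rather than part of this PR; check your Orion version for the constructor that Executor actually exposes.

from orion.benchmark.benchmark_client import get_or_create_benchmark
from orion.executor.base import Executor

# Hypothetical executor construction: backend name and n_workers are assumed.
executor = Executor("joblib", n_workers=4)

# Pass the executor at the benchmark level instead of relying on the
# global executor configuration.
benchmark = get_or_create_benchmark(
    name="my-benchmark",    # hypothetical benchmark name
    algorithms=["random"],  # hypothetical algorithm list
    executor=executor,
)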

Checklist

Tests

  • I added corresponding tests for bug fixes and new features. If possible, the tests fail without the changes
  • All new and existing tests are passing ($ tox -e py38; replace 38 with your Python version if necessary)

Documentation

  • I have updated the relevant documentation related to my changes

Quality

  • I have read the CONTRIBUTING doc
  • My commit messages follow this format
  • My code follows the style guidelines ($ tox -e lint)

@bouthilx (Member)

Seems good! Only tests are missing. I think the next step after this PR would be to use the executor to parallelize the execution of the experiments, instead of parallelizing single experiments sequentially (all experiments at the same time vs. one experiment with parallel workers).

donglinjy marked this pull request as ready for review on August 22, 2021 at 14:40
config["executor"] = executor
bm2 = get_or_create_benchmark(**config)

assert bm2.configuration == benchmark_config
@bouthilx (Member)

Do you think executor should be part of the config? If this affects parallelism and results it may be better to include it.

@donglinjy (Collaborator, Author)

I thought about this a little. I left it out for two reasons:

  1. This configuration is primarily useful when a user runs one benchmark on multiple workers (manually creating the benchmark separately on each node and launching it). It should be fine for different workers to use different executor configurations.
  2. The benchmark configuration is persisted, and we now pass in an executor object directly instead of an executor configuration. To persist it, we would need to change this, or ask the executor to return a configuration suitable for persistence.

@bouthilx (Member)

Do you think we would need to persist the configuration in the case of the parallelization assessment? It seems to me it would be better to. Maybe that can be another PR, since it would require supporting configuration of the executor through dict arguments instead of simply an instantiated object.

@donglinjy (Collaborator, Author)

For the parallelization assessment, yes, I think we need to pass in the executor name and configuration instead of an instance, for the sake of persistence: the executor is part of the benchmark and should not change when we launch the same benchmark on different nodes. A rough sketch of what that could look like follows.
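
For illustration, a hedged sketch of the dict-based executor configuration discussed above; it is not implemented in this PR, and the config shape and names are assumptions.

# Hypothetical: describe the executor by backend name plus keyword arguments
# so it can be persisted with the benchmark and recreated identically on
# every node, instead of passing a live (non-serializable) Executor instance.
benchmark_config = {
    "name": "parallel-assessment",             # hypothetical benchmark name
    "executor": {"joblib": {"n_workers": 4}},  # serializable executor config
}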

@bouthilx (Member) left a comment

LGTM! :) Just a few minor comments to address/discuss before this is ready for merge.

bouthilx added the enhancement label (Improves a feature or non-functional aspects, e.g., optimization, prettify, technical debt) on Aug 23, 2021
bouthilx added this to the v0.1.17 milestone on Aug 23, 2021