Support for Common Hyper-parameter Tuning Libs #60
Nevergrad: https://github.com/facebookresearch/nevergrad
Support for SM?: https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html
@dorukkilitcioglu would love to hear any feedback/ideas you have on this. Hoping to tackle in the next few weeks.
In terms of black-box optimization, I've been looking into Nevergrad and Ax; both seem pretty well equipped, though I prefer Nevergrad at this point. In general, are you looking to map a spock config to the parameter classes that these tools use? So you only write your spock config, ask spock to tune it using Optuna (for example), give it a budget, and when you come back you have 100 different runs with their associated spock configs and some objective value. You could also start small and use random search as the initial POC, to figure out how you want to handle the input/output of this process.
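The random-search POC mentioned above can be sketched in a few lines of plain Python (the search space, budget, and toy objective here are all illustrative, not anything from spock):

```python
import random

def sample_config(space):
    """Randomly sample one config from a dict of (low, high) ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in space.items()}

def random_search(space, objective, budget):
    """Return the best (config, score) found within the budget (lower is better)."""
    best_cfg, best_score = None, float("inf")
    for _ in range(budget):
        cfg = sample_config(space)
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy search space and objective: prefer lr near 0.1 and dropout near 0.5.
space = {"lr": (1e-4, 1.0), "dropout": (0.0, 0.9)}
best_cfg, best_score = random_search(
    space, lambda c: abs(c["lr"] - 0.1) + abs(c["dropout"] - 0.5), budget=200
)
```

Each "run" here is just a sampled config plus its score, which is exactly the input/output shape the real tuner integration would need to save.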
My current thought is to be tool run agnostic and just provide an adapter to each supported backend that returns whatever structure is needed for the parameter ranges/types/scales, etc. Basically, allow you to specify all the ranges etc. with spock, regardless of backend, using something like a new decorator (e.g. The only thing this kinda breaks with spock is the ability to save the state of each hyperparameter 'run' config, as the backend library will be handling the evolution of the parameter set. But that might be overkill, as you might only want to save the range configs and then the final config...
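A rough sketch of what such a backend adapter might look like for Optuna (the `RANGES` spec, class name, and the midpoint stand-in trial are all hypothetical, not spock's actual API; `trial.suggest_float`/`trial.suggest_int` are real Optuna trial methods):

```python
# Backend-agnostic range spec, i.e. what a hypothetical spock decorator
# might capture about the tuneable parameters.
RANGES = {
    "lr": {"type": "float", "low": 1e-4, "high": 1.0, "log": True},
    "n_layers": {"type": "int", "low": 1, "high": 8},
}

class OptunaAdapter:
    """Hypothetical adapter: maps the generic spec onto Optuna suggest_* calls."""

    def sample(self, trial, ranges):
        params = {}
        for name, spec in ranges.items():
            if spec["type"] == "float":
                params[name] = trial.suggest_float(
                    name, spec["low"], spec["high"], log=spec.get("log", False)
                )
            elif spec["type"] == "int":
                params[name] = trial.suggest_int(name, spec["low"], spec["high"])
        return params

class _MidpointTrial:
    """Tiny stand-in for an Optuna trial so this sketch runs standalone:
    always returns the midpoint of the range instead of sampling."""

    def suggest_float(self, name, low, high, log=False):
        return (low + high) / 2

    def suggest_int(self, name, low, high):
        return (low + high) // 2

params = OptunaAdapter().sample(_MidpointTrial(), RANGES)
```

An equivalent adapter for Nevergrad (or any other backend) would translate the same `RANGES` spec into that library's native parameter objects, which is what keeps the user-facing spock side backend-agnostic.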
Or, as you alluded to, we could just fully wrap one or two backends to be set-and-forget... Not sure which is the better option.
@dorukkilitcioglu can you peek at #62 and see what you think? There is a simple example here that shows the basic syntax. Lmk
Looking at the simple example, there's a lot going on. I like it overall, and I feel like it looks more dense than it actually is because Logistic Regression training is super easy and it's as if half the code is tuning :D My understanding is that there are multiple steps you have to take:
Also, does the I think there's a slight room for simplification (why can't |
Yeah, LR might be a bit under-kill as an example, but I was just mimicking some Optuna docs for simplicity...
There is a difference between
All good here..
Yeah, the two options to get a spockspace back are
Logically yes, however since
It's kinda tuner agnostic in its current state. The return object in the second position from the
Not sure... this comes from the fact that, in order to sample with the define-and-run style interface in Optuna, you need the study object, but when calling tell you also need the currently generated trial along with the study. Seemed easiest to package that up into a dict...
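That dict-packaging idea can be sketched generically (the class and method names are hypothetical, not spock's actual API; a stub study stands in for a real Optuna study so the snippet runs standalone — with the real library, `study.ask()`/`study.tell(trial, value)` have the same shape):

```python
class AskTellTuner:
    """Hypothetical wrapper: bundles backend state so the caller never
    has to juggle the study and trial objects separately."""

    def __init__(self, study):
        self._study = study

    def sample(self):
        trial = self._study.ask()
        # Package everything tell() will need into one opaque dict.
        return {"study": self._study, "trial": trial}

    def tell(self, tuner_status, objective):
        tuner_status["study"].tell(tuner_status["trial"], objective)

class _StubStudy:
    """Minimal stand-in for an Optuna study: ask() hands out trial ids,
    tell() records (trial, value) pairs."""

    def __init__(self):
        self.results = []
        self._n = 0

    def ask(self):
        self._n += 1
        return f"trial-{self._n}"

    def tell(self, trial, value):
        self.results.append((trial, value))

study = _StubStudy()
tuner = AskTellTuner(study)
status = tuner.sample()
tuner.tell(status, 0.42)
```

The opaque dict means the user's training loop only ever passes `status` back in, regardless of which backend produced it.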
Good point. Haven't dealt with the results, etc yet and how that's handled. Only the fact that the
Forgot to address this one. Actually makes sense. I think I can just add it as an |
* Common hyperparameter tuning interface #60
* Added Optuna support
* Refactored backend to support split of fixed and tuneable parameters
* Added black/isort
* Handles the usage pattern of drop-in argparser replacement where no configs (from the cmd line or as input into ConfigArgBuilder) are passed, thus falling back on all defaults or definitions from the command line. Fix-up of the all-cmdline usage pattern; there were certain edge cases that were not getting caught correctly if it wasn't overriding an existing payload from a yaml file. #61
* Unit tests

Signed-off-by: Nicholas Cilfone <nicholas.cilfone@fmr.com>
Some sort of Adapter pattern to interface with some of the more common hyper-parameter tuning libraries.
NNI: https://github.com/microsoft/nni
Optuna: https://github.com/optuna/optuna
Talos: https://github.com/autonomio/talos
HyperOpt: https://github.com/hyperopt/hyperopt
Katib (supports k8s Job and MPIJob CRD): https://github.com/kubeflow/katib
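One way to sketch the proposed adapter pattern (all names here are hypothetical, for illustration only) is an abstract base class that each backend adapter — Optuna, HyperOpt, Nevergrad, etc. — would implement:

```python
from abc import ABC, abstractmethod

class BaseTunerAdapter(ABC):
    """Hypothetical common interface for hyper-parameter tuning backends."""

    @abstractmethod
    def construct(self, ranges):
        """Map the generic range/type/scale spec to backend-native parameters."""

    @abstractmethod
    def sample(self):
        """Ask the backend for the next parameter set (plus any tuner state)."""

    @abstractmethod
    def tell(self, tuner_status, objective):
        """Report an objective value for a sampled parameter set."""
```

User code would then target this interface only, with the concrete backend selected by config, which is what makes swapping tuning libraries a one-line change.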