Is there a built-in way to checkpoint the Bayesian optimization state when using the GP surrogate, and later recover it if, say, the application unexpectedly terminates?
One possible approach would be to checkpoint the inputs/outputs, feed all of this data back into the strategy, and retrain the model when the application restarts, but this incurs the cost of retraining and would require statically seeding the RNG. Are there any other drawbacks to this?
Alternatively, what else would need to be checkpointed? The BoTorch model?
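The data-replay approach described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the checkpoint path, function names, and JSON layout are all assumptions, not part of any library API): persist the observed inputs/outputs and the RNG seed after each iteration, and restore them on restart before refitting the model.

```python
import json
import random

CKPT = "bo_checkpoint.json"  # hypothetical checkpoint path

def save_checkpoint(X, y, seed, path=CKPT):
    """Persist the observed inputs/outputs and the RNG seed to disk."""
    with open(path, "w") as f:
        json.dump({"X": X, "y": y, "seed": seed}, f)

def load_checkpoint(path=CKPT):
    """Restore observations and re-seed the RNG; the caller then
    feeds X, y back into the strategy and retrains the surrogate."""
    with open(path) as f:
        state = json.load(f)
    random.seed(state["seed"])  # static seeding so proposals are reproducible
    return state["X"], state["y"]

# usage: call save_checkpoint after each BO iteration,
# and load_checkpoint once on application restart
save_checkpoint([[0.1, 0.2]], [1.5], seed=42)
X, y = load_checkpoint()
```

The retraining cost mentioned above is paid inside the strategy after `load_checkpoint` returns; only the raw data and seed survive the crash.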
Thanks!
Hi there. Currently, you can serialize your strategy to JSON, including your data, and restart from there, with the drawbacks you mentioned. Anything more efficient than that is currently up to the user. Note that ENTMOOT, for instance, does not use BoTorch models.