
general uncertainty/optimization wrapper for any model #69

Open
bertiqwerty opened this issue Jan 23, 2023 · 5 comments

@bertiqwerty
Contributor

Distance-based uncertainty combined with random sampling for optimization is used in basf/mbo's Random Forest implementation. The same approach can be applied to any model that lacks uncertainty predictions and might otherwise be difficult to optimize.
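For illustration, a minimal sketch of that distance-based uncertainty (hypothetical helper, not the basf/mbo code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def distance_uncertainty(X_query: np.ndarray, X_train: np.ndarray) -> np.ndarray:
    """Proxy uncertainty: Euclidean distance to the nearest training point.

    Candidates far from all observed data score high; candidates on top of
    the training data score (near) zero.
    """
    return cdist(X_query, X_train).min(axis=1)
```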

@jduerholt
Contributor

When doing so, please consider this branch: https://github.com/experimental-design/bofire/tree/feature/random_forest and this PR in botorch: pytorch/botorch#1636. I hope the botorch PR is finished before the hackathon, but no guarantees. Once merged, that PR will also enable deep NN ensembles.

@R-M-Lee
Contributor

R-M-Lee commented Feb 2, 2023

Will first implement a random forest model, and then a strategy that uses it along with random sampling for the optimization. Generalize to arbitrary models in the next step.
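As a rough illustration, one common way to get a predictive mean and spread out of a random forest surrogate is the per-tree spread (a sketch assuming scikit-learn; not necessarily what was implemented here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_mean_std(model: RandomForestRegressor, X: np.ndarray):
    """Mean and spread of the per-tree predictions of a fitted forest."""
    # stack per-tree predictions, shape (n_trees, n_samples)
    preds = np.stack([tree.predict(X) for tree in model.estimators_])
    return preds.mean(axis=0), preds.std(axis=0)
```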

@R-M-Lee changed the title from "Starting with Random Forest a general uncertainty/optimization wrapper for any model" to "general uncertainty/optimization wrapper for any model" on Mar 24, 2023
@R-M-Lee
Contributor

R-M-Lee commented Mar 24, 2023

@jduerholt this has been lying around too long and is getting stale, so I want to get it done. What we did at the hackathon needs heavy modification due to changes in main. I want to do this myself, but a few pointers from you would be great. What do you think of the following:

  • Make a new strategy inheriting from PredictiveStrategy. Call it "RandomBruteForce" or similar
  • The strategy will take a surrogate as input (anything with .fit and .predict is OK, no prediction uncertainty needed)
  • Uncertainty is calculated by the strategy using the distance from the closest point in the training data
  • Optimization of the acqf is done by sampling a ton of random points and choosing the best
  • Multi-objective handled via Chebyshev scalarization

So the main points would be implementing min_distance, uncertainty, and ask, for which the code is essentially already there; a rough sketch of those three pieces is below.
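Something like this (names and signatures are hypothetical, not the eventual BoFire API):

```python
import numpy as np
from scipy.spatial.distance import cdist

def min_distance(X_query, X_train):
    """Distance from each candidate to its closest training point."""
    return cdist(X_query, X_train).min(axis=1)

def uncertainty(X_query, X_train, scale=1.0):
    """Distance-based stand-in for a predictive standard deviation."""
    return scale * min_distance(X_query, X_train)

def ask(surrogate, X_train, sample_candidates, n_candidates=10_000, beta=2.0, n_ask=1):
    """Brute-force acqf optimization: sample random candidates, keep the best.

    surrogate:         anything with .predict; no native uncertainty needed
    sample_candidates: callable n -> (n, d) array of random points in the design space
    """
    X_cand = sample_candidates(n_candidates)
    mean = surrogate.predict(X_cand)    # for multiple objectives, scalarize the
    std = uncertainty(X_cand, X_train)  # predictions first (e.g. Chebyshev)
    ucb = mean + beta * std             # fixed UCB acqf (maximization)
    best = np.argsort(ucb)[-n_ask:]
    return X_cand[best]
```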

How do you see this? Am I missing something, and what steps would you take instead?

@jduerholt
Contributor

Sounds fine to me! How would the uncertainty enter the acqf?

@R-M-Lee
Contributor

R-M-Lee commented Mar 27, 2023

To start with, we will just have UCB fixed for this strategy. But I guess there's no reason to limit it: any callable that takes a mean and a stdev and outputs an acqf value, with no gradients required, would be fine.
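For illustration, such gradient-free acquisition callables could be as simple as the following (a sketch; expected improvement is just one example of not limiting it to UCB):

```python
import numpy as np
from scipy.stats import norm

# any callable (mean, std) -> acqf value would do; no gradients required
def ucb(mean, std, beta=2.0):
    return mean + beta * std

def expected_improvement(mean, std, best_f, xi=0.0):
    std = np.clip(std, 1e-12, None)  # guard against zero spread
    z = (mean - best_f - xi) / std
    return (mean - best_f - xi) * norm.cdf(z) + std * norm.pdf(z)
```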
