Batched Optimization For Mixed Optimization #2895
Conversation
This pull request was exported from Phabricator. Differential Revision: D76517454
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff            @@
##              main     #2895   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files          212       212
  Lines        19794     19819     +25
=========================================
+ Hits         19794     19819     +25
Summary: So far, our optimization in mixed search spaces handles each restart separately and sequentially instead of batching them. Here, we change this to batch the restarts, based on the new L-BFGS-B implementation that supports this. This speeds up optimization in mixed search spaces substantially (roughly 3-4x, depending on the problem). Differential Revision: D76517454
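The batched-restart idea can be illustrated with a small, self-contained sketch (this is not the actual BoTorch implementation): because restarts are independent, summing their objectives yields a gradient that decouples across restarts, so a single optimizer call can advance all of them at once instead of looping over them. A toy quadratic stands in for the acquisition function here, and `torch.optim.LBFGS` stands in for the new batched L-BFGS-B (which, unlike this sketch, maintains per-restart optimizer state and supports box constraints).

```python
import torch

def f(X):
    # Hypothetical stand-in for an acquisition function, evaluated
    # batch-wise: X has shape (num_restarts, d), output shape (num_restarts,).
    return ((X - 2.0) ** 2).sum(dim=-1)

num_restarts, d = 8, 3
X = torch.randn(num_restarts, d, requires_grad=True)

# One optimizer drives all restarts at once. Summing the per-restart
# losses is valid because the gradient of the sum decouples: each row
# of X only affects its own term.
opt = torch.optim.LBFGS([X], max_iter=50, line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    loss = f(X).sum()
    loss.backward()
    return loss

opt.step(closure)

# Pick the best restart as the final candidate.
best = f(X).argmin()
candidate = X.detach()[best]  # close to [2, 2, 2] for this toy objective
```

The speedup in the PR comes from replacing `num_restarts` sequential optimizer runs with one batched run, so fixed per-call overhead and model evaluations are amortized across restarts.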
Reviewed By: esantorella
This pull request has been merged in 0eea0b7.