Added new generic optimization API #1028
Merged
This PR adds a new generic optimization API for building massively parallel (particle-based) optimizers. It includes a set of classes for building custom optimizers that share a unified, problem-focused API surface.
The optimization classes support vectorized IO and computation operations using the type parameters `TNumericType` and `TElementType`. `TElementType` refers to the underlying scalar type, like `float`. `TNumericType` has to implement the `IVectorizedType` interface to support combined vectorized types like `Float32x4`.
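For illustration, the intended pairing looks roughly like this. This is a conceptual sketch only: the stand-in declarations below exist to make the snippet self-contained and are not the definitions added by this PR.

```csharp
// Conceptual sketch of the type-parameter pairing described above.
// IVectorizedType and Float32x4 are stand-ins here; the real definitions
// (including their members) are part of this PR and may differ.
public interface IVectorizedType { }

// A vectorized numeric type (TNumericType) bundling four lanes of its
// scalar element type (TElementType = float).
public readonly struct Float32x4 : IVectorizedType
{
    public readonly float X, Y, Z, W;

    public Float32x4(float x, float y, float z, float w)
    {
        X = x; Y = y; Z = z; W = w;
    }
}
```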
Users can implement their own objective function by implementing the `IOptimizationFunction` interface, as shown below:
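The following is a minimal sketch of such an objective function. The real `IOptimizationFunction` interface added by this PR may have a richer signature; the simplified stand-in types below are declared inline only to keep the example self-contained.

```csharp
// Stand-in declarations so the sketch is self-contained; the real Float32x2 type
// and the real IOptimizationFunction interface are defined by this PR and may
// differ in shape.
public readonly struct Float32x2
{
    public readonly float X, Y;
    public Float32x2(float x, float y) { X = x; Y = y; }
}

// Assumed simplified shape of the objective-function contract.
public interface IOptimizationFunction<TNumericType, TElementType>
    where TNumericType : unmanaged
    where TElementType : unmanaged
{
    // Evaluates the objective at a particle position and returns its fitness.
    TElementType Evaluate(TNumericType position);
}

// Example objective specialized for Float32x2: the squared distance to the
// point (1, -2). Smaller values are better; an optimizer searches for the minimum.
public readonly struct SquaredDistanceObjective :
    IOptimizationFunction<Float32x2, float>
{
    public float Evaluate(Float32x2 position)
    {
        float dx = position.X - 1.0f;
        float dy = position.Y + 2.0f;
        return dx * dx + dy * dy;
    }
}
```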
Note that this code uses a specialized implementation operating on `Float32x2` vectors.

The high-level idea is to derive your custom optimizer from `OptimizationEngine<...>`. Such an engine is meant to manage all buffers and most of the reusable kernels. An engine is then used to create a specialized `Optimizer` instance that takes a given objective function into account. This design allows the same memory allocations to be reused with different objective functions. A sample use case of an imaginary optimization engine is shown here:
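In the sketch below, the engine type `MyOptimizationEngine`, its constructor parameters, and the `CreateOptimizer`/`Optimize` members are illustrative placeholders rather than the exact API added by this PR; only the `Context`, `Accelerator`, and stream setup uses standard ILGPU calls. `SquaredDistanceObjective` refers to the earlier sketch, and `AnotherObjective` is a hypothetical second objective.

```csharp
using ILGPU;
using ILGPU.Runtime;

// Standard ILGPU setup.
using var context = Context.CreateDefault();
using var accelerator = context
    .GetPreferredDevice(preferCPU: false)
    .CreateAccelerator(context);
using var stream = accelerator.CreateStream();

// Hypothetical engine: owns all particle buffers and reusable kernels
// for a fixed problem size (names and parameters are placeholders).
using var engine = new MyOptimizationEngine<Float32x2, float>(
    accelerator,
    numParticles: 8192,
    numDimensions: 2);

// Specialize the engine for a concrete objective function.
using var optimizer = engine.CreateOptimizer(stream, new SquaredDistanceObjective());
var result = optimizer.Optimize(stream, numIterations: 100);

// Reuse the same engine (and its allocations) with a different objective.
using var optimizer2 = engine.CreateOptimizer(stream, new AnotherObjective());
var result2 = optimizer2.Optimize(stream, numIterations: 100);
```

The point of the pattern is the split shown above: the engine is created once per problem size and accelerator, while cheap, specialized optimizer instances are created per objective function.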
Note that this PR requires PR #1022, PR #1023, and PR #1027.