Machinery for packing sparse variables #692
Conversation
@forrestglines: I think this is ready for your re-review at this point. If you are ok with it, go ahead and merge. I also added issue #697 to add documentation for sparse packs.

Is this ready to merge? I think it is. Do we need to do something to get that "ci/gitlab/gitlab.com" check to go through?

Yeah, I think it should automatically merge once the GitLab CI runs, but that CI is broken right now for everything. There is a way to force a merge, I think, but I am not sure how that interacts with GitHub.
As I've been testing out using this pack for face- and edge-centered fields, I've realized that the index order for

In my head, the following organization seems more intuitive

so that the first two indices are to index the variable and the last four are to index the location in space.

I am not sure what the best way to order this is. I chose the current ordering for regular variables to be the same as

I agree, that sounds correct. However, that will change with face- and edge-centered fields. I considered adding another

I suspect that too. I think the
PR Summary

This pull request is for a new class `SparsePack` that packs sparse data across blocks in a way that eliminates the need to check the allocation status of each variable. This duplicates some of the functionality of the other types of packs already in Parthenon and in one of @jdolence's branches, but has the following nice features:

- An `unordered_map<string, SparseVarIdx>` and a type-based method for accessing variables within a kernel (see the `test_sparse_pack.cpp` unit test for examples).
- `SparsePack`s can be created from `MeshData` or `MeshBlockData`.
- A `SparsePackCache` type to internally handle caching. Aside from a `SparsePackCache` member variable in `MeshData` and `MeshBlockData` (and an associated `GetSparsePackCache()` method), methods relating to `SparsePack` are not included in `MeshData` and `MeshBlockData`.
- A `ParArray3D<ParArray3D<Real>>` with the first index distinguishing between the variable and its fluxes, the second the block index, and the third the variable index on the block. Since the number of variables differs between blocks when things are sparsely allocated, the size of the last dimension is the maximum number of variables allocated on any block. This means there is sometimes space in the array that is unused, but I doubt this will be a major issue unless it creates too large of a `deep_copy`.
Currently, I have changed `FluxDivergence(MeshData<Real>*, MeshData<Real>*)` to use `SparsePack`s, just to test whether this breaks anything. Not surprisingly, the exact choice of loop pattern substantially impacts performance. The chosen loop pattern seems to do as well as the old one on the `regression_test:advection_performance` test, but maybe there is a better way. Aside from this, the PR should not impact any other aspect of the code, and the change to `FluxDivergence` could be reverted.

I also removed the `<regex>` header from the list of unapproved headers in `cpplint.py`. As far as I am aware, there isn't a good reason for having it there (see google/styleguide#194). That said, a cursory glance around the internet suggested that `boost::regex` might be superior performance-wise.

PR Checklist