Parallelized triangulated meshes in DVGeometryMulti #224
Codecov Report
@@            Coverage Diff             @@
##             main     #224      +/-   ##
==========================================
+ Coverage   64.78%   65.02%   +0.24%
==========================================
  Files          47       47
  Lines       12073    12106      +33
==========================================
+ Hits         7821     7872      +51
+ Misses       4252     4234      -18
great stuff
Very nice
Purpose
This PR parallelizes the triangulated meshes in DVGeometryMulti by splitting up the mesh points per processor and embedding a subset of the points on each processor. All triangulated mesh points are still stored on all processors because they are needed for pySurf operations. However, the parallelized embedding vastly improves memory usage during derivative computation for large cases and speeds up the embedding step.
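For illustration only, here is a minimal sketch of the splitting idea described above, assuming an mpi4py communicator and numpy's `array_split`. The variable names are hypothetical and this is not the actual DVGeometryMulti implementation:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank
nProc = comm.size

# All processors keep the full triangulated surface points,
# since the complete point set is needed for pySurf operations
allPoints = np.random.rand(1000, 3)  # placeholder for the real mesh points

# Split the point indices roughly evenly across processors
localIndices = np.array_split(np.arange(allPoints.shape[0]), nProc)[rank]
localPoints = allPoints[localIndices]

# Only localPoints would be embedded in the FFD on this processor, so the
# memory needed for embedding data and derivative seeds scales with
# nPoints / nProc instead of nPoints
```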
I also fixed a minor parallelization bug when using `excludeSurfaces`. I ran into this while writing a parallelized version of the DVGeometryMulti test, so I included the fix in this PR.

Expected time until merged
2-3 weeks
Type of change
Testing
I parallelized the main DVGeometryMulti test and parameterized it to run on 1 and 3 processors. The coordinates and derivatives match between the serial and parallel cases. The reference file changed because I switched from `par_add_val` to `root_add_val`, but the values are identical.

In addition, to test performance on a realistic case, I ran the DLR-F6 adjoint computation and recorded the memory usage for the `DVGeo.totalSensitivity` call. The memory usage scales much better with the parallelized triangulated meshes.
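For reference, below is a rough sketch of how a test parameterized over processor counts can be structured, assuming testflo's `N_PROCS` convention, the `parameterized` package, and the `BaseRegTest` handler from `baseclasses`. The class name, reference file, and values are hypothetical; this is not the test added in this PR:

```python
import unittest
import numpy as np
from parameterized import parameterized_class
from baseclasses import BaseRegTest


# Run the same test on 1 and 3 processors (testflo reads the N_PROCS attribute)
@parameterized_class([{"N_PROCS": 1}, {"N_PROCS": 3}])
class TestDVGeometryMultiParallel(unittest.TestCase):
    N_PROCS = 1

    def test_derivatives(self):
        # Hypothetical reference file name
        with BaseRegTest("ref/test_dvgeometrymulti.ref", train=False) as handler:
            # ... set up DVGeometryMulti, add points, compute sensitivities ...
            derivs = np.ones(5)  # placeholder for the computed derivatives
            # Store the value from the root processor only, so the reference
            # file is the same for the serial and parallel runs
            handler.root_add_val("derivatives", derivs)
```

As I understand it, `root_add_val` writes only the root processor's value to the reference file, so the same reference works for any processor count, whereas `par_add_val` records per-processor values, which is why switching between the two changes the reference file.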
Checklist
- I have run `flake8` and `black` to make sure the Python code adheres to PEP-8 and is consistently formatted
- I have formatted Fortran code with `fprettify` or C/C++ code with `clang-format` as applicable