
Improve performance of Arrhenius rate evaluations #1217

Draft
speth wants to merge 3 commits into main
Conversation

speth (Member) commented Mar 14, 2022

Changes proposed in this pull request

  • Use Eigen to vectorize rate evaluations
  • Only do calculations for rates that are actually temperature-dependent
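
For illustration only, here is a minimal sketch of what a vectorized Arrhenius evaluation along these lines could look like. The function and array names (evalArrheniusBatch, m_A, m_b, m_Ea_R, kf) are placeholders for this sketch, not the actual Cantera data members.

#include <Eigen/Dense>
#include <cmath>

// Batch evaluation of kf[i] = A[i] * T^b[i] * exp(-Ea[i]/(R*T)) using
// element-wise Eigen array operations instead of a per-reaction loop.
void evalArrheniusBatch(double T,
                        const Eigen::ArrayXd& m_A,    // pre-exponential factors
                        const Eigen::ArrayXd& m_b,    // temperature exponents
                        const Eigen::ArrayXd& m_Ea_R, // activation energies / R
                        Eigen::ArrayXd& kf)           // output rate constants
{
    double logT = std::log(T);
    double recipT = 1.0 / T;
    // ln k = ln A + b * ln T - Ea/(R*T), evaluated for all reactions at once
    kf = (m_A.log() + m_b * logT - m_Ea_R * recipT).exp();
}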

If applicable, provide an example illustrating new features this pull request is introducing

Testing this on a couple of different machines shows a 7-9% speed increase for an adiabatic flame speed calculation using GRI 3.0 (using a modified version of adiabatic_flame.py), and about a 4% speed increase for the custom-reactions.py benchmark:

Before:

Average time of 100 simulation runs for 'gri30.yaml' (CH4)
- New framework (YAML): 59.38 ms (T_final=2736.60)
- One Python reaction: 60.01 ms (T_final=2736.60) ... +1.07%
- Old framework (XML): 59.07 ms (T_final=2736.60) ... -0.52%

After:

Average time of 100 simulation runs for 'gri30.yaml' (CH4)
- New framework (YAML): 57.02 ms (T_final=2736.60)
- One Python reaction: 57.67 ms (T_final=2736.60) ... +1.14%
- Old framework (XML): 59.04 ms (T_final=2736.60) ... +3.53%

The bulk of the benefit seems to come from skipping rate updates for rates that are constant (for reference, there are 100 such rates in the GRI 3.0 mechanism). I'm not quite sure why vectorizing the rate evaluation for the remaining reactions has so little impact.
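
As a rough sketch of this constant-rate shortcut (again with hypothetical names, not the actual MultiRate implementation), the idea is to write rates with b = 0 and Ea = 0 into the rate-constant vector once, and to revisit only the temperature-dependent entries on each update:

#include <cmath>
#include <cstddef>
#include <vector>

struct RateCoeffs { double A, b, Ea_R; };  // per-reaction Arrhenius coefficients

// One-time setup: constant rates are stored directly; indices of the
// temperature-dependent rates are collected for later updates.
void initRates(const std::vector<RateCoeffs>& coeffs,
               std::vector<size_t>& tempDependent,
               std::vector<double>& kf)
{
    for (size_t i = 0; i < coeffs.size(); i++) {
        if (coeffs[i].b == 0.0 && coeffs[i].Ea_R == 0.0) {
            kf[i] = coeffs[i].A;  // constant: set once, never recomputed
        } else {
            tempDependent.push_back(i);
        }
    }
}

// Per-update step: only the temperature-dependent rates are recomputed.
void updateRates(double T, const std::vector<RateCoeffs>& coeffs,
                 const std::vector<size_t>& tempDependent,
                 std::vector<double>& kf)
{
    double logT = std::log(T);
    double recipT = 1.0 / T;
    for (size_t i : tempDependent) {
        kf[i] = std::exp(std::log(coeffs[i].A) + coeffs[i].b * logT
                         - coeffs[i].Ea_R * recipT);
    }
}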

Checklist

  • The pull request includes a clear description of this code change
  • Commit messages have short titles and reference relevant issues
  • Build passes (scons build & scons test) and unit tests address code coverage
  • Style & formatting of contributed code follows contributing guidelines
  • The pull request is ready for review

- Use Eigen to vectorize rate evaluations
- Only do calculations for rates that are actually temperature-dependent
codecov bot commented Mar 14, 2022

Codecov Report

Merging #1217 (46b3f27) into main (719275e) will increase coverage by 0.02%.
The diff coverage is 85.00%.

@@            Coverage Diff             @@
##             main    #1217      +/-   ##
==========================================
+ Coverage   65.43%   65.46%   +0.02%     
==========================================
  Files         320      320              
  Lines       46249    46308      +59     
  Branches    19657    19687      +30     
==========================================
+ Hits        30265    30315      +50     
- Misses      13454    13459       +5     
- Partials     2530     2534       +4     
Impacted Files                              Coverage Δ
include/cantera/kinetics/Arrhenius.h        82.47% <ø> (ø)
include/cantera/kinetics/MultiRateBase.h    100.00% <ø> (ø)
src/kinetics/Arrhenius.cpp                  93.50% <83.63%> (-5.49%) ⬇️
include/cantera/kinetics/MultiRate.h        84.21% <100.00%> (ø)
src/kinetics/BulkKinetics.cpp               88.97% <100.00%> (+0.17%) ⬆️
src/kinetics/InterfaceKinetics.cpp          75.90% <100.00%> (+0.10%) ⬆️


ischoegl (Member) commented Mar 17, 2022

This is an interesting concept. Presumably, vectorization could be done for most of the other ReactionRate specializations as well? Fwiw, my recent tests on #1219 indicate that there should be gains whenever updateFromStruct can be avoided for individual rate evaluators. Given that we have a lot of features already accumulated in 2.6.0a4, I'm wondering whether to include this now or wait for 3.0.

As an aside, I'm surprised that your speed tests put the CTI/XML ahead of YAML. I currently have YAML with an edge of around 4% on main. It's likely compiler/hardware dependent, though.

ischoegl (Member) commented Jul 24, 2022

@speth … do you think that the speedup arises from vectorization via Eigen or from omitting calculations? If I understand correctly, only the former creates complications for the test you ran in the context of #1211 (comment)?

speth (Member, Author) commented Aug 26, 2022

> @speth … do you think that the speedup arises from vectorization via Eigen or from omitting calculations? If I understand correctly, only the former creates complications for the test you ran in the context of #1211 (comment)?

As I said in the initial PR description, the bulk of the benefit seems to come from skipping rate updates for rates that are constant.
