Improve caching of MF geometry across runs #38737
Conversation
+code-checks Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-38737/31051
A new Pull Request was created by @namapane (Nicola Amapane) for master. It involves the following packages:
@jpata, @cmsbuild, @clacaputo can you please review it and eventually sign? Thanks. cms-bot commands are listed here
@cmsbuild please test
+1 Summary: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-dd6324/26311/summary.html Comparison Summary
Hi @namapane, just for my education: how exotic are these cases? Do we have any occurrences in the past?
The scenario would be: an online or offline process running over several runs taken at different values of the magnet current, crossing the 2 T / 3 T working points. It's hard to think of a case where this may happen; it could perhaps be HLT online jobs for data taken during a magnet ramp, unless jobs are restarted from scratch when a new run starts, in which case this does not apply. In any case, the fix would have to be in DD4hep, to allow rebuilding a geometry within the same process. @civanch may be able to comment better.
@namapane, @clacaputo, I think there is no need to duplicate issue #38669.
+reconstruction
This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @perrotta, @dpiparo, @qliphy, @rappoccio (and backports should be raised in the release meeting by the corresponding L2)
+1
PR description:
Added a protection to check that the MF geometry cache introduced in #38640 is still valid.
Addresses item 1) in issue #38669.
Also added a test to simulate a job spanning several runs with different magnet currents, exercising the cache under different conditions. It can be run with:
MagneticField/Engine/test/regression.py producerType=fromDB_DD4hep current=-1
As expected, the cache works when switching among maps with the same geometry (3, 3.5 and 3.8 T; or 0 and 2 T).
When switching between 3 and 2 T, the geometry needs to be redone, which as expected leads to a crash in dd4hep.
This can only be fixed upstream; however, this is a very exotic use case (i.e. a job spanning runs taken while the magnet was ramping). In any case, the test can be used to keep track of this potential issue.