No "LHCInfoRcd" record found in the EventSetup.n (CTPPSProtonProducer/'ctppsProtons') #32340
assign dqm, alca
New categories assigned: dqm, alca. @jfernan2, @christopheralanwest, @andrius-k, @fioriNTU, @tlampen, @pohsun, @yuanchao, @tocheng, @kmaeshima, @ErnestaP you have been requested to review this Pull request/Issue and eventually sign? Thanks
A new Issue was created by @silviodonato Silvio Donato. @Dr15Jones, @dpiparo, @silviodonato, @smuzaffar, @makortel, @qliphy can you please review it and eventually sign/assign? Thanks. cms-bot commands are listed here
@mundim
@jan-kaspar @forthommel @nminafra @AndreaBellora @popovvp as CT-PPS DQM developers, can you please have a look?
There is no MC tag of record type LHCInfoRcd.
Yes, it looks like the record is supposed to be taken from the GT: https://github.com/cms-sw/cmssw/blob/master/CalibPPS/ESProducers/python/ctppsLHCInfo_cff.py#L3 (@jan-kaspar)
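For reference, once a suitable tag exists, the record can be picked up through the usual GT override mechanism. A minimal sketch, in which the tag name is a placeholder rather than an existing payload:

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("TEST")
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")

# Override/add the LHCInfo record on top of the GT content.
# The tag name below is a placeholder, not a real payload.
process.GlobalTag.toGet = cms.VPSet(
    cms.PSet(
        record = cms.string("LHCInfoRcd"),
        tag = cms.string("LHCInfo_mc_placeholder"),
    )
)
```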
unassign dqm
assign reconstruction

After having removed the DQM modules, we still get the error (actually, the problem looks to be a missing tag in the GT).
The previous discussion points to the content of the GT and the CalibPPS software. Why is this a reco issue?
@wpcarvalho @malbouis might be related to this topic.
unassign reconstruction
@christopheralanwest I see from #26394 that #26415 (@tocheng) added LHCInfo only for Run 1 and Run 2.
urgent

This issue is blocking the validation of CMSSW_11_2_0_pre10.
You can also reproduce the error in this way (in …).
Let me add some potentially relevant info. The LHCInfo is part of the conditions essential for PPS. Among others, the LHCInfo contains the LHC xangle (crossing angle), which influences many aspects of proton propagation from the IP to the PPS detectors (RPs). This info is important for both data and simulation.

For LHC data, the LHCInfo should be stored in the DB, as provided by the LHC. Let me emphasise that this info changes during every LHC fill, thus it is time dependent.

For simulation, in principle we could store the info in the DB, too. However, due to its time-dependent nature, this is somewhat difficult. We wish the MC conditions to be compatible with the LHC ones. In order to prepare the MC payloads accordingly, we would need to know the number of events/LS to be used in the MC simulation, and this is often not known or even variable. Therefore, we tend to prefer another option: an ES module which generates the conditions on the fly (based on fundamental ingredients which can be stored in the DB). This new ES module is now in PR #32207.
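For illustration, wiring such an on-the-fly source into a job configuration could look roughly like the sketch below. The module name comes from PR #28492 (quoted later in this thread); the parameter names are purely illustrative, and PR #32207 defines the actual interface:

```python
import FWCore.ParameterSet.Config as cms

# Hypothetical sketch: an ES source that generates LHCInfo on the fly.
# Parameter names are illustrative only (see PR #32207 for the real interface).
ctppsLHCInfoRandomXangleESSource = cms.ESSource("CTPPSLHCInfoRandomXangleESSource",
    # re-generate the conditions every N lumisections (illustrative name)
    generateEveryNLumisections = cms.uint32(1),
    # xangle distribution extracted from LHC data (illustrative names)
    xangleHistogramFile = cms.string("xangle_distribution.root"),
    xangleHistogramObject = cms.string("h_xangle"),
)
```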
@cms-sw/alca-l2 do you have a rough estimate of the timescale for the new global tag? I would like to start the RelVals before the weekend. I prepared #32346 in case it is not possible to have the global tag in time.
@jan-kaspar and @fabferro agreed to remove PPS from Run-3 reco (#32207 (comment)). So this issue is temporarily solved by #32346 and #32352.
Why is this a problem only for Run 3 workflows? There is no …
Hi @christopheralanwest, we are implementing a different way to get the optics information, in order to have as accurate a representation of the real optics as possible. @jan-kaspar can give more detailed information... thanks
I'm not sure I understand the arguments. MC (so far, at least) has only one IOV; there is no time dependence.
This is exactly what is difficult for PPS, because in reality (at the LHC) the conditions do vary. If we wish the simulation to be realistic, we need to split the MC data into chunks and for each chunk use a different set of conditions (both for simulation and reco), just as in LHC data one chunk was acquired with some value of the xangle and another chunk with another xangle.

The proposed solution (in a simplified manner) to fulfil our needs (varying conditions) within the existing constraints (single IOV) is to extract from LHC data the distribution of the relevant parameters (e.g. xangle) and store it in the DB (a single IOV is sufficient). Then we introduce an ES module which, every given number of lumisections, generates a random xangle according to the distribution extracted from data. With a sufficient number of xangle samples, the simulation will be done with a reasonably similar xangle distribution. Is our idea any clearer now?
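To make the sampling idea concrete, here is a minimal plain-Python sketch; the xangle values and weights are invented for illustration, not extracted from LHC data:

```python
import random

# xangle values (urad) and their relative frequencies, as would be
# extracted from LHC data; the numbers here are invented for illustration
xangle_values  = [120.0, 130.0, 140.0, 150.0]
xangle_weights = [0.15, 0.35, 0.35, 0.15]

LS_PER_CHUNK = 10  # draw a new xangle every N lumisections

def xangle_for_lumisection(ls: int) -> float:
    """xangle used for a given lumisection: constant within a chunk of
    LS_PER_CHUNK lumisections, redrawn at each chunk boundary."""
    chunk = ls // LS_PER_CHUNK
    rng = random.Random(chunk)  # seeded per chunk, so the draw is reproducible
    return rng.choices(xangle_values, weights=xangle_weights, k=1)[0]

# with enough chunks, the sampled distribution approaches the input one
samples = [xangle_for_lumisection(ls) for ls in range(0, 100000, LS_PER_CHUNK)]
```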
Is this another use case to converge on an IOV-based MC?
No, not really. I'm not sure the situation is that much different from anything else in CMS; conditions vary for all detectors. ECAL has perhaps the most significant variation of response vs. time (every fill), and we are still OK (not perfect, and we can do better) with MC having just one payload. Indeed, a run/IOV-based MC strategy would improve the agreement with data, but I do not see a conceptual difference wrt other detectors.
Conceptually, I can imagine the situation is similar for every sub-detector. What may be different is the size of the variations: for PPS, different xangles can mean a sizeable difference in acceptance, for instance. AFAIK, we in PPS don't think that a single set of conditions is sufficient; I've asked the Proton POG conveners to support this (personal) statement.
As Jan Kaspar already mentioned, we need some sort of dynamically generated conditions. As an example, the crossing angle changes continually during a fill (in steps of 1 µrad or so). In the simulation, the crossing angle affects where a forward proton will end up in the detectors downstream.
Considering that the cost of running ctppsProtons is fairly small, would it still be useful to have low, middle, and high points (present in the GT with different labels, or via a derived ES producer) and consistently produce 3 variants of protons (see the sketch below)?
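In config terms, the suggestion would amount to cloning the producer once per working point, along these lines; the `lhcInfoLabel` parameter is assumed here as the handle that selects the conditions variant, and the actual interface may differ:

```python
import FWCore.ParameterSet.Config as cms
from RecoCTPPS.ProtonReconstruction.ctppsProtons_cff import ctppsProtons

# Sketch: three proton-reconstruction variants, one per xangle working point.
# The lhcInfoLabel values are assumed labels for GT entries, not existing ones.
ctppsProtonsLowXangle  = ctppsProtons.clone(lhcInfoLabel = "xangleLow")
ctppsProtonsMidXangle  = ctppsProtons.clone(lhcInfoLabel = "xangleMid")
ctppsProtonsHighXangle = ctppsProtons.clone(lhcInfoLabel = "xangleHigh")
```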
We are open to suggestions, but I still do not see how we can have a representative MC simulation with a small number of working points. This is why we went in the direction of random conditions.
I disagree with the analogy: pileup is intrinsically different event by event.

Yes, correct, but the crossing angle is still rapidly varying.
As far as I know, there is one lumi section per (GEN-SIM?) job in production, and the frequency of lumi section transitions is not independently configurable. There is ongoing work to develop run-dependent MC according to the recommendations of the Time Dependent MC Working Group, which uses a similar method of generating run dependence based on lumi sections. An example of the implementation of time-dependent conditions can be found in PR #28214. @Dr15Jones can provide additional information about the time-dependent MC implementation.

That said, I don't understand why you have chosen an implementation based on lumi sections rather than random distributions of the relevant quantities. For run-dependent MC, the primary difficulty with random sampling is that one needs the conditions with which the pileup distribution is generated to match those used in the simulation of the rest of the event. Is that relevant here?

I suggest that we have a meeting that includes all relevant groups. We can use the AlCaDB meeting on Monday at 16:00 for this purpose. Would that work for everyone?
Maybe it's possible to illustrate the physics impact of using an average condition vs. something more complex? This is the same situation as for all detectors, FWIW.
Hi everyone. Can we postpone this discussion until after a meeting already booked between the PPS people involved, please? There are some aspects that still need internal discussion.
I would call this "metaconditions", and I think it could give the result that PPS needs within the constraints that the simulation conditions have.
@christopheralanwest Many thanks for the detailed information, and apologies for the silence: yesterday we had a discussion within PPS on how to continue. We decided to have two lines of action: …
We appreciate your invitation for discussion. Next Monday (7 Dec) seems a bit too tight; what about the following one (14 Dec)?

A quick answer to your questions. Currently, we have all conditions data in the EventSetup. AFAIK, CMSSW only allows updating ES data at LS boundaries; that's why we made this choice. We have checked that typical CMS simulations have enough LSs to reasonably sample our condition distributions.

Also, thanks for pointing out the possible correlation with PU. Indeed, the LHC introduces a PU correlation with the xangle: both decrease with time, the PU due to burn-off, the xangle due to the choice of the lumi-levelling scheme. We think it is interesting to include this effect in our investigations.
Just another comment on top of @jan-kaspar's. We have agreed upon a strategy to provide a DB tag to be included in the GT for the simulation, with the desired conditions AND following the current convention. No new code will be needed from the full-simulation side, apart from an update in a config file. We hope to have this in place soon, but it will likely take a couple of weeks.
As reported by @cms-sw/pdmv-l2, many HIN and Run-3 workflows are crashing after ~130 events.
https://cms-unified.web.cern.ch/cms-unified/report/mmeena_RVCMSSW_11_2_0_pre10TTbar_14TeV__rsb_201129_121403_983
https://cms-unified.web.cern.ch/cms-unified/report/mmeena_RVCMSSW_11_2_0_pre10QCD_Pt_80_120_14_HI_2021_PU__rsb_201129_122841_2740
You can easily reproduce the error by copying
/afs/cern.ch/work/s/sdonato/public/debug_PPS/
into your folder and running cmsRun PSet.py
(I selected a single event causing the crash).

LHCInfoRcd should be produced by CTPPSLHCInfoRandomXangleESSource (see https://github.com/cms-sw/cmssw/pull/28492/files#diff-d435950ce350dde1efbc324448a77f75894e0f7027a444503d500a4a93827ee1R30).
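If it helps with local debugging, restricting a config to a single problematic event is a standard trick; a sketch, where the input file and the run:lumi:event numbers are placeholders rather than the actual crashing event:

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("DEBUG")
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("file:input.root"),  # placeholder input
    # select one specific event by run:lumi:event (placeholder numbers)
    eventsToProcess = cms.untracked.VEventRange("1:123:456"),
)
process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(1))
```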
PPS was added to DIGI by #32003