forward-porting DNN-related from branch cms-tau-pog:CMSSW_9_4_X_tau_pog_DNNTauIDs #107
Conversation
…es quantized - Added a new parameter 'version' to runTauIdMVA, used by DPFIsolation - Changes to DeepTauId to reduce memory consumption
…read and reduce the memory consumption - Creation of the class DeepTauCache in DeepTauBase, in which the graph and session are now created - Implementation of two new static methods inside the class DeepTauBase: initializeGlobalCache and globalEndJob. The graph and the DeepTauCache object are now created inside initializeGlobalCache
The TauWPThreshold class parses the WP cut string (or value) provided in the python configuration. It is needed because using the standard StringObjectFunction class to parse complex expressions results in extensive memory usage (> 100 MB per expression).
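A minimal standalone sketch of the idea behind such a parser (not the actual CMSSW TauWPThreshold implementation; the function name and the example working-point map below are assumptions): the configuration string is first tried as a plain numeric cut, and only falls back to a named working point, so no general expression parser is needed.

```cpp
#include <cassert>
#include <cstdlib>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch: accept either a literal numeric threshold
// ("0.97") or a named working point ("Tight") without invoking a
// full expression parser such as StringObjectFunction.
double parseWPThreshold(const std::string& cut) {
    // Example working-point map; the real thresholds would come from
    // the training payload, these values are placeholders.
    static const std::map<std::string, double> namedWPs = {
        {"Loose", 0.5}, {"Medium", 0.8}, {"Tight", 0.95}};
    char* end = nullptr;
    const double value = std::strtod(cut.c_str(), &end);
    if (end != cut.c_str() && *end == '\0')
        return value;  // the whole string was a plain number
    const auto it = namedWPs.find(cut);
    if (it == namedWPs.end())
        throw std::invalid_argument("unknown working point: " + cut);
    return it->second;  // otherwise treat it as a named WP
}
```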
- Implementation of a global cache to avoid reloading the graph for each thread
- Creation of two new static methods inside the class DeepTauBase: initializeGlobalCache and globalEndJob. The graph and the DeepTauCache object are now created inside initializeGlobalCache. The memory consumption of initializeGlobalCache for the original files, the quantized files, and the files loaded with the memory-mapping method is reported in the memory_usage.pdf file
- Implemented the configuration to use the new quantized training files, and set them as the default
- Implemented the configuration for loading files with memory mapping. In our case it brought no memory-consumption improvement over the quantized files, so it is not used, but it is set up for future training files
- General code review and cleaning
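The global-cache pattern described above can be sketched in a self-contained way as follows (names suffixed with "Sketch" are placeholders, not the CMSSW classes; in the real module the cached object holds the TensorFlow graph and session): the expensive resource is built once per process in a static initializeGlobalCache, every module instance receives a read-only pointer to it, and a static globalEndJob runs once at the end.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Stand-in for deep_tau::DeepTauCache; in CMSSW it would own the
// TensorFlow graph and session loaded from the training file.
struct DeepTauCacheSketch {
    std::string graphName;
    explicit DeepTauCacheSketch(std::string name) : graphName(std::move(name)) {}
};

// Stand-in for a module deriving from DeepTauBase.
class DeepTauModuleSketch {
public:
    // Called by the framework once, before any module instance exists,
    // so the graph is loaded a single time per process rather than per
    // thread/stream.
    static std::unique_ptr<DeepTauCacheSketch> initializeGlobalCache(const std::string& graphFile) {
        return std::make_unique<DeepTauCacheSketch>(graphFile);
    }
    // Called by the framework once, after all instances are destroyed.
    static void globalEndJob(DeepTauCacheSketch* /*cache*/) {}

    explicit DeepTauModuleSketch(const DeepTauCacheSketch* cache) : cache_(cache) {}
    const std::string& graph() const { return cache_->graphName; }

private:
    const DeepTauCacheSketch* cache_;  // shared, read-only, never owned here
};
```

All module instances observe the same cached object, which is what removes the per-thread graph reload and its memory cost.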
…ad of the quantized
One comment I forgot to mention before: the sample I used for the test had global tag 103X_upgrade2018_realistic_v8 and was run over 100 events. I am also attaching the files with more information for DeepTau, DPFv0 and DeepTau+DPFv0 here: igprof-analyse-DeepTau_quantized.txt.gz. With the same configuration as before, I also performed CPU-usage tests with the original and the quantized files, for DeepTau, DPFv0 and DeepTau with DPFv0. The file with the summarized information about the CPU time of the method 'deep_tau::DeepTauBase::produce' is here: DeepTauCPU.txt.gz
Thank you! I'll take a look today or in the next few days.
I performed the CPU-usage tests again, but this time with 1000 events, to see if the time difference between the original and the quantized files decreased, and it indeed did, as can be seen here: The sample I used for this test was /store/mc/RunIIFall18MiniAOD/TTToHadronic_TuneCP5_13TeV-powheg-pythia8/MINIAODSIM/102X_upgrade2018_realistic_v12-v1/100000/BFA43300-C42C-8442-91B7-23DAD4599D00.root The files with the complete information, as mentioned in the previous comment, are these: DeepTauCPU.txt.gz
Tested; the results are the same as with the 94X version.
Changes forward-porting the DNN-related developments of PRs #105 and #106 from 94X to 104X.
Memory tests were performed and are attached here. The results are consistent with the ones obtained with the 94X release (#105).
memory_usage_.pdf