
Added training files with quantized weights #1

Merged 1 commit into cms-tau-pog:quantizedDNN on Oct 30, 2018

Conversation

@MRD2F commented Oct 29, 2018

Added training files with quantized weights

@mbluj commented Oct 29, 2018

Thank you. But please comment:
a) if the quantized training files are expected to replace the current ones,
b) if this requires changes in the code other than the obvious changes in configuration files and fillDescriptions methods,
c) the change of the DPF file names looks incorrect, namely the v0/v1.pb structure expected by the producer is missing, and 2016 became 2017,
d) are other changes, not related to the file quantization, expected in the code?

@MRD2F (Author) commented Oct 29, 2018

a) if the quantized training files are expected to replace the current ones,

In principle yes, but first it is necessary to review the performance of the new files.

b) if this requires changes in the code other than the obvious changes in configuration files and fillDescriptions methods,

I plan to add a version parameter to the DPF config to remove the dependency on the file name.
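
A minimal sketch of how such a version parameter could look in the producer configuration, assuming a producer named DPFIsolation; the file path and parameter names below are illustrative, not the actual cms-tau-pog interface:

```python
# Hypothetical CMSSW configuration fragment: pass the DPF version explicitly
# instead of parsing "v0"/"v1" out of the training-file name.
import FWCore.ParameterSet.Config as cms

dpfIsolation = cms.EDProducer("DPFIsolation",
    # placeholder path for the quantized training file
    graph_file = cms.FileInPath("RecoTauTag/TrainingFiles/data/DPFTauId/DPFIsolation_2016_v0_quantized.pb"),
    # explicit version parameter removes the dependency on the file name
    version    = cms.uint32(0),
)
```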

c) the change of the DPF file names looks incorrect, namely the v0/v1.pb structure expected by the producer is missing, and 2016 became 2017,

2017 is a typo; I'll correct it.

d) are other changes, not related to the file quantization, expected in the code?

Yes, I plan to change the DeepTauId producer to reduce its memory consumption.

@mbluj mbluj changed the base branch from master to quantizedDNN October 30, 2018 09:24
@mbluj mbluj merged commit 8da9b15 into cms-tau-pog:quantizedDNN Oct 30, 2018
@mbluj commented Oct 30, 2018

OK, thanks for the explanations. I am going to prepare a dedicated branch for the quantized training files and then accept the PR.

@mbluj commented Oct 30, 2018

Merged to the quantizedDNN branch. I am waiting for updates to the CMSSW code to test the new files.

@mbluj commented Oct 30, 2018

P.S. I checked and found that the original DPF files have "2017" in their names, so it makes sense to keep it in the names of the quantized ones as well. What about the "v0/v1" tag, which is expected just before ".pb" so that the version of the discriminant can be recognized correctly? Is it possible to detect the DPF version from the structure of the graph in the *.pb file?
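
In principle the version could be inferred by loading the GraphDef and looking for a structural feature that differs between v0 and v1; a minimal sketch, assuming TF 1.x APIs and a purely illustrative node-name heuristic:

```python
# Sketch: guess the DPF discriminant version from the graph structure
# instead of relying on a "v0"/"v1" tag in the file name.
import tensorflow as tf

def guess_dpf_version(pb_path):
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    node_names = {node.name for node in graph_def.node}
    # "v1_only_layer" is a placeholder: the real graphs would have to be
    # inspected to find a node or shape that distinguishes the two versions.
    return 1 if any("v1_only_layer" in name for name in node_names) else 0
```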

@kandrosov

@mbluj @ocolegrove After several attempts, I have not found a way to create quantized DPF files that work in 94X, because 94X uses an old TensorFlow version (1.3.0). In 10_4_X the quantized files work fine, because the TensorFlow version there is 1.6.0.
The quantized version of DeepTau will work in both 94X and 10_4_X, because it has a simpler network structure.

Any ideas or suggestions on how to proceed?

@mbluj commented Oct 31, 2018 via email

@kandrosov

What about the TF version associated with 102X?

1.6.0, so it should be compatible.
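
As a sanity check, the shipped TensorFlow version can be printed directly from the release environment; a minimal sketch, assuming cmsenv has been run for the release in question:

```python
# Print the TensorFlow version available in the current CMSSW environment.
import tensorflow as tf

print(tf.__version__)  # 1.3.0 in 94X, 1.6.0 in 102X/104X according to this thread
```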

I propose to focus on 104X for now (and on 102X if it contains a TF version that is new enough). If DeepTau/DPFTau are accepted for 104X, we will think about backports. Does that sound reasonable?

Ok, I agree. However, to avoid creating a mess, I propose that @MRD2F first commits the code in the 94X branch (with the quantized version disabled by default) and then forward-ports it to 10_4_X (the same procedure as was done before). Do you agree?

Just for the record: since the original version of DPF is compatible with 1.3.0, in theory it should be possible to produce a quantized graph that is also compatible with this version by using a standalone TensorFlow v1.3.0 installed from sources with the transform_graph tool compiled.
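
For reference, a minimal sketch of such a weight-quantization step; the Python TransformGraph wrapper shown here is available in newer TF 1.x releases (with a from-source 1.3.0 build one would instead use the bazel-built transform_graph CLI with the same quantize_weights transform), and the file and node names are placeholders:

```python
# Sketch: quantize the weights of a frozen graph with the graph_transforms tooling.
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

graph_def = tf.GraphDef()
with tf.gfile.GFile("DPFIsolation_2016_v0.pb", "rb") as f:  # placeholder input file
    graph_def.ParseFromString(f.read())

quantized_def = TransformGraph(
    graph_def,
    ["input"],             # placeholder input node name
    ["output"],            # placeholder output node name
    ["quantize_weights"],  # store weights as 8-bit, dequantized at run time
)

with tf.gfile.GFile("DPFIsolation_2016_v0_quantized.pb", "wb") as f:
    f.write(quantized_def.SerializeToString())
```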

@mbluj commented Oct 31, 2018 via email

@kandrosov

OK, it is good to know. If the integration with CMSSW 104X is successful, we will think about this option for 94X. I hope that a graph quantized with TF 1.3.0 will also be usable with 1.6.0, so that the training-file repository does not grow too much. Or do you think it makes sense to try the quantization with TF 1.3.0 already now? Would that be handier?

For sure it would be handier to have the final quantized files compatible with 1.3.0 and above straight away, but TensorFlow 1.3.0 is old and not compatible with recent versions of bazel (the build system used to compile TensorFlow from sources). So, unless someone already has it installed, the whole setup would have to be done from scratch to make it work, which will require some time...

@mbluj commented Oct 31, 2018 via email

mbluj pushed a commit that referenced this pull request Oct 13, 2021
Training files for DNN-based Tau-Ids