Summary of debugging WarpCTCLayer #3076

Closed
pkuyym opened this issue Jul 26, 2017 · 4 comments
Contributor

pkuyym commented Jul 26, 2017

I often encounter inf cost when training on Mandarin data with DeepSpeech2 (GPU version). It seems that WarpCTCLayer may have potential numerical problems, so @qingqing01 and I have been trying to figure out what leads to the inf. Since the inf doesn't appear regularly, we first save the two inputs of WarpCTCLayer using printValueString, then parse and load the saved context in the debugging phase. However, loading the exception context only increases the probability of inf, which means reliable reproduction is not guaranteed.

For the inf, we found two suspicious snippets.

seq2batchPadding

Please go to seq2batchPadding to see the details. We detect -inf in batchValue_ just before calling hl_warpctc_compute_loss, and seq2batchPadding is the only function in which batchValue_ is modified apart from resizeOrCreate, so we consider seq2batchPadding a suspect.

status: Fixed by #3105
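As a rough illustration of the suspected failure mode, here is a minimal sketch of converting variable-length sequences into a fixed-size padded batch when the buffer is resized but not cleared. This is not the actual PaddlePaddle implementation or the fix in #3105; seq2BatchPaddingSketch and its arguments are hypothetical names.

```cpp
// Minimal sketch of seq-to-padded-batch copying, assuming a buffer that is
// resized but not cleared (the suspected source of -inf in batchValue_).
#include <vector>

void seq2BatchPaddingSketch(const std::vector<std::vector<float>>& sequences,
                            std::vector<float>& batch,
                            size_t maxLen, size_t featureDim) {
    const size_t numSeq = sequences.size();
    // resize() keeps the old contents of a reused buffer, so padded slots
    // must be written explicitly or stale values (possibly -inf) survive.
    batch.resize(maxLen * numSeq * featureDim);

    for (size_t t = 0; t < maxLen; ++t) {
        for (size_t s = 0; s < numSeq; ++s) {
            float* dst = &batch[(t * numSeq + s) * featureDim];
            const size_t seqLen = sequences[s].size() / featureDim;
            if (t < seqLen) {
                const float* src = &sequences[s][t * featureDim];
                for (size_t d = 0; d < featureDim; ++d) dst[d] = src[d];
            } else {
                // Padding positions: fill with zero instead of leaving the
                // previous buffer contents untouched.
                for (size_t d = 0; d < featureDim; ++d) dst[d] = 0.0f;
            }
        }
    }
}
```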

compute_probs_kernel

We also dug into the warp-ctc kernel and found that compute_probs_kernel can produce 0 after the exponent operation (the exponent snippet is at ctc_helper::exponential), which leads to 0 values in probs_. Unfortunately, probs_ is then passed into compute_alpha_kernel, and the illegal operation log(0) is detected at line 167 and line 191 of compute_alpha_kernel. We also consider this a suspect.

status: Fixed by #5
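Here is a generic sketch of the log(0) issue and the epsilon guard mentioned above; the constant and variable names are made up and this is not the actual warp-ctc kernel code.

```cpp
// Sketch: a very negative activation underflows to 0 after exponentiation in
// single precision, and a later log(0) then yields -inf; a small epsilon
// clamp before the log keeps the result finite.
#include <cmath>
#include <cstdio>

int main() {
    // exp(-120) underflows to 0.0f in float, analogous to the zeros observed
    // in compute_probs_kernel after ctc_helper::exponential.
    float prob = std::exp(-120.0f);
    std::printf("prob = %g, log(prob) = %g\n", prob, std::log(prob));  // -inf

    // Guarded version: clamp with a small epsilon before taking the log.
    const float kEpsilon = 1e-30f;
    float safeProb = prob < kEpsilon ? kEpsilon : prob;
    std::printf("log(safeProb) = %g\n", std::log(safeProb));  // finite
    return 0;
}
```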

Besides, we also encounter a validation error; the details are listed below:

F0726 17:18:41.516580 15765 hl_warpctc_wrap.cc:131] Check failed: CTC_STATUS_SUCCESS == dynload::compute_ctc_loss(batchInput, batchGrad, cpuLabels, cpuLabelLengths, cpuInputLengths, numClasses, numSequences, cpuCosts, workspace, *options) (0 vs. 3) warp-ctc [version 2] Error: execution failed

This fatal exception is thrown from here. The reason hasn't been figured out yet.
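For reference, a minimal CPU-side sketch of how a wrapper can call warp-ctc and surface the status string rather than only CHECK-failing, assuming the standard warp-ctc C API from ctc.h; the toy input values are arbitrary and this is not PaddlePaddle's wrapper code.

```cpp
// Minimal sketch of a warp-ctc call with an explicit status check, assuming
// the standard warp-ctc C API (ctc.h from baidu-research/warp-ctc).
#include <cstdio>
#include <vector>
#include <ctc.h>

int main() {
    const int alphabet_size = 3;  // includes the blank label (index 0)
    const int minibatch = 1;
    const int T = 2;              // number of time steps

    // Pre-softmax activations, laid out as (T, minibatch, alphabet_size);
    // warp-ctc applies the softmax internally.
    std::vector<float> activations = {0.1f, 0.6f, 0.3f,
                                      0.2f, 0.5f, 0.3f};
    std::vector<int> flat_labels = {1};
    std::vector<int> label_lengths = {1};
    std::vector<int> input_lengths = {T};
    std::vector<float> costs(minibatch);

    ctcOptions options{};
    options.loc = CTC_CPU;
    options.num_threads = 1;
    options.blank_label = 0;

    size_t workspace_bytes = 0;
    ctcStatus_t status = get_workspace_size(label_lengths.data(),
                                            input_lengths.data(),
                                            alphabet_size, minibatch,
                                            options, &workspace_bytes);
    if (status != CTC_STATUS_SUCCESS) {
        std::fprintf(stderr, "get_workspace_size: %s\n",
                     ctcGetStatusString(status));
        return 1;
    }
    std::vector<char> workspace(workspace_bytes);

    // Passing nullptr for gradients skips the gradient computation.
    status = compute_ctc_loss(activations.data(), /*gradients=*/nullptr,
                              flat_labels.data(), label_lengths.data(),
                              input_lengths.data(), alphabet_size, minibatch,
                              costs.data(), workspace.data(), options);
    if (status != CTC_STATUS_SUCCESS) {
        // Status 3 corresponds to CTC_STATUS_EXECUTION_FAILED, the value
        // reported in the fatal log above.
        std::fprintf(stderr, "compute_ctc_loss: %s\n",
                     ctcGetStatusString(status));
        return 1;
    }
    std::printf("cost = %f\n", costs[0]);
    return 0;
}
```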

qingqing01 self-assigned this Jul 26, 2017

sancha commented Jul 26, 2017

Hi team, it seems like you have a pretty good idea of what could be happening. I will just add my two cents, on invitation from @wangkuiyi:

i) If the CPU version doesn't throw up similar errors, then it's almost certainly a bug that needs to be fixed.
ii) If not, then it's a modeling issue. Probabilities will almost never be a hard 0 during training; while we could clean it up by adding a small epsilon, it's likely that the model you are training is already broken. You want to try more gradient clipping, a lower learning rate, SortaGrad, etc., as tricks to avoid getting into these scenarios.
iii) It is also useful to do a debug forward prop through the entire model to see where the infs/NaNs first show up. Often warp-ctc just reports these things while other layers forward-prop silently (a sketch of such a check follows this comment).
Hope it helps!
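A minimal sketch of the debug forward-prop check from point (iii): scan each layer's output buffer for the first non-finite value so the layer that actually produces inf/NaN can be identified before warp-ctc reports it. findFirstNonFinite is a hypothetical helper, not a PaddlePaddle API.

```cpp
// Scan a float buffer for the first inf/NaN; intended to be called on each
// layer's output during a debug forward pass.
#include <cmath>
#include <cstdio>

// Returns the index of the first non-finite value in data[0..size), or -1.
long findFirstNonFinite(const float* data, long size) {
    for (long i = 0; i < size; ++i) {
        if (!std::isfinite(data[i])) return i;
    }
    return -1;
}

int main() {
    // Toy buffer standing in for one layer's forward output.
    const float output[] = {0.5f, 1.0f, -INFINITY, 2.0f};
    long bad = findFirstNonFinite(output, 4);
    if (bad >= 0) {
        std::fprintf(stderr, "non-finite value first appears at index %ld\n",
                     bad);
    }
    return 0;
}
```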

Contributor Author

pkuyym commented Jul 27, 2017

@sancha Thanks very much for your helpful analysis and suggestions. It's interesting that the inf problem only appears on Mandarin data; warp-ctc works well when we train on English data. To make it clearer, let me expand on your points:

i) In fact, the illegal log(0) also occurs in the CPU version, and we add a small epsilon here to avoid the interruption.
ii) We have tried several tricks to avoid the inf, including gradient clipping, a lower learning rate, SortaGrad, and smaller/larger batch sizes, but nothing works.
iii) We stored the input context of warp-ctc at the moment inf interrupted the training process and found no abnormal values like inf or NaN in it; when we reload that input context to reproduce the interruption, the inf loss sometimes still appears (not always). So we consider warp-ctc a suspect for the inf (a sketch of this dump-and-reload workflow follows below).
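A minimal sketch of the dump-and-reload workflow from point (iii), assuming the CTC input is available as a flat float buffer; the file name and the dump/load helpers are hypothetical, not part of PaddlePaddle.

```cpp
// Dump the CTC layer's raw float input to a binary file when training hits
// inf, then reload it later to replay the exact same batch in a debug run.
#include <cstdio>
#include <fstream>
#include <vector>

void dumpBuffer(const char* path, const std::vector<float>& buf) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(buf.data()),
              buf.size() * sizeof(float));
}

std::vector<float> loadBuffer(const char* path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::vector<float> buf(static_cast<size_t>(in.tellg()) / sizeof(float));
    in.seekg(0);
    in.read(reinterpret_cast<char*>(buf.data()), buf.size() * sizeof(float));
    return buf;
}

int main() {
    std::vector<float> ctcInput = {0.1f, 0.2f, 0.7f};  // stand-in for the real input
    dumpBuffer("ctc_input.bin", ctcInput);             // at the moment inf appears
    std::vector<float> replay = loadBuffer("ctc_input.bin");  // in the debug run
    std::printf("reloaded %zu values\n", replay.size());
    return 0;
}
```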


sancha commented Jul 27, 2017

Awesome, you have all your bases covered 👍 Happy to help if you need anything from me in the future.

@wanghaoshuang
Contributor

Closing this issue due to inactivity, feel free to reopen it.
