For versions >0.8.2 learning rate is zero for last epoch (potentially a logging bug) #2480
Comments
Hi! Thanks for your contribution! Great first issue!
It would be good to know whether this can be observed with the other loggers as well. Could you also run your example with TensorboardLogger?
Hey! I believe the problem lies in …
Update: When …
@HHousen You could do a workaround and set … in …
@szymonzareba Yep, setting …
@HHousen mind sending a PR?
Sure |
🐛 Bug
Version 0.8.2 and above changed the behavior of either my learning rate scheduler or the `WandbLogger` logger. I am using a linear warmup and decay scheduler. However, the learning rate graph produced by the `LearningRateLogger` has looked as shown below ever since version 0.8.2:

The period where the learning rate is zero corresponds to the last epoch of training, as you can see below:
This graph raises another issue. The first epoch appears to take twice as many steps as the second and third epochs. I specified `max_epochs=3`. During training, each epoch takes the same amount of time, so this seems like a logging issue.

Note that the above graphs are for a model that had its training stopped early, so the last epoch is slightly shorter than the second to last. This is not the issue.
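To make the anomaly concrete: with the same dataloader every epoch, each epoch should produce the same number of optimizer steps (batches divided by the accumulation factor), so no epoch should log roughly twice as many learning rate points as the others. Back-of-the-envelope, with a hypothetical batch count:

```python
# Rough check (hypothetical batch count, not Lightning internals):
batches_per_epoch = 2800
accumulate_grad_batches = 2

optimizer_steps_per_epoch = batches_per_epoch // accumulate_grad_batches
print(optimizer_steps_per_epoch)  # 1400 -- and identical for every epoch
```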
Both of these issues (the 0 learning rate and the twice-as-long epoch) do not exist in version 0.8.1, and both graphs look as they should.
These issues could be caused by the logger or they might actually occur and be logged correctly. I have looked through the changelog and I am guessing that these bugs are caused by "Changed epoch indexing from 0 instead of 1" (#2289). I also may be relying on the fact that epoch indexing started at 1 somewhere in my code, but I do not believe this to be the case.
To Reproduce
Reproducing this problem may be difficult since I can't provide the script and data I used. I used the `WandbLogger` logger and the `LearningRateLogger` callback. I trained with 1400 warmup steps and `accumulate_grad_batches` set to 2.

I can provide additional code samples or information that you may need.
Code sample
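I can't share my real script, but a stripped-down version of the setup looks roughly like this (hypothetical model with random data and placeholder sizes; the schedule is the one sketched above):

```python
# Minimal, self-contained sketch of my setup (random data, placeholder sizes).
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning.callbacks import LearningRateLogger
from pytorch_lightning.loggers import WandbLogger


class LitModel(pl.LightningModule):
    def __init__(self, warmup_steps=1400, total_steps=4200):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)
        self.warmup_steps = warmup_steps
        self.total_steps = total_steps

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        return {"loss": loss}

    def train_dataloader(self):
        data = TensorDataset(torch.randn(512, 32), torch.randn(512, 1))
        return DataLoader(data, batch_size=8)

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=1e-3)

        def lr_lambda(step):
            # Linear warmup followed by linear decay, stepped per optimizer step.
            if step < self.warmup_steps:
                return step / max(1, self.warmup_steps)
            return max(
                0.0,
                (self.total_steps - step) / max(1, self.total_steps - self.warmup_steps),
            )

        scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
        return [optimizer], [{"scheduler": scheduler, "interval": "step"}]


trainer = pl.Trainer(
    logger=WandbLogger(project="lr-logging-debug"),  # placeholder project name
    callbacks=[LearningRateLogger()],
    max_epochs=3,
    accumulate_grad_batches=2,
)
trainer.fit(LitModel())
```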
Expected behavior
The learning rate should warm up and decay in versions greater than 0.8.2 the same way it does in versions less than 0.8.2. Each epoch should take the same number of steps.
The below graphs highlight the expected behavior. They are from a different model, so they are not directly comparable, but their shape is as expected since they were captured from a model trained with `pytorch_lightning` version 0.8.1.

Environment