Early Stopping behavior #1751
Comments
Hi! Thanks for your contribution, great first issue!
I would expect it to iterate for at least 80 epochs, too. So to me it looks like a bug or some kind of unexpected behavior. It would be great to figure it out!
OK then, I'll put together a notebook to see if I can reproduce it.
Thanks @mateuszpieniak
It is definitely a bug. I discovered that in the source the early stopping check is invoked a second time at the end of each epoch.
I upgraded to the bleeding-edge version yesterday and can confirm that this started happening to me too. I didn't have this issue before I upgraded (I think I was on 0.7.3 before?).
Yep, we ran into this as well. It is called once in the trainer and once in the on_epoch_end callback.
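To make the effect concrete, here is a minimal sketch (a hypothetical, simplified patience counter, not Lightning's actual implementation) of why running the same check twice per epoch roughly halves the effective patience:

```python
# A simplified, hypothetical patience counter (not Lightning's real code),
# used only to illustrate the effect of running the check twice per epoch.
class SimplePatience:
    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, metric):
        # Reset the counter on improvement, otherwise count a stagnant check.
        if metric < self.best:
            self.best = metric
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience


stopper = SimplePatience(patience=80)
for epoch in range(200):
    val_loss = 1.0  # pretend the metric never improves after epoch 0
    # Bug scenario: the same check runs twice in one epoch
    # (once in the trainer loop, once via the on_epoch_end callback).
    if stopper.should_stop(val_loss) or stopper.should_stop(val_loss):
        print(f"stopped at epoch {epoch}")  # ~epoch 40, not 80
        break
```

With patience=80 and a metric that stops improving, the duplicated check exhausts the counter after roughly 40 stagnant epochs, which would explain training stopping noticeably earlier than the configured patience suggests.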
@Borda Well, I would love to make my first PL PR, if that's okay? 😉
@mateuszpieniak sure, go ahead! 🚀
Hi there,
Thanks for the great library (I am using 0.7.5). I am not following the bug report template because I'm not sure whether this is actually a bug or I simply don't understand how early stopping is implemented. My code looks as follows:
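(The original snippet was not captured here; the sketch below is a hypothetical reconstruction of this kind of setup, assuming a `val_acc` metric key, `patience=80`, and the 0.7.x-era `early_stop_callback` Trainer argument.)

```python
# Hypothetical reconstruction of the setup described above; the metric key
# "val_acc" and the concrete Trainer options are assumptions, not the
# original code.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_acc",  # accuracy returned from validation_epoch_end
    patience=80,        # stop only after 80 epochs without improvement
    mode="max",         # higher accuracy is better
    verbose=True,
)

trainer = Trainer(
    early_stop_callback=early_stop,  # 0.7.x API; newer versions use callbacks=[...]
)
# trainer.fit(model)  # `model` would be the user's LightningModule
```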
As I understand it, the model should perform early stopping after AT LEAST 80 epochs have passed without improvement on the validation accuracy. However, in my case, early stopping happened at epoch 75. Is this how it should be?
As I said, I am not sure whether this is actually a bug or a design choice (perhaps early stopping is implemented at the batch level?). If it is indeed a bug, I will put together a reproducible example. Thank you!