Full callback handling #556
Comments
In Lightning you can do whatever you want in any hook: https://pytorch-lightning.readthedocs.io/en/latest/Trainer/hooks/
Indeed, I missed that one. I guess I would add a […]
Those are the ways! But if we need more hooks, feel free to submit a PR!
Thanks, I'll look into it!
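For reference, a minimal sketch of the hook-based approach mentioned above. The hook name used here (`on_epoch_start`) appears in the linked docs, but exact hook names vary across Lightning versions:

```python
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    # ... training_step, configure_optimizers, etc. go here ...

    def on_epoch_start(self):
        # Called by the Trainer at the start of each epoch; any of the
        # documented hooks can be overridden the same way.
        print(f"starting epoch {self.current_epoch}")
```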
Is your feature request related to a problem? Please describe.
I started deep learning with fastai, and despite all its drawbacks there is one thing I found very handy when I wanted to tweak the training loop: callbacks. This matters far less here, since we have control over `training_step`, but I was wondering, for instance, how I could do something when training begins (like log my model's graph). Maybe I'm missing something, but it seems to me that there is no simple way to do that. The fact that a `Callback` class is defined (used for early stopping and checkpointing) makes me think this was planned at some point but never fully integrated (or, again, I may be missing something).
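As a concrete illustration of the use case above, a hedged sketch of logging the model's graph when training begins. It assumes a TensorBoard logger is attached and that the module's `forward` accepts the sample tensor; the hook name and input shape are assumptions, not something stated in the issue:

```python
import torch
import pytorch_lightning as pl


class GraphLoggingModel(pl.LightningModule):
    def on_train_start(self):
        # Assumes the attached logger is a TensorBoardLogger, whose
        # `experiment` attribute is a torch.utils.tensorboard.SummaryWriter.
        # The (1, 28 * 28) input shape is a hypothetical example.
        sample_input = torch.randn(1, 28 * 28, device=self.device)
        self.logger.experiment.add_graph(self, sample_input)
```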
Describe the solution you'd like
Implement callback handling with a few methods (the ones defined in `Callback`) that are called at specific points during the training loop. Early stopping and checkpointing can then be provided as default callbacks and integrated into this new framework. The idea is to have a simple way to interact with the training loop at specific points.
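A rough sketch of what such callback handling could look like. The hook names below follow the classic callback pattern and are illustrative only, not the API that was eventually merged:

```python
class Callback:
    """Base class: subclasses override only the hooks they need."""

    def on_train_begin(self, trainer, model): ...
    def on_epoch_begin(self, trainer, model): ...
    def on_epoch_end(self, trainer, model): ...
    def on_train_end(self, trainer, model): ...


class Trainer:
    def __init__(self, callbacks=None):
        # Early stopping and checkpointing could be appended here as
        # default callbacks.
        self.callbacks = callbacks or []

    def _call(self, hook_name, model):
        # Dispatch a hook to every registered callback, in order.
        for cb in self.callbacks:
            getattr(cb, hook_name)(self, model)

    def fit(self, model, num_epochs=1):
        self._call("on_train_begin", model)
        for epoch in range(num_epochs):
            self._call("on_epoch_begin", model)
            # ... run one epoch of the actual training loop ...
            self._call("on_epoch_end", model)
        self._call("on_train_end", model)
```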
Describe alternatives you've considered
Leave things as they are and just add the callback methods directly to `pl.LightningModule` as optional methods to be implemented; they then only need to be called at the right time during the training loop. It doesn't change much compared to traditional callbacks, but it may be closer to the design of pytorch-lightning.

If you find any value in this idea, I could try to write a PR in the coming weeks.
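A sketch of this alternative, under the same caveats: the loop probes the model for optional hook methods (names hypothetical) and calls them only if they were implemented:

```python
def fit(model, num_epochs=1):
    def maybe_call(hook_name, *args):
        # Invoke the optional hook only if the user implemented it.
        hook = getattr(model, hook_name, None)
        if callable(hook):
            hook(*args)

    maybe_call("on_train_begin")
    for epoch in range(num_epochs):
        maybe_call("on_epoch_begin", epoch)
        # ... one epoch of training ...
        maybe_call("on_epoch_end", epoch)
    maybe_call("on_train_end")
```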