[feat] adding a README example for OSS #79
Conversation
Ping for a review, if you don't mind.
nice!
# Problem statement
model = myAwesomeModel()
dataloader = mySuperFastDataloader()
loss = myVeryRelevantLoss()
nice names!
README.md (Outdated)
    loss.backward()
    return loss

optimizer.step(closure)
Using a closure is not the most common way to step PyTorch optimizers, right? Perhaps this example could be simplified by not using one?
Depends on the person, but closures are (to my understanding) considered a little safer because the scope is tight. Scope in Python is leaky (for instance, something defined in a for loop leaks outside of it), so closures bring some sanity. They also keep the example compatible with optimizers that require multiple evaluations; both options (with and without closures) are in the PyTorch docs: https://pytorch.org/docs/stable/optim.html#taking-an-optimization-step
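For reference, a minimal sketch of both step styles with a plain SGD optimizer (the names, shapes and values below are illustrative, not taken from the PR):

import torch

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()
inputs, target = torch.randn(4, 8), torch.randn(4, 2)

# Style 1: plain step, the most common pattern
optimizer.zero_grad()
loss = loss_fn(model(inputs), target)
loss.backward()
optimizer.step()

# Style 2: closure, also supported and required by optimizers that
# re-evaluate the loss several times per step (e.g. LBFGS)
def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), target)
    loss.backward()
    return loss

optimizer.step(closure)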
Completely agree. I was just pointing out that it is not the most common way people do it (AFAIK), perhaps because most programmers are not very used to it. For an initial example, keeping it simple might be better. Of course, if you want to use this opportunity to advocate the closure style, that is fine too; in that case, perhaps add some comments in the code to explain what's going on with the closure and why it is better?
Follow-up: closure removed!
I think it's an orthogonal subject to ZeRO, so probably best not to mix things up. Fine by me as it is, no worries; I'll keep using closures for my own code though :)
model.zero_grad()
outputs = model(batch["inputs"])
loss = loss_fn(outputs, batch["label"])
torch.distributed.all_reduce(loss, op=torch.distributed.ReduceOp.SUM)
Is this assuming DDP? Perhaps in the simplest form each rank is given the same batch and they don't need to reduce losses? Sorry, I didn't notice this until now. Maybe I'm missing something?
Yes, assuming DDP; I thought that was a reasonable setting for ZeRO to be useful. For instance we torch.distributed.broadcast the state shards, so DDP needs to be there. I'm actually not clear on whether it could be useful without DDP (one could probably imagine something), but my assumption was that, de facto, people interested would come from a DDP-enabled background.
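For context, a minimal sketch of what that reduction amounts to, assuming the process group is already initialized (e.g. via torchrun) and the model is wrapped in torch.nn.parallel.DistributedDataParallel; the names below mirror the README placeholders, and the reduction is only for reporting, since DDP already averages gradients during backward:

import torch
import torch.distributed as dist

# Illustrative stand-ins for the README's placeholders; assumes
# dist.init_process_group(...) has already been called (e.g. via torchrun).
model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.MSELoss()
batch = {"inputs": torch.randn(4, 8), "label": torch.randn(4, 2)}

outputs = model(batch["inputs"])
loss = loss_fn(outputs, batch["label"])
loss.backward()

# Reduce a detached copy purely for reporting, so every rank logs the same value;
# with DistributedDataParallel the gradients are already averaged across ranks
# during backward.
reduced_loss = loss.detach().clone()
dist.all_reduce(reduced_loss, op=dist.ReduceOp.SUM)
reduced_loss /= dist.get_world_size()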
Of course. I would love to try closures next time myself too.
Our messages crossed. :-)
I think DDP is more useful with OSS for sure. But without DDP it can be useful to extend the model size too (i.e. use 10 GPUs to train a model that is 5x bigger, without much speed gain in terms of samples/s).
In any case, this is fine. I was just double checking.
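As a rough back-of-envelope (my own numbers, not from the thread): with an Adam-style optimizer that keeps two extra fp32 buffers per parameter, sharding the optimizer state across ranks is what frees up that headroom:

# Illustrative arithmetic only: Adam keeps exp_avg and exp_avg_sq,
# i.e. roughly two extra fp32 values (8 bytes) per parameter.
params = 1_000_000_000      # a 1B-parameter model, chosen arbitrarily
bytes_per_param_state = 8
world_size = 10

full_state_gb = params * bytes_per_param_state / 1e9
sharded_state_gb = full_state_gb / world_size
print(f"optimizer state per rank: {full_state_gb:.1f} GB -> {sharded_state_gb:.1f} GB")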
Before submitting
What does this PR do?
Improves on #63.
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃