
[feat] adding a README example for OSS #79

Merged: 4 commits merged into master from oss_doc, Sep 14, 2020

Conversation

blefaudeux (Contributor)

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typos or doc improvements)
  • Did you read the contributor guideline?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Improves on #63.

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@facebook-github-bot added the "CLA Signed" label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Sep 10, 2020
@blefaudeux (Contributor, Author)

ping review, if you don't mind

@min-xu-ai (Contributor) left a comment

nice!

# Problem statement
model = myAwesomeModel()
dataloader = mySuperFastDataloader()
loss = myVeryRelevantLoss()
Contributor

nice names!

README.md Outdated
loss.backward()
return loss

optimizer.step(closure)
Contributor

Using a closure is not the most common way to step PyTorch optimizers, right? Perhaps this example could be simplified by not using a closure?

Contributor Author

It depends on the person, but closures are (to my understanding) considered a little safer because the scope is tight. Scoping in Python is leaky (for instance, something defined inside a for loop leaks outside of it), so closures bring some sanity. It also keeps the example compatible with optimizers which require multiple evaluations; both options (with and without closures) are shown in the PyTorch docs: https://pytorch.org/docs/stable/optim.html#taking-an-optimization-step
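
For context, a minimal sketch of the two styles referenced above (the model, loss, and data here are placeholders, not taken from the README under review):

```python
import torch

# Placeholder model / data / loss, just to make the two styles concrete.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
inputs, target = torch.randn(4, 10), torch.randn(4, 2)

# Style 1: the common loop, no closure.
optimizer.zero_grad()
loss = loss_fn(model(inputs), target)
loss.backward()
optimizer.step()

# Style 2: closure-based step. The closure re-evaluates the loss inside
# its own scope, and also works with optimizers (e.g. LBFGS) that need
# to re-evaluate the function several times per step.
def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), target)
    loss.backward()
    return loss

optimizer.step(closure)
```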

Contributor

Completely agree. I was just suggesting that it is not the most common way people do it (AFAIK), perhaps because most programmers are not very used to it. For an initial example, keeping it simple is perhaps better. Of course, if you want to use this opportunity to advocate the closure style, that is fine too. In that case, perhaps add some comments in the code to explain what's going on with the closure and why it is better?

Contributor Author

Follow-up: closure removed!

Contributor Author

I think it's a subject orthogonal to ZeRO, so probably best not to mix things up. Fine by me as it is, no worries; I'll keep using closures in my own code though :)

@blefaudeux merged commit 6851247 into master on Sep 14, 2020
@blefaudeux deleted the oss_doc branch on September 14, 2020 at 18:37
model.zero_grad()
outputs = model(batch["inputs"])
loss = loss_fn(outputs, batch["label"])
torch.distributed.all_reduce(loss, op=torch.distributed.ReduceOp.SUM)
Contributor

Is this assuming DDP? Perhaps in the simplest form each rank is given the same batch, and they don't need to reduce the losses? Sorry, I didn't notice this until now. Maybe I am missing something?

Contributor Author

Yes, this assumes DDP; I thought that was a reasonable setting for OSS to be useful. For instance, we torch.dist.broadcast the state shards, so a process group needs to be there. I'm actually not clear on whether it could be useful without DDP (one could probably imagine something), but my assumption was that, de facto, people interested in this would come from a DDP-enabled background.
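
For reference, a minimal sketch of the setting assumed here: an initialized process group plus DDP, with fairscale's OSS sharding the optimizer state across ranks. The backend, model, and hyperparameters are illustrative placeholders, and the OSS import path and signature are assumed from fairscale at the time, not taken from the README:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from fairscale.optim import OSS

def train(rank: int, world_size: int):
    # OSS shards and broadcasts optimizer state across ranks, so it expects
    # an initialized process group, just like DDP. This assumes the usual
    # rendezvous env vars (MASTER_ADDR / MASTER_PORT) are set by the launcher.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(10, 2)
    ddp_model = DDP(model)

    # Wrap the base optimizer with OSS so each rank only keeps its own shard
    # of the optimizer state.
    optimizer = OSS(params=ddp_model.parameters(), optim=torch.optim.SGD, lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    inputs, target = torch.randn(4, 10), torch.randn(4, 2)
    optimizer.zero_grad()
    loss = loss_fn(ddp_model(inputs), target)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()
```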

@min-xu-ai (Contributor), Sep 14, 2020

Of course. I would love to try a closure myself next time too.

Contributor

Our messages crossed. :-)

I think DDP is more useful with OSS, for sure. But without DDP it can be useful to extend the model size too (i.e. use 10 GPUs to train a model that is 5x bigger, without much speed gain in terms of samples/s).

In any case, this is fine. I was just double checking.
