Bump coverage to 100% #1216
Conversation
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Codecov Report
@@            Coverage Diff            @@
##              main     #1216   +/-  ##
=========================================
  Coverage   99.99%   100.00%
=========================================
  Files         118       118
  Lines       10383     10383
=========================================
+ Hits        10382     10383       +1
+ Misses          1         0       -1

Continue to review full report at Codecov.
Hmm, shouldn't we instead patch the …
Yeah, I was being lazy since it is such a trivial piece of code. Updated.
Summary: 99.99% just doesn't feel right.

Pull Request resolved: #1216

Differential Revision: D36322328

Pulled By: saitcakmak

fbshipit-source-id: 56134d3ab0dd849b0f4e0a4d700298293c6d4313
This pull request was exported from Phabricator. Differential Revision: D36322328
lgtm
…1216)

Summary:

X-link: facebook/Ax#1216

## Motivation

Pyre is not smart enough to understand that calling `self.add_module('model', model)` makes `self.model` have the type of `model`, which is true due to some fairly complex underlying logic inherited from `torch.nn.Module`. However, PyTorch is smart enough to properly `add_module` if we just do `self.model = model`. This also works for tensors, but only if the tensor is explicitly registered as a buffer (by name, not necessarily by value) before assignment.

### Have you read the [Contributing Guidelines on pull requests]

Yes

Pull Request resolved: pytorch#1452

Test Plan:
- Unit tests should be unaffected
- Pyre error count drops from 1379 to 1309 (down 5%).
- Added explicit tests that `_modules` and `_buffers` are properly initialized

Differential Revision: D40469725

Pulled By: esantorella

fbshipit-source-id: 5b366a34e1f11b96b8ce551603f3b1a5275c5b95
Summary:

X-link: facebook/Ax#1216

## Motivation

Pyre is not smart enough to understand that calling `self.add_module('model', model)` makes `self.model` have the type of `model`, which is true due to some fairly complex underlying logic inherited from `torch.nn.Module`. However, PyTorch is smart enough to properly `add_module` if we just do `self.model = model`. This also works for tensors, but only if the tensor is explicitly registered as a buffer (by name, not necessarily by value) before assignment.

### Have you read the [Contributing Guidelines on pull requests]

Yes

Pull Request resolved: #1452

Test Plan:
- Unit tests should be unaffected
- Pyre error count drops from 1379 to 1309 (down 5%).
- Added explicit tests that `_modules` and `_buffers` are properly initialized

Reviewed By: Balandat

Differential Revision: D40469725

Pulled By: esantorella

fbshipit-source-id: 531cec5b77fc74faf478c4c96f1ceaa596ca8162
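The registration behavior the Motivation section relies on can be illustrated with a simplified, pure-Python sketch. This is not torch's actual implementation, and the `Module`/`Net` classes below are hypothetical stand-ins; the sketch only shows the idea that a `__setattr__` hook routes submodule assignments into `_modules`, and routes a value into `_buffers` only when that name was registered as a buffer beforehand.

```python
# Simplified sketch (NOT torch's real code) of the mechanism inherited from
# torch.nn.Module: plain attribute assignment is intercepted by __setattr__,
# so `self.model = model` has the same effect as self.add_module('model', model),
# and a tensor assignment lands in _buffers only if the name was pre-registered.

class Module:
    def __init__(self):
        # Bypass our own __setattr__ hook while the registries don't exist yet.
        object.__setattr__(self, '_modules', {})
        object.__setattr__(self, '_buffers', {})

    def __setattr__(self, name, value):
        if isinstance(value, Module):
            # Submodules are registered by assignment, just like add_module.
            self._modules[name] = value
        elif name in self._buffers:
            # Only names registered as buffers beforehand (by name; the value
            # may be None at registration time) are routed into _buffers.
            self._buffers[name] = value
        else:
            object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; check the registries.
        for registry in ('_modules', '_buffers'):
            entries = object.__getattribute__(self, registry)
            if name in entries:
                return entries[name]
        raise AttributeError(name)

    def add_module(self, name, module):
        self._modules[name] = module

    def register_buffer(self, name, value):
        self._buffers[name] = value


class Net(Module):
    def __init__(self):
        super().__init__()
        self.model = Module()            # registered in _modules by assignment
        self.register_buffer('scale', None)
        self.scale = 2.0                 # lands in _buffers: name pre-registered
        self.rate = 0.5                  # plain attribute: never registered


net = Net()
assert 'model' in net._modules           # assignment registered the submodule
assert net._buffers['scale'] == 2.0      # assignment updated the buffer
assert 'rate' not in net._buffers        # unregistered names stay plain attrs
```

In the real `torch.nn.Module`, the same pattern is why `self.model = model` gives Pyre a directly-typed attribute while `add_module` does not: the assignment is ordinary Python syntax the type checker can see, and the registration happens underneath in `__setattr__`.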