
Bump coverage to 100% #1216

Closed · wants to merge 1 commit

Conversation

saitcakmak (Contributor)

99.99% just doesn't feel right.

@facebook-github-bot added the "CLA Signed — Do not delete this pull request or issue due to inactivity." label on May 11, 2022
@facebook-github-bot (Contributor)

@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

codecov bot commented May 11, 2022

Codecov Report

Merging #1216 (8165eab) into main (663d4e3) will increase coverage by 0.00%.
The diff coverage is n/a.

❗ Current head 8165eab differs from pull request most recent head 9b938bf. Consider uploading reports for the commit 9b938bf to get more accurate results.

```diff
@@           Coverage Diff            @@
##             main     #1216   +/-   ##
========================================
  Coverage   99.99%   100.00%
========================================
  Files         118       118
  Lines       10383     10383
========================================
+ Hits        10382     10383      +1
+ Misses          1         0      -1
```

| Impacted Files | Coverage Δ |
| --- | --- |
| botorch/generation/gen.py | 100.00% <0.00%> (+1.02%) ⬆️ |


@Balandat (Contributor)

Hmm shouldn't we instead patch the test_gen_candidates_torch test to also use a trivial test callback? That way we'd actually test the code rather than mask the gap in coverage...
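A trivial test callback of the kind suggested here might simply record that it was invoked, so the test exercises the callback code path instead of masking the coverage gap. This is a hypothetical sketch with a stand-in loop, not the actual `test_gen_candidates_torch` test or the real `gen_candidates_torch` signature:

```python
# Hypothetical sketch: a trivial callback that records each optimization step.
# The stand-in loop below takes the place of the real optimizer inside
# gen_candidates_torch; the callback signature here is assumed, not botorch's.
calls = []

def trivial_callback(step, loss, grad):
    # Record the step index; a real test could also assert on loss/grad values.
    calls.append(step)

# Stand-in optimization loop invoking the callback once per iteration.
for i in range(3):
    loss, grad = 0.0, 0.0  # placeholder values
    trivial_callback(i, loss, grad)

assert calls == [0, 1, 2]
```

The point of the suggestion is that even a no-op callback, once actually passed through the optimizer, turns the untested branch into tested code.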

@saitcakmak (Contributor, Author)

Yeah, I was being lazy since it is such a trivial piece of code. Updated.

Summary:
99.99% just doesn't feel right.

Pull Request resolved: #1216

Differential Revision: D36322328

Pulled By: saitcakmak

fbshipit-source-id: 56134d3ab0dd849b0f4e0a4d700298293c6d4313
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D36322328

@Balandat left a comment:

gtmtl

@Balandat Balandat deleted the bump_coverage branch May 28, 2022 14:52
esantorella added a commit to esantorella/botorch that referenced this pull request Oct 18, 2022
…1216)

Summary:
X-link: facebook/Ax#1216

## Motivation

Pyre is not smart enough to understand that calling `self.add_module('model', model)` makes `self.model` have the type of `model`, which is true due to some fairly complex underlying logic inherited from `torch.nn.Module`. However, PyTorch is smart enough to properly `add_module` if we just do `self.model = model`. This also works for tensors, but only if the tensor is explicitly registered as a buffer (by name, not necessarily by value) before assignment.
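The assignment behavior described above can be sketched in plain Python. This is a simplified, hypothetical model of how `torch.nn.Module` intercepts attribute assignment, not the actual PyTorch source: submodules are registered on plain assignment, while a plain value only lands in `_buffers` if its name was registered as a buffer beforehand.

```python
# Simplified sketch (not the real torch.nn.Module) of why `self.model = model`
# registers a submodule, and why buffers must be registered by name first.

class Module:
    def __init__(self):
        # Use object.__setattr__ to avoid recursing into our own __setattr__.
        object.__setattr__(self, "_modules", {})
        object.__setattr__(self, "_buffers", {})

    def add_module(self, name, module):
        self._modules[name] = module

    def register_buffer(self, name, value):
        self._buffers[name] = value

    def __setattr__(self, name, value):
        if isinstance(value, Module):
            # Submodules are registered automatically on plain assignment.
            self._modules[name] = value
        elif name in self._buffers:
            # A plain value is stored as a buffer only if the name was
            # registered as a buffer beforehand.
            self._buffers[name] = value
        else:
            object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        modules = object.__getattribute__(self, "_modules")
        if name in modules:
            return modules[name]
        buffers = object.__getattribute__(self, "_buffers")
        if name in buffers:
            return buffers[name]
        raise AttributeError(name)


class Net(Module):
    def __init__(self):
        super().__init__()
        self.model = Module()          # registered in _modules automatically
        self.register_buffer("scale", None)
        self.scale = 2.0               # lands in _buffers, not __dict__


net = Net()
print("model" in net._modules)   # True
print(net._buffers["scale"])     # 2.0
```

Because the assignment goes through `__setattr__`, a type checker sees an ordinary attribute assignment `self.model = model` and can infer the attribute's type directly, which is the point of the refactor.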

### Have you read the [Contributing Guidelines on pull requests]
Yes

Pull Request resolved: pytorch#1452

Test Plan:
- Unit tests should be unaffected
- Pyre error count drops from 1379 to 1309 (down 5%).
- Added explicit tests that `_modules` and `_buffers` are properly initialized

Differential Revision: D40469725

Pulled By: esantorella

fbshipit-source-id: 5b366a34e1f11b96b8ce551603f3b1a5275c5b95
facebook-github-bot pushed a commit that referenced this pull request Oct 24, 2022
Summary:
X-link: facebook/Ax#1216

(Same summary, motivation, and test plan as the commit above.)

Pull Request resolved: #1452
Reviewed By: Balandat

Differential Revision: D40469725

Pulled By: esantorella

fbshipit-source-id: 531cec5b77fc74faf478c4c96f1ceaa596ca8162