Low code coverage with Codecov.io - tests needed #137
I am going to take a look at display.py lines 24-27. @GatorEducator/team-lancaster
I don't think we should work too much on coverage for spreadsheet.py, as I am unsure of how different the file will be once we start using CSV files instead of the Google Sheets integration.
I will look into parse_arguments.py lines 94, 103, and 116-128. @GatorEducator/team-lancaster
Okay, I will take a look at read_student_file.py lines 15-29.
Hello @Lancasterwu, thank you for adding me to this issue. I will take a look at group_random.py lines 46-69.
Alright, @everitt-andrew and I will look into group_rrobin.py lines 54-87. @GatorEducator/team-lancaster
In case anybody is having difficulties locating their test, those for group_random, group_rrobin, and gatorgrouper are found in the test_group_method.py file. |
Lines 24-27 are the function which provides the welcome message:
I think it is unnecessary to report it in the coverage test. However, I put the following code in place to get a 100% coverage rate:
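The snippet itself did not survive in this thread. Purely as an illustration of the kind of test meant here, a check that exercises a welcome-message function could look like the sketch below, where the module path and the function name `display_welcome_message` are assumptions rather than gatorgrouper's confirmed API:

```python
# Hypothetical sketch only -- not the code posted in the original comment.
# Assumes display.py exposes a display_welcome_message() function that prints
# a greeting; calling it once in a test marks those lines as covered.
from gatorgrouper import display


def test_display_welcome_message(capsys):
    """Invoke the welcome message and confirm that something was printed."""
    display.display_welcome_message()
    captured = capsys.readouterr()
    assert captured.out != ""
```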
There is another error in @huangs1's repository. When I cloned it and tried to run the coverage test locally, the
@Lancasterwu If I'm reading the build log correctly, it looks like the pylint error is:
I believe it's just a warning: since there's only ever one instance of the None object, but objects which are not None can still compare equal to None, it is best to check that the expressions are identical (with `is`) rather than merely equal in test cases:
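As a small, self-contained illustration of that point (none of this code is from the thread), an object can define `__eq__` so that it compares equal to None even though it is not None, which is exactly why pylint prefers the identity check:

```python
# Demonstration of equality vs. identity checks against None.
class AlwaysEqual:
    """A non-None object whose __eq__ claims equality with everything."""

    def __eq__(self, other):
        return True


obj = AlwaysEqual()

# The equality check is fooled (and pylint flags `== None` as
# singleton-comparison) ...
assert (obj == None) is True

# ... while the identity check is not, because None is a singleton.
assert (obj is None) is False
```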
@lancaster let's meet to discuss the issue.
@ilikerustoo Sure.
@Lancasterwu In case your team doesn't see this, please tell your team. I just spoke with our project manager @aubreypc and I was told that we should split up the team like this: one group of hypothesis testers, one group of regular testers, and one group that reviews the test cases. This way everyone knows what everyone else is doing. What are your thoughts on this?
Currently looking into gatorgrouper.py lines 25-82.
I am currently working on group_method with Hypothesis tests.
Currently working on test_group_size with Hypothesis tests.
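For context on the Hypothesis work mentioned in the last two comments: Hypothesis drives a single property-based test with many generated inputs. A minimal sketch of such a property for a grouping routine follows; the `group_random(students, group_size)` function defined here is a stand-in for illustration, not gatorgrouper's actual implementation:

```python
# Hypothetical Hypothesis sketch; the grouping function below is a placeholder
# standing in for gatorgrouper's real group_random, purely for illustration.
from hypothesis import given
from hypothesis import strategies as st


def group_random(students, group_size):
    """Placeholder: chunk the student list into consecutive groups."""
    return [students[i:i + group_size] for i in range(0, len(students), group_size)]


@given(
    st.lists(st.text(min_size=1), min_size=2, max_size=30, unique=True),
    st.integers(min_value=2, max_value=5),
)
def test_grouping_keeps_every_student_exactly_once(students, group_size):
    """Property: no student is lost or duplicated by the grouping."""
    groups = group_random(students, group_size)
    flattened = [name for group in groups for name in group]
    assert sorted(flattened) == sorted(students)
```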
Working with Matt on gatorgrouper.py lines 25-82.
Added a new test to cover lines 54-87, as indicated in [issue 137](#137).
Right now, using CodeCov and the following command

`pipenv run pytest -x -s --cov-config pytest.cov --cov-report term-missing --cov`

we are getting a total coverage of 58%. Previously we had low coverage hidden by an incorrect configuration of `coverall`, which checked coverage on the tests themselves and thus reported a falsely high code coverage. We need more test cases that cover the rest of the code, and this can be achieved by simply making tests for the missing statements/lines below:
We must aim for at least 80% coverage.
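As background for the misconfiguration described above, the usual fix is to restrict measurement to the package code so the test files cannot inflate the number by covering themselves. A minimal sketch of such a setting, assuming a coverage.py-style configuration file (such as the `pytest.cov` file passed to `--cov-config`) and assumed package/test directory names:

```ini
# Hypothetical coverage.py configuration; the keys are standard coverage.py
# options, but the package and test paths are assumptions.
[run]
# Measure only the application package ...
source = gatorgrouper
# ... and never the test files themselves, so tests cannot
# report a falsely high coverage by covering their own code.
omit =
    tests/*
```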