
Don't assume runners have more than 2 cores available for benchmarking #5827

Merged
merged 1 commit into pylint-dev:main from fix-tests on Feb 22, 2022

Conversation

DanielNoord
Collaborator

@DanielNoord DanielNoord commented Feb 22, 2022


Type of Changes

Type
🔨 Refactoring

Description

Based on testing in DanielNoord#87.
I missed these in #5816 because the tests were not failing on them initially, but now they are. That makes sense, as they rest on the same incorrect assumption about test runner core counts as the ones fixed there.

I do wonder why we even have these tests. It's not as if we're doing anything with them 😅
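For context, the assumption being fixed is a hard-coded worker count that exceeds what CI runners (often 2 cores) actually provide. A minimal sketch of the safer pattern, clamping the job count to the runner's real CPU count; the helper name `effective_jobs` is hypothetical and not part of the pylint codebase:

```python
import os

def effective_jobs(requested: int) -> int:
    """Clamp a requested parallel-job count to what the runner provides.

    Illustrative helper only: CI runners may expose as few as 2 cores,
    so benchmarks should not assume a larger fixed job count.
    """
    available = os.cpu_count() or 1  # cpu_count() can return None
    return max(1, min(requested, available))
```

A benchmark parametrized with `effective_jobs(16)` would then run with at most as many workers as the runner actually has, instead of oversubscribing a 2-core machine.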

@DanielNoord DanielNoord added the Maintenance Discussion or action around maintaining pylint or the dev workflow label Feb 22, 2022
@DanielNoord DanielNoord added this to the 2.13.0 milestone Feb 22, 2022
@coveralls

Pull Request Test Coverage Report for Build 1880678613

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 93.994%

Totals Coverage Status
Change from base Build 1878236493: 0.0%
Covered Lines: 14931
Relevant Lines: 15885

💛 - Coveralls

@Pierre-Sassoulas
Member

I do wonder why we even have these tests. It's not as if we're doing anything with them 😅

Yes, it's just a bunch of wasted runner time right now. They were added to follow performance at some point but we're not using the benchmarks right now.

Member

@Pierre-Sassoulas Pierre-Sassoulas left a comment


Change itself looks good. Those benchmarks were introduced in one of these: https://github.com/PyCQA/pylint/pulls?q=is%3Apr+is%3Aclosed+author%3Adoublethefish, probably #3473.

@DanielNoord
Collaborator Author

Yes, it's just a bunch of wasted runner time right now. They were added to follow performance at some point but we're not using the benchmarks right now.

Change itself looks good. Those benchmarks were introduced in one of these: https://github.com/PyCQA/pylint/pulls?q=is%3Apr+is%3Aclosed+author%3Adoublethefish, probably #3473.

Looking at that PR I don't even understand why this is part of the test suite in the first place. The benchmarks seem like instructions for generating benchmark data, which would be better placed in the development guidelines and shipped as a separate script. If I understand their purpose correctly, there is (currently) no need to ever run them in CI, as their results are never compared against anything.

@Pierre-Sassoulas
Member

If I understand their purpose correctly, there is (currently) no need to ever run them in CI, as their results are never compared against anything.

Yes, they could, potentially, but this is not what is happening right now.

@DanielNoord DanielNoord merged commit 8dbb2ea into pylint-dev:main Feb 22, 2022
@DanielNoord DanielNoord deleted the fix-tests branch February 22, 2022 15:14