
[code-infra] Skip tests of files that didn't change #171

Open
oliviertassinari opened this issue Jun 25, 2024 · 4 comments
Labels: dx (Related to developers' experience), performance, test

oliviertassinari (Member) commented Jun 25, 2024

Summary

We run all the tests in the monorepos with Mocha, even when the sources didn't change. It would be great to skip the tests whose source files are untouched.

Examples

No response

Motivation

Solving #170 should most likely be done first, since skipping tests can be unreliable.

oliviertassinari added the "status: waiting for maintainer" and "test" labels on Jun 25, 2024
Janpot changed the title from "[code-infra] Skill tests of files that didn't change" to "[code-infra] Skip tests of files that didn't change" on Sep 11, 2024
Janpot (Member) commented Sep 11, 2024

Vitest supports running tests on changed files only.
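For reference, a minimal sketch of the CLI flags involved (assuming a standard Vitest setup, nothing repo-specific):

```sh
vitest --changed                 # only run tests related to uncommitted changes
vitest --changed origin/master   # only run tests related to changes against master
```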

Personally I'm against this: CI only passes when it fully passes. I haven't seen an implementation yet that didn't eventually evolve into ignoring more tests than intended.

Janpot removed the "status: waiting for maintainer" label on Sep 11, 2024
oliviertassinari (Member, Author) commented Sep 11, 2024

@Janpot Tricky, I guess we could do stuff like https://github.com/mui/material-ui/blob/94cc3d0a66456713e6e25edf9a1ba35ad8df88bc/.circleci/config.yml#L191 but it won't really solve the problem.

Ideally, we would be able to control the dependencies that invalidate tests, and be pretty wide in what we consider invalidating: https://vitest.dev/config/#forcereruntriggers.
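For reference, a minimal sketch of what that could look like (the extra glob patterns are made-up examples, not a proposal for this repo):

```ts
// vitest.config.ts — a minimal sketch, not this repo's actual config.
// forceRerunTriggers takes glob patterns; when a matching file changes,
// Vitest reruns the whole suite instead of only the tests related to the change.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    forceRerunTriggers: [
      // the defaults already cover package.json and the vite/vitest configs;
      // the globs below are hypothetical examples of "wide" invalidation
      '**/package.json/**',
      '**/vitest.config.*/**',
      '**/babel.config.*/**',
      '**/tsconfig.json/**',
    ],
  },
});
```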

> I haven't seen an implementation yet that didn't eventually evolve into ignoring more tests than intended.

Vitest says it uses the module dependency graph, so maybe it's OK?

Janpot (Member) commented Sep 12, 2024

> Vitest says it uses the module dependency graph, so maybe it's OK?

Yep, I meant "all solutions I saw before worked by manually replicating the module graph in CI", and with those it's not a matter of if but when they go out of sync with the real module graph. The biggest problem is that it's almost never the person who caused the drift who notices it, but an unsuspecting victim a few weeks later who finds tests failing for no reason linked to the changes they are making. And since the failing tests can't be connected to any known change, it's also fully up to that person to debug them until the root cause is found.

I'm less hesitant to try a solution that understands the module graph as created by pnpm workspaces. But even that will fail when maintainers just freely ignore the package boundary for convenience (not blaming them, just demonstrating that it's easy to break the assumptions). Similarly, I suspect some of the aliasing we do also creates hidden dependencies.

oliviertassinari (Member, Author) commented Sep 12, 2024

> The biggest problem is that it's almost never the person who caused the drift who notices it, but an unsuspecting victim a few weeks later who finds tests failing for no reason linked to the changes they are making. And since the failing tests can't be connected to any known change, it's also fully up to that person to debug them until the root cause is found.

The beginning of a horror movie 😄.

I could see a way out, but it only works if we start with step 1 (a rough sketch follows the list below):

  1. Get the team to obsess over the CI state of the default branch
  2. Run the full test suite on master
  3. Only run the tests related to the diff in PRs
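Roughly, assuming Vitest and reading the branch from CI environment variables (the script name and env vars are illustrative, nothing like this exists in the repo today):

```ts
// scripts/run-tests.ts — hypothetical helper, not an existing script.
import { spawnSync } from 'node:child_process';

// Branch detection via CI env vars is an assumption (CircleCI / GitHub Actions).
const branch = process.env.CIRCLE_BRANCH ?? process.env.GITHUB_REF_NAME ?? '';
const isDefaultBranch = branch === 'master';

const args = isDefaultBranch
  ? ['run'] // step 2: full suite on master
  : ['run', '--changed', 'origin/master']; // step 3: only tests related to the PR diff

const result = spawnSync('vitest', args, { stdio: 'inherit', shell: true });
process.exit(result.status ?? 1);
```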

oliviertassinari added the "performance" and "dx" labels on Sep 12, 2024