
test.pytest shows more tests failed than actually failed #4736

Closed
kwlzn opened this issue Jul 10, 2017 · 6 comments · Fixed by #4747

kwlzn commented Jul 10, 2017

seen here: https://travis-ci.org/pantsbuild/pants/jobs/252109723

the pytest summary indicates a single test failure:

 1 failed, 1036 passed, 13 skipped, 1 xfailed, 2 xpassed in 342.35 seconds 

but according to the log output, all tests in the shard report as FAILURE:

tests.python.pants_test.build_graph.target                                      .....   FAILURE
tests.python.pants_test.build_graph.build_file_parser                           .....   FAILURE
tests.python.pants_test.build_graph.build_file_aliases                          .....   FAILURE
tests.python.pants_test.build_graph.address                                     .....   FAILURE
tests.python.pants_test.build_graph.build_file_address_mapper                   .....   FAILURE
tests.python.pants_test.build_graph.build_configuration                         .....   FAILURE
tests.python.pants_test.build_graph.source_mapper                               .....   FAILURE
tests.python.pants_test.build_graph.build_graph                                 .....   FAILURE
tests.python.pants_test.build_graph.scopes                                      .....   FAILURE
tests.python.pants_test.backend.codegen.antlr.java.java                         .....   FAILURE
tests.python.pants_test.backend.jvm.subsystems.jar_dependency_management        .....   FAILURE
tests.python.pants_test.backend.jvm.subsystems.shader                           .....   FAILURE
tests.python.pants_test.backend.jvm.subsystems.custom_scala                     .....   FAILURE
tests.python.pants_test.backend.project_info.tasks.depmap                       .....   FAILURE
tests.python.pants_test.backend.project_info.tasks.dependencies                 .....   FAILURE
tests.python.pants_test.backend.project_info.tasks.filedeps                     .....   FAILURE
tests.python.pants_test.backend.project_info.tasks.ide_gen                      .....   FAILURE
tests.python.pants_test.backend.project_info.tasks.export                       .....   FAILURE
...

cc @jsirois

@kwlzn kwlzn added the python label Jul 10, 2017

jsirois commented Jul 10, 2017

A known compromise in the invalidation work. I'll think about how to make this better without stepping into results merging, which I had a strong preference for avoiding from the get-go.

@jsirois jsirois self-assigned this Jul 10, 2017

stuhood commented Jul 10, 2017

@jsirois : If it makes things any easier: I don't think there is any need to cache anything for failing tests... I wouldn't expect a run containing a failing test to have hit the cache at all.


jsirois commented Jul 10, 2017

No cache here. This is just a run on N targets in fast mode that, without calculating target->test ownership, cannot say more than that the whole run failed. The solution will just be making the code more complex to deal with fast vs. no-fast differently for display purposes.
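For illustration only (this is not Pants code): with a single merged pytest run over a whole partition, the only available signal is the run's overall exit status, so every target in the partition gets stamped with that status, which is exactly the wall of FAILURE lines above. The function and target names below are hypothetical.

```python
# Hypothetical sketch of why --fast loses per-target status: one merged pytest
# run yields a single exit code, and with no target -> test ownership map the
# best we can do is stamp every target in the partition with that one result.
def report_fast_partition(targets, pytest_exit_code):
    status = 'SUCCESS' if pytest_exit_code == 0 else 'FAILURE'
    return {target: status for target in targets}

# One failing test anywhere in the partition marks every target as FAILURE.
print(report_fast_partition(
    ['tests.python.pants_test.build_graph.target',
     'tests.python.pants_test.build_graph.address'],
    pytest_exit_code=1))
```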


jsirois commented Jul 14, 2017

Addressing this before finishing off caching for pytest in #4734.


jsirois commented Jul 14, 2017

OK - I think I have a decent compromise. The summary is now only output for --no-fast as was the case in the past. So:

  1. --fast, 1 partition with all targets: no summary
  2. --no-fast, 1 partition per-target: summary
  3. multiple partitions with multiple targets each: no summary

The 3rd case is not good, but we also don't actually have that partitioning case today, so a TODO was added.
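A minimal sketch (not the actual implementation) of the compromise described above: emit the per-target summary only when every partition holds a single target, since that is the only case where per-target results are actually known.

```python
# Hypothetical sketch of the display compromise: only emit the per-target
# summary when each pytest invocation covered exactly one target (--no-fast),
# because only then is a per-target pass/fail status actually known.
def should_emit_summary(partitions):
    """partitions: one list of targets per pytest invocation."""
    return all(len(partition) == 1 for partition in partitions)

print(should_emit_summary([['a', 'b', 'c']]))          # case 1 (--fast):    False
print(should_emit_summary([['a'], ['b'], ['c']]))      # case 2 (--no-fast): True
print(should_emit_summary([['a', 'b'], ['c', 'd']]))   # case 3:             False
```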


jsirois commented Jul 14, 2017

Erm, except that means no output at all for --fast on "cached" success, i.e.:

...
08:50:11 00:01   [test]
08:50:11 00:01     [test-jvm-prep-command]
08:50:11 00:01       [jvm_prep_command]
08:50:11 00:01     [test-prep-command]
08:50:11 00:01     [test]
08:50:11 00:01     [pytest-prep]
08:50:11 00:01     [pytest]
08:50:12 00:02     [junit]
08:50:12 00:02     [go]
08:50:12 00:02     [node]
08:50:12 00:02   [complete]
               SUCCESS

Which looks a lot like a faulty no-op. I may re-introduce the summary for --fast, but only on success.

jsirois added a commit to jsirois/pants that referenced this issue Jul 14, 2017
This fixes the previously confusing output for `--fast` with failures.
Revert to providing no summary output in this case, since we can't
identify the failing targets without introducing code to parse the junit
xml from the run and then associate failures to files to target owners.

The summaries are still displayed, however, for `--fast` success. This
provides feedback in the "cached" test success run case.

 Fixes pantsbuild#4736
jsirois added a commit that referenced this issue Jul 14, 2017
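As a rough illustration of the extra work the commit message describes (not code from the PR): attributing failures to targets would mean parsing the junit xml from the run, collecting the failing test modules, and mapping each back to its owning target. The `map_module_to_target` helper below is hypothetical, standing in for real target-ownership data.

```python
# Hypothetical sketch of failure -> target attribution via junit xml parsing.
# Not from the PR; map_module_to_target stands in for real ownership data.
import xml.etree.ElementTree as ET

def failing_test_modules(junit_xml_path):
    """Return the dotted classnames of test cases that failed or errored."""
    root = ET.parse(junit_xml_path).getroot()
    return {
        case.get('classname', '')
        for case in root.iter('testcase')
        if case.find('failure') is not None or case.find('error') is not None
    }

def failing_targets(junit_xml_path, map_module_to_target):
    # map_module_to_target: hypothetical callback resolving a test module to
    # the target that owns its source file.
    return {map_module_to_target(mod) for mod in failing_test_modules(junit_xml_path)}
```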