Fix unit tests to run consistently in CI and locally #815
Conversation
(force-pushed from 6ca381e to a37d474)
Haven't looked into why the tests are failing, but I'm on board with all of these changes. Love the s/tmpdir/tmp_path/ part.
recheck. That SKIPPED zuul job is weird.

recheck

There are some test failures I need to work out. I can't run the integration tests locally, so I just have to kick them up and see what fails. 🤞

I think I've fixed the test failures related to the switch to …

Latest failure is because …

These tests make me sad. :(

You can do it! ansible-runner needed a hero!
Checking out your branch and running locally, here's the failure I get (for a lot of tests): https://gist.github.com/AlanCoding/15377a3099068506a6bfdb05b9b740d4

So there's still some brittleness somehow. It confusingly gives the help text. Looking into the code path, I printed out … and it turns out that (see `ansible_runner/__main__.py`, lines 757 to 760 at 77f15a4) tests are run with the pytest `sys.argv` (which are different for the parallel tests), and not the intended …

The only coherent way I can see out of this is:

```diff
diff --git a/ansible_runner/__main__.py b/ansible_runner/__main__.py
index d6222a1..fe2640c 100644
--- a/ansible_runner/__main__.py
+++ b/ansible_runner/__main__.py
@@ -754,7 +754,7 @@ def main(sys_args=None):
     add_args_to_parser(isalive_container_group, DEFAULT_CLI_ARGS['container_group'])
     add_args_to_parser(transmit_container_group, DEFAULT_CLI_ARGS['container_group'])

-    if len(sys.argv) == 1:
+    if sys_args is None:
         parser.print_usage()
         print_common_usage()
         parser.exit(status=0)
```

I don't know why Zuul was not hitting the same failures as me, but maybe it's because all tests run on …
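The distinction in the diff above can be sketched with a toy entrypoint (the function body here is hypothetical, not ansible-runner's actual code): checking `sys_args is None` tells "called with no programmatic arguments" apart from whatever happens to be in `sys.argv`, which pytest and its parallel workers control.

```python
def main(sys_args=None):
    """Toy sketch of the entrypoint pattern from the diff above.

    Under pytest, sys.argv belongs to the test runner (and differs
    between parallel workers), so a len(sys.argv) check is
    nondeterministic in tests. Checking the explicit parameter is not.
    """
    if sys_args is None:
        # Real CLI invocation with no explicit args: show common usage.
        return "usage"
    # Programmatic/test invocation: handle exactly what was passed in.
    return f"parsed: {sys_args}"
```

A test can then call `main(["run", "demo"])` and get deterministic behavior regardless of how the test process itself was started.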
Yup, I ran into this reliance on … as well.

It does seem to me that the tests are passing by accident currently. Not a great state of affairs.

I find the code's use of …
```python
rc._prepare_env()
rc._handle_command_wrap(rc.execution_mode, rc.cmdline_args)


def test_containerization_settings(tmp_path, container_runtime, mocker):
    mocker.patch.dict('os.environ', {'HOME': str(tmp_path)}, clear=True)
```
Have you considered using the built-in `monkeypatch.setenv()`?
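For comparison, a minimal sketch of what the suggested built-in fixture would look like here (assuming the same test shape as the diff above): `monkeypatch.setenv()` sets the variable for the duration of one test and restores the original environment automatically at teardown, with no patch dict needed.

```python
import os


# Sketch, assuming pytest provides the tmp_path and monkeypatch fixtures.
def test_containerization_settings(tmp_path, monkeypatch):
    # Set HOME for this test only; pytest undoes it afterwards.
    monkeypatch.setenv('HOME', str(tmp_path))
    assert os.environ['HOME'] == str(tmp_path)
```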
I haven't yet figured out when to use `monkeypatch` vs `mocker.patch`. I still have things to learn. :)
Out of the 3 options outlined here: pytest-dev/pytest#4576 (comment)

I have seen a lot converging on option (2), because python2 tends to be in the rear-view mirror by now. The pattern `from unittest import mock` already exists in ansible-runner unit tests. However, the pytest-mock you're adding here is more naturally aware of the pytest test scope, which means you don't have to indent everything in context managers. Thus, I raise no objections to it.
> The pattern `from unittest import mock` already exists in ansible-runner unit tests.

I have a stashed commit locally that removes all of that in favor of the pytest-mock plugin. It ended up being pretty huge and took me several hours to get working, which is why I didn't include those changes in this PR. I'll do that in (yet another) PR.
I had a good talk with @webknjaz about `monkeypatch` vs `mocker.patch` and I have a better idea of when to use each now. `monkeypatch` doesn't have the ability to clear the environment, but I'm not sure that is strictly necessary. I will try it out and see if it works.
Technically, `monkeypatch.setattr(os, 'environ', {'HOME': str(tmp_path)})` would probably emulate this. But I agree, it's usually unnecessary to clear all the other env vars. If you expect certain env vars not to be set, use `monkeypatch.delenv()` for that.
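The two techniques mentioned above, sketched with illustrative test names: replacing `os.environ` wholesale (emulating `clear=True`) versus removing only the variables a test must not see.

```python
import os


def test_with_cleared_env(tmp_path, monkeypatch):
    # Swaps in a fresh dict for os.environ, for this test only.
    monkeypatch.setattr(os, 'environ', {'HOME': str(tmp_path)})
    assert list(os.environ) == ['HOME']


def test_without_specific_var(monkeypatch):
    # Usually the better option: drop only the vars that matter.
    monkeypatch.delenv('ANSIBLE_CONFIG', raising=False)
    assert 'ANSIBLE_CONFIG' not in os.environ
```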
> out of the 3 options outlined here:

I think that what Ronny points out in the very next comment is even more important to take into account: #815 (comment).

TL;DR: If you end up using any sort of mocking technique, you probably have an architectural problem. If you were to just use dependency injection, there would be exactly zero need for this.
> If you end up using any sort of mocking technique, you probably have an architectural problem. If you were to just use dependency injection, there would be exactly zero need for this.

I absolutely agree with this. I am a huge fan of test-driven development for this reason, among others. Currently, I'm just trying to improve the state of the tests. Eventually I hope to improve the code itself.
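A hypothetical illustration of the dependency-injection point (the function names are made up, not ansible-runner's actual API): reading `os.environ` inside a function forces every test to patch global state, whereas accepting the environment as a parameter lets tests pass a plain dict.

```python
import os


# Hidden dependency: tests must patch os.environ to control this.
def data_dir_implicit():
    return os.path.join(os.environ['HOME'], '.ansible-runner')


# Injected dependency: tests pass a plain dict, nothing to mock.
def data_dir(env=None):
    env = os.environ if env is None else env
    return os.path.join(env['HOME'], '.ansible-runner')
```

A test can then call `data_dir({'HOME': str(tmp_path)})` with no patching at all, while production callers keep the convenient default.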
Oh, if you only knew what hacks CPython does with …

FWIW I believe it's wrong to rely on argv in tests. One should call an entrypoint function where it's passed in as an arg.

Yes, I only highlighted the …

This is a bug in ansible-runner (and its tests), though.

I still believe this diff would fix the bug, sorry if I haven't communicated very clearly.

@AlanCoding Unfortunately, it doesn't. I spent all day yesterday and all morning on a fix. It's quite a bit more complicated than I had hoped, though I'm hoping we can go with a much simpler fix than what I came up with in order to keep the existing behavior.
pathlib.Path.mkdir() returns None, not a Path object.
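The `mkdir()` pitfall in that commit message is easy to trip over; a minimal sketch of the difference:

```python
from pathlib import Path
import tempfile

base = Path(tempfile.mkdtemp())

# Buggy pattern: Path.mkdir() returns None, so `broken` binds None,
# not the directory's Path object.
broken = (base / 'broken').mkdir()
assert broken is None

# Fix: keep the Path object, then create the directory on it.
project_dir = base / 'project'
project_dir.mkdir()
assert project_dir.is_dir()
```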
Also make sure to write temp files to tmp_path and not in the project root.
This reverts commit 15be299.
A lot of these tests are brittle because they are writing files to the project directory and checking for the existence of files in the same directory. Using a different directory for each test makes this more resilient.
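The isolation idea described above can be sketched as follows (test and file names are illustrative): each test receives its own fresh `tmp_path` directory from pytest, so parallel tests and leftover files from previous runs can never satisfy another test's existence check.

```python
def test_writes_isolated_artifact(tmp_path):
    # Write into this test's private directory, not the project root.
    artifact = tmp_path / 'status'
    artifact.write_text('successful')
    # These assertions can only ever see this test's own files.
    assert artifact.read_text() == 'successful'
    assert list(tmp_path.iterdir()) == [artifact]
```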
(force-pushed from ff46f7a to 041fa40)
Ho-lee-cow. It passed. 🙌
LGTM!
Do not rely on length of sys.argv

From the current behavior, it seems the goal is to print a common usage message and exit 0 if no subcommand was given. Under other circumstances, such as an invalid or missing parameter, exit with 2 and print the normal usage message.

In order to preserve this behavior and make the tests run consistently whether or not they are run in multiple processes, a custom error method was created that inspects the error message and exits 0 with the common usage message if no command was given. This is a bit fragile, but inside that method we do not have easy access to arguments passed to the main() function (they could be inferred by inspecting private attributes, but this is probably unwise). This requires catching the SystemExit in the main() function and deciding there, based on sys_args, how to handle it.

This fix preserves the existing behavior while making the tests pass consistently. However, it would be much simpler to rely on the default behavior of argparse and exit 2 in both cases. This would remove the need for a custom error method and exception handling. It would be a change in behavior, though.

Related to #815.

Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Sam Doran <sdoran@redhat.com>
Reviewed-by: David Shrewsbury <None>
Reviewed-by: None <None>
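A rough sketch of the approach that commit message describes, with hypothetical names and a drastically simplified parser (the message-inspection fragility it acknowledges is visible in the string check):

```python
import argparse


class _UsageAwareParser(argparse.ArgumentParser):
    # Hypothetical class name, not ansible-runner's actual one.
    def error(self, message):
        if 'required: command' in message:
            # No subcommand given: print the common usage and exit 0.
            self.exit(status=0, message='common usage...\n')
        # Anything else: normal argparse behavior, usage text and exit 2.
        super().error(message)


def main(sys_args=None):
    parser = _UsageAwareParser(prog='demo')
    subparsers = parser.add_subparsers(dest='command', metavar='command')
    subparsers.required = True
    subparsers.add_parser('run')
    try:
        parser.parse_args(sys_args)
    except SystemExit as exc:
        # Catch the exit here and decide, based on sys_args, how to
        # handle it instead of letting argparse kill the test process.
        return exc.code
    return 0
```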
Do not rely on length of sys.argv (cherry picked from commit 4f67a3a)
…4f67a3a9e2f20e5d4da313958881eeef67397011/pr-825 [PR #825/4f67a3a9 backport][release_2.0] Do not rely on length of sys.argv

This is a backport of PR #825 as merged into devel (4f67a3a).

Reviewed-by: David Shrewsbury <None>
Reviewed-by: None <None>
Backport to release_2.0: 💔 cherry-picking failed, conflicts found. ❌ Failed to cleanly apply 2323686 on top of patchback/backports/release_2.0/2323686deac3a0e6691b7bd63c156f8422529746/pr-815

Backporting merged PR #815 into devel
🤖 @patchback
Fix unit tests to run consistently in CI and locally

- use pytest-mock and tmp_path fixtures
- mock test environment variables to prevent inconsistent test results
- reformat test fixtures so they are easier to read

This will resolve the test failures in ansible#805.

Reviewed-by: Sviatoslav Sydorenko <webknjaz+github/profile@redhat.com>
Reviewed-by: Sam Doran <sdoran@redhat.com>
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: David Shrewsbury <None>
Reviewed-by: None <None>

(cherry picked from commit 2323686)
Use pytest-mock and `tmp_path` fixtures. Mock test environment variables to prevent inconsistent test results. Reformat test fixtures so they are easier to read.

This will resolve the test failures in #805.