master-port 51846 and 52969 #54992

Merged: 16 commits merged into saltstack:master on Dec 21, 2019
Conversation

mchugh19 (Contributor)

Ports #51846 and #52969 to master.

This likely needs to be merged before #53307 can be rebased.

@gtmanfred @max-arnold

dwoz and others added 2 commits on October 13, 2019 22:12:

Allow using salt modules in onlyif and unless
Akm0d requested a review from a team as a code owner on November 11, 2019 17:15
ghost requested a review from Ch3LL on November 11, 2019 17:15
Akm0d (Contributor) commented Nov 11, 2019

unit.test_state.StateCompilerTestCase.test_render_requisite_require_disabled and unit.test_state.StateCompilerTestCase.test_render_requisite_require_in_disabled need to pass before this can be merged:

  File "/tmp/kitchen/testing/tests/unit/test_state.py", line 102, in test_render_requisite_require_disabled
    self.assertEqual(run_num, 0)
AssertionError: 1 != 0

Traceback (most recent call last):
  File "/tmp/kitchen/testing/tests/unit/test_state.py", line 130, in test_render_requisite_require_in_disabled
    self.assertEqual(run_num, 0)
AssertionError: 1 != 0```

mchugh19 (Contributor, Author)

@Akm0d I think I might need some help here. At first glance, the failing tests don't seem to call anything in the modified code. I added a few log lines to be sure, and I don't see their output. If all went well, the only functions modified in state.py are _run_check_unless and _run_check_onlyif, and the failing tests don't call either of those functions.

Is it possible there is something else impacting those tests?
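
For context on what those two functions gained in this port, here is a minimal sketch. This is hypothetical code, not the actual salt/state.py implementation; the helper name, the modules mapping, and the return convention are assumptions for illustration. Per the original PRs, an onlyif/unless entry given as a dict names an execution module under a fun key, with the remaining keys passed to that module as arguments:

```python
# Hypothetical sketch, NOT the actual salt/state.py code: a dict entry in
# onlyif/unless names an execution module via "fun"; the remaining keys
# are passed to that module as keyword arguments.
def run_check_onlyif(entries, modules):
    """Return True only when every onlyif condition holds."""
    for entry in entries:
        if isinstance(entry, dict):  # module-call form added by these ports
            kwargs = {k: v for k, v in entry.items() if k != "fun"}
            if not modules[entry["fun"]](**kwargs):
                return False  # condition is false, so the state gets skipped
        else:  # classic form: a shell command string (elided in this sketch)
            raise NotImplementedError("cmd-string checks omitted here")
    return True


# Stand-in module table, for demonstration only:
modules = {"file.file_exists": lambda path: path == "/etc/hosts"}
print(run_check_onlyif([{"fun": "file.file_exists", "path": "/etc/hosts"}], modules))  # True
```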

Akm0d (Contributor) commented Nov 12, 2019

@mchugh19 The failing tests were added to the test suite with this PR. If they don't touch the code that was actually changed, then maybe they can be removed? Debugging this yesterday, I found that 'step_one' and 'step_two' in high_data caused __run_num__ to be incremented twice, which seems to be the correct behavior after stepping through with the debugger. The tests pass consistently if self.assertEqual(run_num, 0) is changed to self.assertEqual(run_num, 1), which again seems like the correct assertion to make, but then I don't see how the tests check what their descriptions say they are testing...
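
To make the arithmetic behind that assertion concrete, here is a self-contained, hypothetical stand-in (the real tests in tests/unit/test_state.py exercise the state compiler itself; the class and method names here are invented). The state compiler numbers chunks from zero, so with the two chunks 'step_one' and 'step_two' the final __run_num__ is 1, and assertEqual(run_num, 0) can never pass:

```python
import unittest

class RunNumSketch(unittest.TestCase):
    def test_second_chunk_run_num(self):
        chunks = ["step_one", "step_two"]  # the two chunks in high_data
        # The compiler assigns __run_num__ values 0, 1, ... in chunk order:
        run_num = {name: idx for idx, name in enumerate(chunks)}
        self.assertEqual(run_num["step_two"], 1)  # asserting 0 here yields "1 != 0"

if __name__ == "__main__":
    unittest.main()
```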

mchugh19 (Contributor, Author)

@Akm0d Interesting! Thanks for that! Those tests were not in the original PRs, so I didn't notice them here. I don't want to overcomplicate things, so I've removed them. (Looks like they were left over from @garethgreenaway in #49955.)

Thanks much!

dwoz (Contributor) commented Dec 2, 2019

@mchugh19 Can you please rebase this on top of the current master?

mchugh19 (Contributor, Author) commented Dec 2, 2019

Bam! Done!

dwoz (Contributor) left a review:

@mchugh19 Looks like the test failures are related; please have a look.

mchugh19 (Contributor, Author)

@dwoz Tests corrected so they don't rely on local files (and aren't Debian-specific).

dwoz (Contributor) commented Dec 21, 2019

The Windows tests passed, but an issue in the build is causing it to go red; we are working on resolving that issue for Windows.

OK (total=9298, skipped=1212, passed=8086, failures=0, errors=0)

Merging

dwoz merged commit 1f774c7 into saltstack:master on Dec 21, 2019