integration_tests: bump pycloudlib dependency #846
Conversation
The latest pycloudlib now launches official Ubuntu cloud images for xenial, meaning that `lxc exec` no longer works against them. This commit includes handling for tests which are affected by this change; further details and reasoning in the included comment.
# normal LXD test runs). This is not true of this test: it
# can't run in our usual xenial LXD VM test run, and it may not
# run anywhere else. A failure flags up this discrepancy.
pytest.fail(XENIAL_LXD_VM_EXEC_MSG)
I could go both ways on this. You make a good argument, but on the other hand, is this that much different from what we're doing for other tests? We skip on unsupported configurations. The only way we know a test is going to get run is if we make sure that it's included in the configuration matrix of tests we're going to run. If we know we have tests that require exec, we should probably include an exec-able image in our default test runs.
But I'm ok going with this. If it causes frustration, it wouldn't be hard to change later.
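To spell out the skip-versus-fail distinction being debated here, a minimal pytest sketch might look like the following; the helper name and its arguments are illustrative stand-ins, not the actual cloud-init fixture code:

```python
import pytest

# Minimal sketch of the skip-vs-fail distinction under discussion.
# `platform`, `release`, and `image_has_exec_support` are hypothetical
# stand-ins for the real session/image fixtures.
def check_lxd_exec_support(platform, release, image_has_exec_support):
    if platform != "lxd":
        # Unsupported configuration: another run in the usual battery
        # (e.g. the LXD run) is expected to cover this test, so a skip
        # is appropriate.
        pytest.skip("lxd_use_exec requires LXD")
    if release == "xenial" and not image_has_exec_support:
        # No other run will pick this test up, so a silent skip would
        # hide the coverage gap; failing makes it visible.
        pytest.fail("xenial LXD VM needs an exec-capable image for this test")
```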
On Thu, Mar 18, 2021 at 09:55:22AM -0700, James Falcon wrote:
> @@ -216,6 +227,16 @@ def _client(request, fixture_utils, session_cloud: IntegrationCloud):
>      if lxd_use_exec is not None:
>          if not isinstance(session_cloud, _LxdIntegrationCloud):
>              pytest.skip("lxd_use_exec requires LXD")
> +        if isinstance(session_cloud, LxdVmCloud):
> +            image_spec = ImageSpecification.from_os_image()
> +            if image_spec.release == image_spec.image_id == "xenial":
> +                # Why fail instead of skip? We expect that skipped tests will
> +                # be run in a different one of our usual battery of test runs
> +                # (e.g. LXD-only tests are skipped on EC2 but will run in our
> +                # normal LXD test runs). This is not true of this test: it
> +                # can't run in our usual xenial LXD VM test run, and it may not
> +                # run anywhere else. A failure flags up this discrepancy.
> +                pytest.fail(XENIAL_LXD_VM_EXEC_MSG)
> I could go both ways on this. You make a good argument, but on the
> other hand, is this that much different from what we're doing for
> other tests? We skip on unsupported configurations.
Yeah, I'm struggling to find the language to describe what I mean. I
think we have a set of configurations that the test framework as a whole
supports, which can be described along two axes: platform and Ubuntu
release[0]. Currently, when we skip a test, we are expressing that this
test does not apply because of where we are on one of those axes. In
this situation, however, we _want_ to run these tests on xenial LXD VMs;
we _want_ to run these tests due to where we are on those axes. The
reason we _can't_ is that we need a different OS_IMAGE for the
same Ubuntu release. As this is not one of the aforementioned axes, we
behave differently to draw attention to this.
Put another way: this raises an error because if we were to naïvely
execute tests across the full Cartesian product of (Ubuntu release x
platform), this test would never run.
[0] Really I think we have three axes: platform, OS, and release; OS is
always Ubuntu currently, so I don't think that's material: we would
still expect to use the same OS_IMAGE for all launches of RHEL 8 on
EC2, for example.
(Using the above language, I think my "image catalogue" proposal in the
description could potentially be thought of as moving OS_IMAGE from an
external axis (that we have to specifically configure coverage of) to
one that the test framework itself controls.)
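To make the "image catalogue" idea a little more concrete, here is one possible shape it could take; this is purely a hypothetical sketch (the `IMAGE_CATALOGUE` mapping, `resolve_image`, and the image strings are invented for illustration, not an existing cloud-init API):

```python
# Hypothetical sketch: the framework, rather than an externally supplied
# OS_IMAGE, chooses the image for a given (platform, release) cell, with
# special-purpose variants (such as exec-capable xenial) as extra entries.
IMAGE_CATALOGUE = {
    # (platform, release, needs_exec) -> image to launch (values illustrative)
    ("lxd_vm", "xenial", False): "xenial",              # official cloud image
    ("lxd_vm", "xenial", True): "images:ubuntu/16.04",  # exec-capable image
    ("lxd_vm", "focal", False): "focal",
}


def resolve_image(platform, release, needs_exec=False):
    """Return the image for this cell, or raise so the coverage gap is visible."""
    try:
        return IMAGE_CATALOGUE[(platform, release, needs_exec)]
    except KeyError:
        raise RuntimeError(
            "no image configured for %s/%s (exec=%s)"
            % (platform, release, needs_exec)
        )
```

Under a model like this, the exec-capable xenial image becomes just another catalogue entry the framework knows how to launch, rather than a separately configured OS_IMAGE.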
> The only way we know a test is going to get run is if we make sure
> that it's included in the configuration matrix of tests we're going to
> run. If we know we have tests that require exec, we should probably
> include an exec-able image in our default test runs.
Right, we should definitely do this: if we configure them correctly
(i.e. select/deselect lxd_use_exec appropriately), then we should never
see this error. The idea here is that if we _don't_ configure things
properly, then we will actively hit an error, rather than silently never
running the tests.
That explanation makes sense, and I agree. Thanks for taking the time to put that thinking into words.
Proposed Commit Message
Additional Context
This change will affect our SRU process for xenial. We'll need to perform two test runs for LXD VMs: one with `-m "not lxd_use_exec"` against the default images, and one with `-m lxd_use_exec` against an image with `exec` support. This image could be the known-good `images:` image, or we could consider manually upgrading the kernel in an instance launched from an official image and capturing that as an image which we then pass in. (I considered automating this last step in the test framework: create and snapshot such an image the first time we hit an `lxd_use_exec` test, and then reuse it each time; if we introduce the more general "image catalogue" support we've discussed somewhat in the past, this could be a good candidate for it.)