How do we manually verify that the expected value in the unittests is correct? #306
What is this code doing? (manifests/tests/jupyter-web-app-base_test.go, line 451 at commit 76bd875) Looks like we are running kustomize to get the expected value.
If values aren't replaced and one or more warnings are emitted, we should FAIL on that unit test. This is slightly complicated by the fact that we produce and run golang tests under overlays.
The unit tests are based on kustomize's unit test framework, KustTestHarness.
thanks @kkasravi
What is a support config file in this case? Do you mean all supported KfDef YAML files? All of your comments sound like solid improvements, but I'm still not clear how we would ensure that the expected values are correct. How do we fix the unittests so that it's easy to verify by manual inspection that the expected values are correct?
As far as vars go, it would be easy to grep for '$(' values.
How does grepping for "$(" allow me to verify that the expected output is correct? Shouldn't the unittests be doing something like taking a kustomization.yaml and params.env as input, running kustomize build, and then ensuring the generated output matches some expected golden output?

Here's the deployment for the profiles controller. It has several variables, e.g. $(userid-header), in the command-line arguments. So my expectation is that using the unittest I can verify that kustomize is correctly substituting in the correct values for those variables and generating the appropriate command-line argument.

Looking at the unittest, how can I verify that the expected value the unittest is testing against is what I expect? I'd like to be able to inspect the unittest and verify that it is using an expected value that ensures the parameters are correctly substituted. I'd expect that by looking at the unittest I could see that the input was params.env with $(userid-header) set to some value, e.g. "fake-header", and that the expected value for the deployment had a command-line argument with that value substituted in, e.g. `-userid-header=fake-header` rather than the unsubstituted `-userid-header=$(userid-header)`.
So some kind of output showing the original and the substituted result, with the substitutions in red or something?
What does "in red" mean? Can we follow the normal structure of go unittests? The normal structure of a go unittest would be to define a testCase struct that captures the input and expected output. In our case that might be something like the following:
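A minimal sketch of such a struct, using the field names described in the next comment (illustrative only, not the actual implementation):

```go
package tests

// testCase captures the inputs to `kustomize build` and the golden
// output we expect it to produce.
type testCase struct {
	kustomization   string   // path to the input kustomization.yaml
	params          []string // parameter settings, e.g. the contents of params.env
	kustomizeAppDir string   // directory of the application manifest
	expectedYamlDir string   // directory (or file) holding the fully evaluated expected YAML
}
```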
So kustomization would be the path to a kustomization file, params would be a list of the parameters, and kustomizeAppDir would be the directory of the application manifest. (We could add other inputs as necessary.) expectedYamlDir would be a directory (or file) where I could find the fully evaluated YAML representing the expected output of kustomize build.

To verify that the expected output is correct, I can just look at expectedYamlDir; that's where I'd expect to see the Deployment spec you listed above. So now if the test compares the output of kustomize build to that expectedYamlDir and the test passes, then I know my kustomize files will produce the expected output (i.e. that the expected substitutions take place).

Right now it's not clear to me what the unittests are testing. I'm guessing they are just verifying that running kustomize generates valid K8s specs.
ok, I'll implement in #308
Thanks @kkasravi. Repasting your comment from #296 (comment). So IIUC, the way the unittests are currently designed is: we take the kustomize packages at commit N and commit N-1, run kustomize build for both, and then compare the output?
Correct. However, the actual vs. expected were reversed; see PR #325.
I'm still not sure how I'm supposed to verify that the expected output is correct. It looks like make generate is creating a kustomize package (manifests/tests/application-base_test.go, line 91 at commit 67eabbf) and not generating the expected output of kustomize. See also @krishnadurai's comment.

I think my expectation is that the fully evaluated output of kustomize build is checked in, so that a reviewer is checking that expected output directly.
One of the ways of generating kustomize output is by using the '-o' flag, as suggested by this PR: kubernetes-sigs/kustomize#960. This generates individual output files and can dump them into a folder. I suggest we use this kustomize feature in our test generation; it will allow us to maintain a set of golden output files and compare against them. I'll take a stab at implementing this.
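A sketch of how a generator could use that flag, written in Go for consistency with the test code here (the implementation that eventually landed is the python script generate_tests.py mentioned below; all paths and names in this sketch are illustrative):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// generateGolden runs `kustomize build -o` so that each resource is
// written as an individual file under goldenDir; those files get
// checked in and reviewed as the expected output.
func generateGolden(appDir, goldenDir string) error {
	if err := os.MkdirAll(goldenDir, 0o755); err != nil {
		return err
	}
	cmd := exec.Command("kustomize", "build", appDir, "-o", goldenDir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Illustrative paths only.
	err := generateGolden("jupyter/jupyter-web-app/base", "tests/test_data/jupyter-web-app/base")
	if err != nil {
		log.Fatal(err)
	}
}
```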
/kind feature
@krishnadurai any update on this? I think I'm going to need these changes to the test infra before I can start refactoring the manifests (e.g. #962) to start removing some of the kfctl magic (#774).
I created a one-off example of what this might look like here. Next steps would be:
/assign
@krishnadurai I think I almost have something working in https://github.com/jlewi/manifests/tree/stacks
How does this compare to what you have or were planning on working on? What's the best way to combine efforts? Want me to split out these changes into their own branch/PR?
@jlewi I was planning to pick this up and change https://github.com/kubeflow/manifests/blob/master/hack/gen-test-target.sh to a python script. I'll be looking forward to reviewing your work when it is ready.
I'm fine with a single PR.
/unassign
unittests should compare result of kustomize build to golden set of YAML resources. (#1016)

* unittests should compare result of kustomize build to golden set of YAML resources.
* Per #306, to allow reviewers to verify that the expected output is correct, we should check in the result of "kustomize build -o" so that reviewers can review the diff and verify that it is correct.
* This also simplifies the test generation code; the python script generate_tests.py just recurses over the directory tree, runs "kustomize build -o", and checks the output into the test_data directory.
* This is different from what the tests are currently doing. Currently the generation scripts generate "kustomization.yaml" files and then generate the expected output from them when the test is run. This makes it very difficult to validate the expected output and to debug whether the expected output is correct.
* Going forward, per #1014, I think what we want to do is check in test cases corresponding to the various kustomization.yaml files we want to validate are working correctly. Our generate scripts would then run "kustomize build" to generate the expected output and check that in, so that we can validate that the expected output is correct.
* Also change the test data structure so that it mirrors the kustomize directory tree rather than flattening the tests into the "tests" directory.
* Fix #683. Right now running the unittests takes a long time; the problem is that we generate unittests for every "kustomization.yaml" file. Per #1014 this is kind of pointless/redundant because most of these tests aren't actually testing kustomizations. We will address this in follow-on PRs which will add more appropriate tests and remove some of these unnecessary/redundant tests.
* Cherry pick AWS fixes.
* Regenerate the tests.
* Fix the unittests; update the generate logic to remove tests that aren't part of this PR.
* Address comments.
* Rebase on master and regenerate the tests.
#1016 has been merged so closing this.
How are we supposed to verify that the expected values used by the unittests are correct?
As an example: kubeflow/kubeflow#3986 appears to be due to a bug in the kustomize manifest resulting in environment variables not being set to the variable value; instead they remained set to the variable name, e.g. `$(userid-prefix)`.
I would expect this is something we test in the unittest.
So my expectation would be that I can look at the unittest and verify by inspection that the expected value is what it should be.
But when I look at the unittest (manifests/tests/centraldashboard-base_test.go, line 76 at commit 76bd875), it looks like we are specifying the kustomize files (i.e. the input) and not the expected output.
For example, here (manifests/tests/jupyter-web-app-base_test.go, line 242 at commit 76bd875) we are setting the expected value for imagePullPolicy to $(policy). The expected value for imagePullPolicy should not be $(policy), so either that is the input value and not the expected value, or else our tests are wrong.
Can someone familiar with the unittest infrastructure help educate me?
My expectation is that the unittests would work as follows: take a kustomization.yaml and params.env as input, run kustomize build, and compare the generated output against a checked-in golden output.
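A minimal sketch of that flow (hypothetical paths and test name; assumes kustomize is on the PATH):

```go
package tests

import (
	"os"
	"os/exec"
	"strings"
	"testing"
)

// Hypothetical golden-output test; paths are illustrative.
func TestProfilesBaseMatchesGolden(t *testing.T) {
	// Run kustomize build on the checked-in kustomization (the input).
	out, err := exec.Command("kustomize", "build", "../profiles/base").Output()
	if err != nil {
		t.Fatalf("kustomize build failed: %v", err)
	}
	// Any surviving "$(" means a variable was never substituted,
	// which is exactly the bug in kubeflow/kubeflow#3986.
	if strings.Contains(string(out), "$(") {
		t.Errorf("output contains unsubstituted $(...) variables")
	}
	// Compare against the checked-in, human-reviewable golden output.
	golden, err := os.ReadFile("test_data/profiles-base.yaml")
	if err != nil {
		t.Fatalf("reading golden output: %v", err)
	}
	if string(out) != string(golden) {
		t.Errorf("kustomize build output does not match golden output")
	}
}
```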
/cc @johnugeorge @yanniszark @kkasravi