This repository has been archived by the owner on May 2, 2024. It is now read-only.

WIP: use helm-unittest to add unit test for helm tpl test #112

Closed
wants to merge 3 commits

Conversation

@DexterYan (Member) commented Jul 28, 2023

@DexterYan marked this pull request as draft July 28, 2023 06:38
@chris-sanders (Member) commented Jul 31, 2023

@DexterYan the main issue I saw with helm-unittest is how it expects to find objects: https://github.com/helm-unittest/helm-unittest/blob/main/DOCUMENT.md#test-job

The framework assumes a single document is generated from a single template. If that's not true, it falls back to letting you select a document by index. The library chart generates all of its objects from a single template, which means users would have to write every test against document indexes, selecting the right document number for each assertion. That would be very brittle and difficult to refactor: any change that adds new objects could break every existing test and require the indexes to be updated in each one.
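To make that concrete, a test against the library chart would end up looking roughly like the sketch below; the template path, index, and expected name here are only placeholders based on the DOCUMENT.md linked above:

    suite: service account tests
    templates:
      - templates/common.yaml                # placeholder: the single template that renders all objects
    tests:
      - it: names the second service account
        documentIndex: 3                     # fragile: must point at the right object in the rendered stream
        asserts:
          - equal:
              path: metadata.name
              value: RELEASE-NAME-replicated-library-example2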

I am also a little concerned about the coverage you can get writing tests this way. For example, airflow used this and migrated off it after finding it difficult to use (apache/airflow#11657 (comment)). In particular, that user states you can't assert that something is not created, which is exactly what my open PR #113 needs: it has to verify that when you specify an existingClaim, a claim is not generated.
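Going by the same DOCUMENT.md, the closest built-in option appears to be the hasDocuments assertion, which only counts documents per template; since this chart renders everything from one template, a test would have to track the total object count rather than assert that one specific object is absent. A sketch, with a hypothetical values path and count:

    - it: does not render a claim when existingClaim is set
      set:
        persistence:
          existingClaim: my-claim            # hypothetical values path
      asserts:
        - hasDocuments:
            count: 4                         # would have to track every other object the template renders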

That was several years ago, so possibly it has improved, but I wanted to check whether you've already considered these drawbacks.

@DexterYan (Member, Author) commented
I have added the multiple service account tests. However, there is a bug in the current test run:

        - should work

                - asserts[1] `equal` fail
                        Template:       replicated-library/templates/test_serviceaccounts.yaml
                        DocumentIndex:  1
                        Path:   metadata.name
                        Expected to equal:
                                RELEASE-NAME-replicated-library-example1
                        Actual:
                                RELEASE-NAME-replicated-library-example2
                        Diff:
                                --- Expected
                                +++ Actual
                                @@ -1,2 +1,2 @@
                                -RELEASE-NAME-replicated-library-example1
                                +RELEASE-NAME-replicated-library-example2


Charts:      1 failed, 0 passed, 1 total
Test Suites: 1 failed, 1 passed, 2 total
Tests:       1 failed, 1 passed, 2 total
Snapshot:    0 passed, 0 total
Time:        17.911542ms

If you do not define the documentIndex, it uses the last document when throwing the error.

The issue is caused by https://github.com/helm-unittest/helm-unittest/blob/14210a3f1db6fe59b7a39fae6ffab5c83dbbcb4d/pkg/unittest/validators/equal_validator.go#L49: this function uses continue while looping and appends each validateError, so the error from the last document is appended as well. It should be an easy fix.

Regarding your concern about using document numbers for indexing, I think it is possible to use userValues to build a key-value mapping of the rendered documents, like service-example1. In that case, we could avoid index changes (a rough sketch follows the links below).

https://github.com/helm-unittest/helm-unittest/blob/14210a3f1db6fe59b7a39fae6ffab5c83dbbcb4d/pkg/unittest/test_job.go#L182

They also sort the documents to ensure a consistent index: https://github.com/helm-unittest/helm-unittest/blob/14210a3f1db6fe59b7a39fae6ffab5c83dbbcb4d/pkg/unittest/assertion.go#L43C34-L43C34.
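For illustration only: newer helm-unittest releases also document a documentSelector field that picks a document by a path/value pair instead of an index. If the version pinned here supports it, a test could look roughly like this (the names are placeholders):

    tests:
      - it: selects the first example service account by name
        documentSelector:
          path: metadata.name
          value: RELEASE-NAME-replicated-library-example1
        asserts:
          - isKind:
              of: ServiceAccount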

@chris-sanders (Member) commented Aug 2, 2023

If you do not define the documentIndex, it uses the last document when throwing the error.

I'm not sure why you're calling this a bug; as far as I can tell from the project's documentation, this is by design. The documentIndex is required if you have more than one document.

I honestly don't see how we can proceed with this approach as long as documentIndex is how multi-document templates are handled. If you think there's a way to address that, that's what you should be demonstrating. If it can't be done, I don't think the rest of this matters.

Am I reading that correctly: does this one test take almost 18 seconds? On the terratest branch I was commenting that 1.5 seconds of runtime was high. If this scales linearly, it could completely disqualify this project from use. We can't have TDD that takes tens of seconds per test, or it will quickly become too much for people to use.

Strike this part, that's 17ms not 17s

Is there a reason you're leaning toward this method over terratest? I see a lot of red flags with this approach currently.
