
Potential update for helm test #1

Closed

lachie83 opened this issue Aug 9, 2018 · 2 comments

lachie83 commented Aug 9, 2018

The idea behind helm test is to provide a simple red/green signal of a successful chart install; in the case of consul, the health of the cluster. Helm tests are implemented as run-to-completion workloads (Jobs) whose exit codes determine pass/fail. We currently have the following (see below), which looks similar to the upstream chart. This test is broken due to RBAC (we can fix that), but I'm wondering if there is a better way to run the test without needing access to the Kubernetes API. Potentially something like the readiness probe? Looking for suggestions.

$ helm test $(helm last)
RUNNING: virulent-chicken-test-mtu5z
FAILED: virulent-chicken-test-mtu5z, run `kubectl logs virulent-chicken-test-mtu5z --namespace default` for more info
Error: 1 test(s) failed

laevenso@rue [(⎈ |1-10:default)] ~/sandbox/consul-k8s/helm on master
$ kubectl logs virulent-chicken-test-mtu5z --namespace default
1..1
not ok 1 Testing Consul cluster has quorum
# (in test file /tests/run.sh, line 2)
#   `[ `kubectl exec virulent-chicken-consul-server-0 consul members --namespace=default | grep server | wc -l` -ge "3" ]' failed
# Error from server: pods "virulent-chicken-consul-server-0" is forbidden: User "system:serviceaccount:default:default" cannot get pods in the namespace "default"
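
The "forbidden" error above is because the test pod runs under the default service account, which cannot read or exec into pods. A minimal sketch of the RBAC fix mentioned above, assuming the test keeps using `kubectl exec` (all names here are illustrative, not from the chart):

```yaml
# Illustrative only: grant the default service account in the default
# namespace the access the bats test needs for `kubectl exec` against
# the Consul server pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: consul-test   # hypothetical name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: consul-test   # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: consul-test
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```

That said, the question above stands: a test that avoids the Kubernetes API entirely would not need any of this.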
@mitchellh (Contributor) commented:

Yeah, this has been on my TODO as well. There are a couple of tests we can do:

1. We can do a simple agent HTTP call, if you have client nodes enabled. This doesn't require any permissions; it just has to be deployed to the correct place.

2. If client nodes aren't installed, we can use service discovery to reach the servers.

We can probably do both. Noted!
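
The agent HTTP call in option 1 could be as simple as asking the local agent's `/v1/status/leader` endpoint whether a leader is elected, which needs no Kubernetes permissions at all. A sketch, not the chart's actual test; `has_leader` and `CONSUL_HTTP_ADDR` are illustrative names:

```shell
#!/bin/sh
# Sketch of a helm test container that checks cluster health via the
# local Consul agent's HTTP API instead of the Kubernetes API.

# has_leader: succeeds iff the /v1/status/leader response names a leader.
# A healthy cluster returns a quoted "host:port"; an empty string ("")
# means no leader is elected.
has_leader() {
  [ -n "$1" ] && [ "$1" != '""' ]
}

# In a real test pod this would come from the agent, e.g.:
#   leader=$(curl -sf "${CONSUL_HTTP_ADDR:-http://127.0.0.1:8500}/v1/status/leader")
# Hard-coded here so the sketch is self-contained:
leader='"10.0.0.1:8300"'

if has_leader "$leader"; then
  echo "ok: leader elected"
else
  echo "not ok: no Consul leader"
  exit 1
fi
```

The exit code gives helm test its red/green signal, matching the Job-with-exit-code model described above.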

@mitchellh (Contributor) commented:

Fixed this by using the Consul agent API. It's somewhat brittle in that it only works if you have Consul agents everywhere, but we can fix that up over time.

geobeau pushed a commit to geobeau/consul-k8s that referenced this issue Feb 25, 2021
Jira: STO-9658

 - spread hashicorp#1: 1 big (480Gb) instance (mems97) on 2m20300bf2
 - spread hashicorp#2: 2 normal (240Gb) instances (mems9[56]) on 2m20300bdx
 - spread hashicorp#3: 3 small (160Gb) instances (mems9[234]) on 2m20300bdw
 - spread hashicorp#2: 1 small and 1 normal instance (mems9[46]) on 2m20300bdr

Change-Id: I897c4e8084222c460f722c74ecc3532876a1efff
Signed-off-by: Jean-Francois Weber-Marx <jf.webermarx@criteo.com>
david-yu pushed a commit that referenced this issue Aug 18, 2021
* Main Consul k8s Readme and ArtifactHub readme cleanup
wilkermichael added several commits that referenced this issue between Oct 14 and Oct 19, 2022