
Add k8s_wait module #18

Closed
geerlingguy opened this issue Feb 11, 2020 · 8 comments
Labels
has_pr: This issue has a related PR that may close it.
type/enhancement: New feature or request

Comments

@geerlingguy
Collaborator

SUMMARY

Just wanted to throw this out here, since it's something that would be convenient in a number of circumstances.

A common pattern for K8s deployments is:

  1. Apply a manifest to create a new Deployment.
  2. Wait for all the Pods in this Deployment to be Ready.
  3. Do other stuff.

Currently, for number 2, you can futz around with the returned data from k8s or k8s_info and use until/retries to get something working (see the sketch at the end of this summary), or you can use a simpler method (if you have kubectl available) with kubectl wait:

kubectl wait --for=condition=Ready pods --selector app=my-app-name --timeout=60s

At a basic level, I'd want something like:

- name: Wait for my-app-name pods to be ready.
  k8s_wait:
    for: condition=Ready
    type: pods
    selector:
      - app=my-app-name
    timeout: 60s

Something along those lines... not sure. But it would be nice to be able to specify this in a more structured way, and not have to rely on kubectl being present for a task like:

- name: Wait for my-app-name pods to be ready.
  command: >
    kubectl wait --for=condition=Ready
    pods --selector app=my-app-name --timeout=60s
  changed_when: false
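
For reference, the until/retries workaround mentioned above might look roughly like the following. This is only a sketch: it assumes the pods are labeled app=my-app-name in the default namespace, and it checks the pod phase rather than the Ready condition.

- name: Look up my-app-name pods.
  k8s_info:
    kind: Pod
    namespace: default
    label_selectors:
      - app=my-app-name
  register: pod_list
  # Keep polling until at least one pod exists and every pod reports
  # phase Running (a looser check than condition=Ready).
  until: >-
    pod_list.resources | length > 0 and
    pod_list.resources | map(attribute='status.phase') | list | unique == ['Running']
  retries: 12
  delay: 5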
ISSUE TYPE
  • Feature Idea
COMPONENT NAME

N/A

ADDITIONAL INFORMATION

N/A

@geerlingguy
Collaborator Author

This feature request stems from geerlingguy/ansible-collection-k8s#5, where I was originally ruminating over it.

@fabianvf
Collaborator

@geerlingguy does the wait parameter in the k8s module fall short?
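
For context, the wait options let the k8s task itself block until the created resource is ready, so no separate wait step is needed. A minimal sketch, assuming a hypothetical deployment.yml manifest that creates the Deployment:

- name: Apply the Deployment and wait for it to become ready.
  k8s:
    state: present
    src: deployment.yml  # hypothetical manifest file
    wait: yes
    wait_timeout: 60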

@geerlingguy
Collaborator Author

To be honest, I might not have been thinking about that parameter when I originally wrote up the issue in my collection repo... I think it was when I was writing up this example in Ansible for Kubernetes: https://github.com/geerlingguy/ansible-for-kubernetes/blob/master/cluster-local-vms/test-deployment.yml#L15-L27

I'll see if I can get that to work with the wait parameter instead...

@geerlingguy
Collaborator Author

It works great; not sure why I didn't use the wait parameter earlier. Downstream issue: geerlingguy/ansible-for-kubernetes#32

Closing this request as it's redundant (and would just add complexity where none is needed).

@bergmannf

I think a module like this might still be useful.

I ran into a case where waiting for a resource that is not explicitly created via Ansible is required - in my case I am deploying the MinIO Operator via OperatorHub and the Operator Lifecycle Manager.

In this case only some resources are created by Ansible (see: https://operatorhub.io/install/minio-operator.yaml and https://operatorhub.io/operator/minio-operator), and the Operator Lifecycle Manager reacts to these resources being created by starting the correct Pod.

So in that case I am currently running the following task (which works) to wait for the Pod to be up before deploying a MinIO cluster:

- name: Wait for the operator for minio to become available
  command: 'kubectl wait --namespace minio --for=condition=Ready pods --selector app=minio-operator --timeout=600s'
  register: minio_pod
  until: minio_pod.stdout.find("condition met") != -1
  retries: 5
  delay: 5

@geerlingguy
Collaborator Author

I'm... going to reopen this feature request. I ran into a situation today where it would be very useful to be able to observe a set of pods that I did not start/manage using k8s, but that an operator set up instead. I just need to see when the Pods are Ready, and it would be so nice to not have to install kubectl in the environment just to use kubectl wait.

@fabianvf
Collaborator

@geerlingguy what do you think about adding the wait and wait_condition parameters to the k8s_info module?
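
Roughly what that might look like for the MinIO case above (a sketch only; it assumes the new options would mirror the wait, wait_condition, and wait_timeout options already on k8s):

- name: Wait for the minio-operator pods to be ready.
  k8s_info:
    kind: Pod
    namespace: minio
    label_selectors:
      - app=minio-operator
    wait: yes
    wait_condition:
      type: Ready
      status: "True"
    wait_timeout: 600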

@geerlingguy
Collaborator Author

@fabianvf - Well, that would solve the issue for me, and avoid adding yet another module.

Akasurde added commits to Akasurde/community.kubernetes that referenced this issue on Sep 2 and Sep 4, 2020

User can specify wait in k8s_info module.

Fixes: ansible-collections#18

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
@tima added the has_pr label Sep 8, 2020