
A deployment with all unschedulable pods should show up in revision status #3593

Closed
mdemirhan opened this issue Mar 29, 2019 · 4 comments
Assignees: jonjohnsonjr
Labels: area/API, kind/feature

Comments

@mdemirhan
Contributor

/area API

If a cluster has no resources left to schedule a pod, whether we are scaling up from zero or a user is deploying a new revision, we should bubble the lack of capacity up into the revision status, making it easier for users to understand why their revisions are not starting correctly.
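For reference, the raw signal already exists today as a FailedScheduling event on the pending pod; this issue is about surfacing it in the revision status. One way to see the underlying events (standard kubectl, nothing Knative-specific):

$ kubectl get events --field-selector reason=FailedScheduling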

@mdemirhan mdemirhan added the kind/feature Well-understood/specified features, ready for coding. label Mar 29, 2019
@knative-prow-robot knative-prow-robot added the area/API API objects and controllers label Mar 29, 2019
@mattmoor
Member

/assign @jonjohnsonjr

Jon, please let me know if this isn't within the scope you are tracking.

@jonjohnsonjr
Contributor

This may have been fixed by #4191?

@jonjohnsonjr
Contributor

Confirmed this was fixed. To reproduce the unschedulable state, I tainted all my nodes:

$ for node in $(kubectl get nodes -oname); do kubectl taint nodes $node key=value:NoSchedule; done
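As a sanity check that the taint landed on every node (custom-columns is standard kubectl):

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints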

When trying to scale up a revision, I see this in the ksvc status:

  - lastTransitionTime: "2019-08-14T21:24:08Z"
    message: 'Revision "counter-gzhj8" failed with message: 0/2 nodes are available:
      2 node(s) had taints that the pod didn''t tolerate..'
    reason: RevisionFailed
    status: "False"
    type: Ready

🎉
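To pull out just that condition, something like this should work (assuming the ksvc is named counter, going by the revision name above):

$ kubectl get ksvc counter -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'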

To untaint the nodes:

$ for node in $(kubectl get nodes -oname); do kubectl taint nodes $node key:NoSchedule-; done
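And to confirm the nodes are schedulable again (each node's Taints line should read <none>):

$ kubectl describe nodes | grep Taints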

/close

@knative-prow-robot
Contributor

@jonjohnsonjr: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
