controller should reconcile the status of PodGroup #166
Comments
@cwdsuzhou @denkensk could you take some time to look into the above 2 issues?
I will open a PR to resolve these issues.
/assign
Regarding the 2nd issue, I have a different opinion. If a group has been ..., WDYT?
Thanks. Please raise a PR fixing that.
It sounds strange to use different states to show whether a PodGroup has been scheduled successfully once or not. Shouldn't the status just reflect its actual state, in a "stateless" manner?
This does not work. I raised a PR that just adds a default value to the status.
Usually, a PG would be related to a job. From the perspective of a job, the job status should not be from ...
/reopen
I think the workflow of how the status changes needs to be rethought a bit. It should probably be accompanied by informational messages, so that users can know what phase a PodGroup is in, and for what reason.
@Huang-Wei: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
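Purely as an illustration of the suggestion above to accompany status changes with informational messages (not something the controller does today), phase transitions could be surfaced as Kubernetes Events carrying a reason and message. The `recordPhaseChange` helper below is hypothetical; only client-go's `record.EventRecorder` API is assumed:

```go
// Hypothetical helper showing how a controller could surface PodGroup
// phase transitions as Events, so users can see which phase a PodGroup
// moved to and why. The wiring of the recorder and the pg object would
// live in the real controller.
package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// recordPhaseChange emits a Normal Event whenever the newly computed phase
// differs from the stored one, attaching a human-readable reason.
func recordPhaseChange(recorder record.EventRecorder, pg runtime.Object, oldPhase, newPhase, reason string) {
	if oldPhase == newPhase {
		return
	}
	recorder.Eventf(pg, corev1.EventTypeNormal, "PhaseChanged",
		"PodGroup phase changed from %q to %q: %s", oldPhase, newPhase, reason)
}
```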
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle rotten
/assign
I would like to give it a try.
Is the scenario below correct? Actually, I am really confused about the PodGroup status. If my understanding is correct, I would like to create a doc describing it first, and then fix the code issue for it :)
Case 1: ...
Case 2: ...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
In the latest code, the controller doesn't quite reconcile the status of PodGroup:
1. Field `status.Scheduled` not displayed - fixed by "controller should reconcile the status of PodGroup" #166

   If a PodGroup is created, we'd expect all status fields to display properly. But for now it's only `status.phase`.

2. `status.phase` not reconciled well

   If a PodGroup doesn't have any associated Pods, it should stay in `Pending` state. However, it sometimes stays in `Running` (or another state). We'd expect the status to always be reconciled.

/kind bug
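To make the expected behavior concrete, here is a minimal, self-contained sketch of the kind of reconciliation the issue (and the "stateless" suggestion in the comments) asks for: rebuild the whole status from the Pods observed right now on every pass, so `Scheduled` is always populated and a PodGroup with no Pods falls back to `Pending`. The types and phase names are simplified stand-ins, not the project's actual API:

```go
// Minimal sketch of status reconciliation, using simplified stand-in types
// rather than the real PodGroup API. The key point is that the status is
// recomputed from the Pods observed right now, never carried over from a
// previous phase.
package main

import "fmt"

// PodGroupStatus is a simplified stand-in for the real status type.
type PodGroupStatus struct {
	Phase     string
	Scheduled int32 // number of member Pods that have been scheduled
	Running   int32
	Succeeded int32
}

// reconcileStatus rebuilds the status from scratch based on current pod
// counts, so every field is always set and a group with no Pods stays Pending.
func reconcileStatus(minMember, scheduled, running, succeeded int32) PodGroupStatus {
	s := PodGroupStatus{Scheduled: scheduled, Running: running, Succeeded: succeeded}
	switch {
	case scheduled == 0 && running == 0 && succeeded == 0:
		s.Phase = "Pending" // no associated Pods yet: stay (or fall back to) Pending
	case succeeded >= minMember:
		s.Phase = "Finished"
	case running > 0:
		s.Phase = "Running"
	case scheduled >= minMember:
		s.Phase = "Scheduled"
	default:
		s.Phase = "Pending"
	}
	return s
}

func main() {
	// A freshly created PodGroup with no Pods: Phase=Pending and Scheduled
	// is still explicitly set to 0, so the field can be displayed.
	fmt.Printf("%+v\n", reconcileStatus(3, 0, 0, 0))
	// Enough members scheduled and running.
	fmt.Printf("%+v\n", reconcileStatus(3, 3, 3, 0))
}
```

The real controller's phase names and transition rules differ; the sketch only illustrates the "always recompute, never carry over" idea discussed above.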