uwsgi master graceful shutdown #1974
Shouldn't you kill the old version only after you deploy the new pod?
It's part of the k8s rolling update, plus the time it takes k8s to remove the Pod from the endpoint list, which can't be controlled.
@itielshwartz a quick Google search revealed there's a readinessProbe to avoid your issue.
Hi @xrmx, sadly this is not the case. The readiness probe doesn't run every second, and between the time k8s sends SIGTERM and the time the probe fails, there is a window where uwsgi isn't working. More than that, after SIGTERM the pod shouldn't get traffic, but it does, as the iptables rules take time to update: https://hackernoon.com/graceful-shutdown-in-kubernetes-435b98794461. So like I said, this feature is indeed needed...
The readiness probe on the new pod should prevent killing the old one before the new one is up; on the old pod it should make k8s stop routing traffic to it. If that does not work, you should probably fix your cluster configuration, no? Adding an arbitrary delay on process teardown seems like a hack.
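For readers following along: the readinessProbe being discussed lives in the Deployment's pod spec. A minimal sketch, assuming an HTTP health endpoint at `/healthz` on port 8000 (both the path and the port are assumptions, not from this thread):

```yaml
# Pod spec fragment: a fast readiness probe so k8s quickly stops
# routing to a pod that is no longer serving. Image, path, and port
# are placeholders; adapt to your app.
containers:
  - name: uwsgi-app
    image: example/uwsgi-flask:latest
    ports:
      - containerPort: 8000
    readinessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8000
      periodSeconds: 1        # probe as frequently as allowed
      failureThreshold: 1     # mark unready after one failed probe
```

Note that, as discussed below, even an aggressive probe cannot close the gap entirely, since endpoint and iptables updates propagate asynchronously.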
Hi @xrmx, did you read the link I sent (https://hackernoon.com/graceful-shutdown-in-kubernetes-435b98794461)? It is (maybe) a hack, but one that is not really specific to me: it matters to anyone who wants 100% uptime when rolling out a new version with k8s.
@itielshwartz That's a post from 2017; it does not feel that authoritative, sorry. Again, if you are sending SIGTERM and still sending traffic, something does not look right to me. But hey, I'm not here to convince you.
@xrmx I totally agree with you.
But as this is the k8s default, I don't think there's any choice. I just wanted to know if uwsgi can provide me a solution; one of the reasons I opened this issue is because (I guess) other people will face the same problem... If uwsgi has no solution, I will use the k8s preStop hook, which feels more hacky to me (as I can't test it locally, and it's very k8s-specific), but it will solve the issue. About the post: it is indeed from 2017, but as the first commenter stated:
Also here's the ongoing issue he mentioned: kubernetes-retired/contrib#1140 (comment)
Anyway thx for trying to help (and for uwsgitop!)
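For anyone landing here later: the k8s preStop-hook workaround mentioned above is typically a plain sleep, so the pod keeps serving while the endpoint/iptables updates propagate before SIGTERM is delivered. A minimal sketch (the 30-second value is an assumption, matching the delay discussed in this thread):

```yaml
# Pod spec fragment: delay SIGTERM so in-flight routing updates
# settle first. The container keeps serving during the sleep.
lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "sleep 30"]
# Must exceed the preStop sleep plus the app's shutdown time,
# or k8s will SIGKILL the container:
terminationGracePeriodSeconds: 60
```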
BTW if you are on uwsgi 2.0 you should add
Thanks, I'm using it :) Anyway, the issue is resolved (will use hooks), thanks
Hi @itielshwartz, can you share your solution with the hooks?
Hi @amramtamir, check the Stack Overflow answer:
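For completeness, the uwsgi-side fix (available since uwsgi 2.0) usually amounts to remapping SIGTERM away from its default "brutal reload" behavior to a graceful worker shutdown via a master-start hook. A sketch, where the module name is an assumption:

```ini
[uwsgi]
module = app:app        ; assumed Flask entry point
master = true
processes = 4
; By default SIGTERM makes the uwsgi master reload rather than exit.
; Remap signal 15 to a graceful shutdown of all workers:
hook-master-start = unix_signal:15 gracefully_kill_them_all
```

Combined with a k8s preStop sleep, this gives the master time to drain in-flight requests before the pod is torn down.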
I'm running a uwsgi+Flask application; the app is running as a k8s pod.
When I deploy a new pod (a new version), the existing pod gets SIGTERM.
This causes the master to stop accepting new connections at that same moment, which causes issues, as the LB still passes requests to the pod (for a few more seconds).
I would like the master to wait 30 seconds BEFORE it stops accepting new connections (when getting SIGTERM), but couldn't find a way. Is it possible?
My uwsgi.ini file:
Also asked on stackoverflow: https://stackoverflow.com/questions/54459949/uwsgi-master-graceful-shutdown but no luck