Update health/liveness probes for tenant-proxy #692
Conversation
Signed-off-by: Ivan Polchenko <2119240+i5okie@users.noreply.github.com>
Signed-off-by: Ivan Polchenko <2119240+i5okie@users.noreply.github.com>
A general comment on implementing health endpoints for proxies: it's best practice to have the proxy's health endpoint pass through to the health endpoint of the application the proxy is fronting. That way the proxy will not accept connections until its associated application is ready to receive traffic.
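A minimal sketch of what such a pass-through could look like in the nginx config, assuming aca-py's admin server is reachable at acapy-admin:8031 and exposes a /status/ready endpoint (both the hostname/port and the path are assumptions for illustration, not the chart's actual values):

```nginx
# Sketch only: proxy the health check straight through to aca-py so the
# tenant-proxy only reports healthy once the upstream is ready.
# "acapy-admin:8031" and "/status/ready" are assumed values, not the
# actual ones used by this chart.
location = /health {
    proxy_pass http://acapy-admin:8031/status/ready;
    proxy_connect_timeout 2s;
    proxy_read_timeout    2s;
}
```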
Should health here be confirming that acapy is up and receiving traffic as well?
We believe that is the cause of the problem we've been observing, with probes killing the tenant-proxy pods.
Acapy has its own health checks and doesn't accept connections if it's not in Ready state.
@WadeBarnes @loneil this is really just meant to be a readiness/liveness probe for k8s to know when the nginx container is up, and to leave it alone. The issue we're facing right now is that the readiness/liveness probes try to GET a path that currently fails, so the tenant-proxy pods get killed. If we implement a health check that depends on the upstream service AND it is used by the k8s probes, this will likely be an even more widespread scenario (aca-py is not ready, therefore nginx is considered not ready, k8s sends SIGTERM to both services, they restart, and the cycle repeats). I don't think this is what we want, but I may also be missing something obvious: would you mind elaborating more on how you would approach/resolve this problem? Additionally, if the proxy is just proxying requests as is, isn't the upstream service not accepting connections and responding appropriately basically doing what we need/want?
In the case of a pure proxy, the proxy's readiness should track the upstream application's readiness. Otherwise you can run into a situation where your proxy is ready and routing traffic to the upstream application when it is not ready. We do this with the Aries Mediator instances for this exact reason.
Is that mixing liveness and readiness? No expert, but isn't liveness about whether the container itself is alive, while readiness is about whether it's able to serve traffic? So readiness should be that acapy is good to go as well?
I think this is the key. We need two endpoints: one for liveness (nginx itself is up) and one for readiness (aca-py is also good to go). @WadeBarnes where can we find the mediator example for @i5okie to take inspiration from?
Every deployment has liveness and readiness probes that you are encouraged to configure; the acapy pods have their own, and vice versa for the tenant-proxy pod. However, if we configure the readiness probe of tenant-proxy pods to GET acapy's health endpoint, the tenant-proxy pod's readiness no longer reflects its own state. This circumvents the kubernetes mechanism meant to check readiness of the pod itself. In the scenario of having this deployed in production, let's say we have multiple acapy pods and multiple tenant-proxy pods.
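As a rough illustration of the split being discussed (the port, paths, and timings below are assumptions, not the chart's actual values), the liveness probe could hit an endpoint served locally by nginx while the readiness probe could hit one that passes through to aca-py:

```yaml
# Hypothetical probe configuration for the tenant-proxy container.
# Port 8080 and the /healthz and /readyz paths are illustrative only.
livenessProbe:
  httpGet:
    path: /healthz   # answered by nginx itself, no upstream dependency
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /readyz    # would pass through to aca-py if that approach is adopted
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
```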
Signed-off-by: Ivan Polchenko <2119240+i5okie@users.noreply.github.com>
Signed-off-by: Ivan Polchenko <2119240+i5okie@users.noreply.github.com>
Signed-off-by: Ivan Polchenko <2119240+i5okie@users.noreply.github.com>
Signed-off-by: Ivan Polchenko <2119240+i5okie@users.noreply.github.com>
Deployment URLs ready for review.
Adds /healthz to NGINX configuration. Returns 200, {"status":"up"}.
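A minimal sketch of what such a location could look like (the exact directives in the PR may differ):

```nginx
# Liveness endpoint served directly by nginx; it never touches aca-py,
# so it only tells k8s that the nginx container itself is up.
location = /healthz {
    default_type application/json;
    return 200 '{"status":"up"}';
}
```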