Support in-cluster ingress proxies #67
IMHO we should not try to support custom solutions which do not write back the ingress status. By the way, it seems the nginx controller does currently support updating the ingress status (even though I could not find it in the documentation). On the other hand, it might make sense to support custom producers which serve as a source of service/ingress objects, but then the K8S API should not be used at all IMO.
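As a side note, a quick way to see whether a given controller writes the status back is to list the `Ingress` objects and inspect `status.loadBalancer.ingress`. A minimal client-go sketch, using today's `networking/v1` API rather than the `extensions/v1beta1` one that was current at the time:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig; inside a pod you'd use rest.InClusterConfig().
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ings, err := clientset.NetworkingV1().Ingresses(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ing := range ings.Items {
		// Controllers that write back fill status.loadBalancer.ingress with an
		// IP (or hostname); proxies that don't leave it empty.
		if len(ing.Status.LoadBalancer.Ingress) == 0 {
			fmt.Printf("%s/%s: no status written back\n", ing.Namespace, ing.Name)
			continue
		}
		for _, lb := range ing.Status.LoadBalancer.Ingress {
			fmt.Printf("%s/%s: IP=%q Hostname=%q\n", ing.Namespace, ing.Name, lb.IP, lb.Hostname)
		}
	}
}
```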
@tolbrino seems like the
Strange. I'll try again and troubleshoot a little bit.
Looks like there are two nginx-based ingress controllers in https://github.com/kubernetes/contrib/tree/master/ingress/controllers.
So version
That's lovely to hear. We should probably create an issue for
Agreed. This seems like the way it should work.
I'm sorry for my confusion, but upon further testing I noticed the following issue.
@tolbrino we just gave it a try. So, you are right: the controller puts the external IP of the node where the controller pod is currently running into the address field of the ingress. This is because the controller pod also runs the nginx that's serving the traffic, so that node is where requests have to arrive.

If you use it in conjunction with `Mate`, the DNS records will point at that node IP. However, your controller pod could die and be rescheduled on another node. Once it comes up, it will change the IP address of all your ingresses to the new node IP, and the DNS records will follow. You will have downtime for all your ingresses whenever your controller pod dies 💔 but if you don't care about that, then you can just wait for DNS to propagate and find all of your services at the same DNS name after a while.
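For illustration, the write-back described above amounts to roughly the following client-go sketch. It is not the nginx controller's actual code; the `networking/v1` field names are assumptions (the API has changed since this thread), and `nodeIP` plus the clientset are presumed to come from elsewhere:

```go
import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// publishNodeIP writes the controller node's external IP into the status of
// every Ingress. Sketch only: after a reschedule, rerunning this with the new
// node's IP is exactly the step that repoints all ingresses (and hence DNS).
func publishNodeIP(ctx context.Context, clientset kubernetes.Interface, nodeIP string) error {
	ings, err := clientset.NetworkingV1().Ingresses(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range ings.Items {
		ing := &ings.Items[i]
		// Overwrite the address field with the current node's IP.
		ing.Status.LoadBalancer.Ingress = []networkingv1.IngressLoadBalancerIngress{{IP: nodeIP}}
		if _, err := clientset.NetworkingV1().Ingresses(ing.Namespace).UpdateStatus(ctx, ing, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```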
@linki Thanks for the thorough explanation. I guess I didn't test my setup properly, since I wasn't able to hit my proxied services via
@tolbrino Meanwhile you can use this feature branch if you like. If you're adventurous, you can try it out in combination with https://github.com/linki/armor-ingress-controller, which also gives you
`Mate` completely relies on data from the Kubernetes API to create DNS records. E.g. it gets all `Ingress` objects from your cluster, inspects the rules and the external load balancer IP, and creates the corresponding A records.

This works fine on GKE and GCE with Google's GLBC controller (https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce) managing their load-balancer-as-a-service, as it updates the assigned IP of the load balancer in the status section of the `Ingress` object (https://github.com/kubernetes/contrib/blob/c7d250db7f17b8bd9f5d5eca66ddc1c6522d4e4e/ingress/controllers/gce/controller/controller.go#L364-L407).

However, in-cluster proxies that are used to implement the ingress functionality, e.g. `traefik` and `nginx`, seem not to update the external IP of the corresponding `Ingress` objects (see #64 (comment)). It needs to be clarified whether this is desired or whether `Mate` should handle those cases better.

Nevertheless, on AWS it's possible to use https://github.com/zalando-incubator/kube-ingress-aws-controller as an in-cluster ingress proxy, which does update the status section and works well in conjunction with `Mate`.
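For context, the flow described above can be sketched roughly as follows. This is not `Mate`'s actual implementation, and `createARecord` is a hypothetical stand-in for a DNS provider call:

```go
import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createARecord is a hypothetical stand-in for the DNS provider API
// (e.g. Google Cloud DNS or AWS Route 53).
func createARecord(host, ip string) {
	fmt.Printf("A %s -> %s\n", host, ip)
}

// syncIngressRecords mirrors the described flow: read each Ingress, take the
// load balancer IPs from the status section, and create an A record per rule
// host. Sketch only, not Mate's actual code.
func syncIngressRecords(ctx context.Context, clientset kubernetes.Interface) error {
	ings, err := clientset.NetworkingV1().Ingresses(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ing := range ings.Items {
		var ips []string
		for _, lb := range ing.Status.LoadBalancer.Ingress {
			if lb.IP != "" {
				ips = append(ips, lb.IP)
			}
		}
		if len(ips) == 0 {
			// The in-cluster proxy case from this issue: nothing was written
			// back to the status, so there is no target for an A record.
			continue
		}
		for _, rule := range ing.Spec.Rules {
			if rule.Host == "" {
				continue
			}
			for _, ip := range ips {
				createARecord(rule.Host, ip)
			}
		}
	}
	return nil
}
```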