
Multiple HA deployments in the namespace share the same leader, preventing ingress status updates #528

Closed
RichardoC opened this issue Apr 1, 2019 · 4 comments · Fixed by #534
Labels
bug An issue reporting a potential bug

Comments

@RichardoC

Describe the bug
I've created a namespace networking and deployed two HA deployments (public and private)
The private deployment has the following arguments

      - -ingress-class=nginx-private
      - -use-ingress-class-only=true
      - -report-ingress-status
      - -external-service=nginx-ingress-private
      - -enable-leader-election

The public deployment has the following arguments

      - -ingress-class=nginx-public
      - -use-ingress-class-only=true
      - -report-ingress-status
      - -external-service=nginx-ingress-public
      - -enable-leader-election

When deployed, the following configmaps are created

$ kubectl --namespace=networking get configmap
NAME              DATA   AGE
leader-election   0      6d
nginx-config      4      7d

As a result, only one ingress class gets a leader, and the nginx-config ConfigMap is shared between the two deployments despite their different configurations.

To Reproduce
Steps to reproduce the behavior:
Deploy using helm, and add the above options.

Expected behavior
Each ConfigMap name should include the ingress class so that the ConfigMaps aren't shared between deployments.
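To make the expectation concrete: if the leader-election ConfigMap name incorporated the ingress class, each deployment would hold its own lock. A hypothetical sketch (these names are illustrative of the requested behavior, not what the controller creates today):

```yaml
# Hypothetical per-class leader-election ConfigMaps -- one lock per deployment,
# so the public and private controllers elect leaders independently.
# Today both deployments share a single ConfigMap named "leader-election".
apiVersion: v1
kind: ConfigMap
metadata:
  name: leader-election-nginx-private
  namespace: networking
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: leader-election-nginx-public
  namespace: networking
```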

Your environment

  • Version of the Ingress Controller - controller.image.tag: "1.4.3-alpine"
  • Version of Kubernetes 1.10
  • Kubernetes platform (e.g. Mini-kube or GCP) KOPS
  • Using NGINX


@pleshakov pleshakov added the bug An issue reporting a potential bug label Apr 2, 2019
@pleshakov
Contributor

@RichardoC

We can change the name of the ConfigMap resource used for leader election to the value of the ingress-class, or introduce a new CLI argument to make it configurable.

However, there is a problem with the helm chart -- it doesn't assume that multiple Ingress Controllers can be deployed either in a single namespace or in different namespaces. The problem exists because the helm chart creates the auxiliary resources (the configmap, secrets, RBAC-related resources) with the same names. Thus, if you install multiple helm releases, you will get collisions among the created resources.

This problem -- allowing multiple Ingress Controllers to be deployed in the same or different namespaces via the Helm chart -- is something that we can solve. It will require updating the helm chart and solving the leader election ConfigMap problem.

For your case specifically, to avoid the leader election configmap problem, can deploying the Ingress Controllers in different namespaces provide a workaround?
Just to make sure I understand your use case, is HA the reason you have two deployments of Ingress Controller? Can HA be accomplished through a single deployment?

@RichardoC
Author

Currently, we're using a namespace per HA deployment as a workaround, rather than having them both in one namespace (networking), which is our normal deployment.

The reason we spotted this is that we have both public (i.e. externally routable) and private (i.e. VPC-only) deployments in the same namespace (networking).

I've just discovered that it's not just the leader election configmap that is shared, it's also the nginx-config configmap.

Your suggested change sounds good, and if the nginx-config ConfigMap name could also be set via an argument, that would be even better.

@pleshakov
Contributor

pleshakov commented Apr 2, 2019

@RichardoC thanks for providing more details.

The IC supports the -nginx-configmaps argument, which allows choosing a different ConfigMap to use. However, the helm chart always creates a ConfigMap with the same name.
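For the nginx-config side, the existing flag can already point each deployment at its own ConfigMap if the ConfigMaps are created outside the chart. A sketch of the private deployment's container arguments (the ConfigMap name here is illustrative; the flag takes a namespace/name reference):

```yaml
# Container args for the private deployment, using a manually created
# ConfigMap instead of the chart's shared "nginx-config".
args:
  - -ingress-class=nginx-private
  - -use-ingress-class-only=true
  - -nginx-configmaps=networking/nginx-config-private
```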

We will make changes to fix the problem with leader election and extend the helm chart to support deploying multiple Ingress Controllers.

@RichardoC
Author

Many thanks @pleshakov

paigr pushed a commit to paigr/kubernetes-ingress that referenced this issue Apr 9, 2019
By default, a ConfigMap with the name `leader-election` is used. This
can cause problems if multiple deployments of the Ingress controller
exist within the same namespace. See nginx#528
Dean-Coakley pushed a commit that referenced this issue Apr 10, 2019
* Add ConfigMap name to values.yaml

The value is used for `.metadata.name` in the ConfigMap yaml, as well as
with the `--nginx-configmaps` flag for the container.

* Add option to specify leader election lock name

By default, a ConfigMap with the name `leader-election` is used. This
can cause problems if multiple deployments of the Ingress controller
exist within the same namespace. See #528
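Per the commit messages above, the fix makes both names configurable through values.yaml. A sketch of what one deployment's arguments might look like after the change (the lock-name flag and value names are assumptions based on the commits; check the released chart and controller version for the exact spelling):

```yaml
# Private deployment after the fix: its own ConfigMap and its own
# leader-election lock, so two HA deployments can share a namespace.
args:
  - -ingress-class=nginx-private
  - -use-ingress-class-only=true
  - -nginx-configmaps=networking/nginx-config-private
  - -enable-leader-election
  - -leader-election-lock-name=leader-election-nginx-private
```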