This repository was archived by the owner on Nov 1, 2022. It is now read-only.

helm-operator: install resources to different namespace than HelmRelease itself #2128

Closed
AndriiOmelianenko opened this issue Jun 6, 2019 · 24 comments · Fixed by #2334

Comments

@AndriiOmelianenko

Describe the feature
When using plain helm to install resources, the user can specify the following:
--tiller-namespace - the namespace where Tiller is located.
--namespace - the namespace where the resources should be installed.

So basically, when I want to use Tiller in the kube-system ns and install the chart's resources to, let's say, the development ns, I can run helm install --tiller-namespace kube-system --namespace development.

However, if I'm using helm-operator and Flux, I don't see where I can do that.
I can only specify the namespace of the HelmRelease resource, and everything from my helm chart is installed into the same ns where the HelmRelease is located.

Expected behavior
Add the possibility to configure helm's --namespace ... behaviour. For example, the user could specify separately the namespace where Tiller and helm-operator are located, and separately the namespace where the resources should be installed:

---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: rabbit
  namespace: kube-system
spec:
  releaseName: rabbitmq
  namespace: some-other-namespace
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: rabbitmq
    version: 3.3.6
  values:
    replicas: 1
@AndriiOmelianenko AndriiOmelianenko added blocked-needs-validation Issue is waiting to be validated before we can proceed enhancement labels Jun 6, 2019
@hiddeco
Member

hiddeco commented Jun 11, 2019

What is the reason you want to put your release somewhere other than where your HelmRelease lives?

@stefanprodan
Member

This has serious implications with GC enabled: deleting the namespace where the HelmRelease lives would delete the release from another namespace. I'm personally against this, as it conflicts with how Kubernetes CRDs are intended to work; a custom resource that is scoped to a namespace should trigger operations only in that namespace.

@AndriiOmelianenko
Author

AndriiOmelianenko commented Jun 11, 2019

the main use case we have regarding this issue is the following:
we want to have an "admin" repository and a Flux in kube-system which applies helm releases from that repository.
In that admin repository we want to store YAML definitions for namespaces and separate Flux instances for those namespaces.

so basically, in the admin repo we have

  • namespace team1
  • flux helm release for repository team1
  • namespace team2
  • flux helm release for repository team2

and the main Flux in kube-system will apply those resources without a manual helm install by a human

@stefanprodan
Member

Ok so put the HelmRelease file next to the namespace definition.

@AndriiOmelianenko
Author

@stefanprodan I've tried that, but was not successful.

so in the main admin repo I have two files, right?

  1. a YAML for the namespace called team1
  2. a HelmRelease for Flux

what namespace should I specify in that CRD resource for the Flux HelmRelease?

  • if kube-system, it will be applied by the main Flux to the kube-system namespace, which is not what we want at all.
  • if team1, the HelmRelease resource will be created in the team1 namespace, but there is no helm-operator there to install the actual release

@stefanprodan
Member

stefanprodan commented Jun 11, 2019

In kube-system you should have Flux, Helm Operator and Tiller running.

Repo structure:

team1 dir:

  • namespace team1
  • HelmRelease flux.team1

team2 dir:

  • namespace team2
  • HelmRelease flux.team2

The teams' Flux HelmReleases should not contain the Helm Operator.
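
Laid out as files in the admin repo, that suggestion amounts to something like this (file names are illustrative):

admin-repo/
  team1/
    namespace.yaml   # Namespace team1
    flux.yaml        # HelmRelease flux.team1 (Flux only, no Helm Operator)
  team2/
    namespace.yaml   # Namespace team2
    flux.yaml        # HelmRelease flux.team2 (Flux only, no Helm Operator)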

@AndriiOmelianenko
Author

the main problem is that in those namespaces we want to have their own Tillers and helm-operators installed,

and our helm-operator in kube-system is limited to watching CRDs in the kube-system namespace only.

this way we can give a team access only to the team1 namespace (as namespace admin, for example),
and the team will have its own Tiller (so they can do helm installs) and their own Flux + helm-operator configured for their repository.

@hiddeco
Member

hiddeco commented Jun 11, 2019

You can deploy two Helm operators in your kube-system namespace, each connecting to a Tiller instance and managing resources in a different team namespace.
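
As a rough sketch, one of those two operators could look like the trimmed Deployment below. The image tag, labels, service account, and the --tiller-namespace flag are assumptions to verify against the helm-operator version in use; --allow-namespace is the flag mentioned later in this thread.

# Trimmed sketch, not a complete install (RBAC, Tiller TLS, git/chart sync settings, etc. omitted).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-operator-team1
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm-operator-team1
  template:
    metadata:
      labels:
        app: helm-operator-team1
    spec:
      serviceAccountName: helm-operator-team1        # hypothetical SA scoped to team1
      containers:
        - name: helm-operator
          image: docker.io/weaveworks/helm-operator:0.9.1   # image/tag illustrative
          args:
            - --allow-namespace=team1    # only act on HelmReleases in the team1 namespace
            - --tiller-namespace=team1   # assumed flag: use the Tiller running in team1

A second Deployment for team2 would differ only in the team namespace values.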

@stefanprodan
Member

I don't see how this can be done with HelmReleases, since it's a chicken-and-egg situation. In your admin repo I would place the YAMLs for Flux, Helm Operator and Tiller. Your dev teams can use HelmReleases, but not the admins.

@squaremo
Member

This is not going to happen as a feature as described in the title. I've made it a question about how to construct the system described in the comments. @AndriiOmelianenko Do you feel that you have enough advice to make progress?

@AndriiOmelianenko
Author

sorry guys, I'm still not getting how to set this up when I have a Tiller in each namespace (in kube-system, in team1, in team2).

if I use only one helm-operator in kube-system (and set --allow-namespace to all namespaces; currently it is limited to kube-system only), will it install helm releases via kube-system's Tiller, even if the HelmRelease CRD is found in another namespace (e.g. team1)?

@AndriiOmelianenko
Author

here is the visualized flow we want to implement

[screenshot: diagram of the intended multi-namespace Flux / Tiller / helm-operator setup]

@stefanprodan
Member

stefanprodan commented Jun 13, 2019

@AndriiOmelianenko the solution I proposed above boils down to this: the admin git repository can't contain HelmReleases for installing Flux in different namespaces. Instead of using HelmReleases, you can put the YAMLs generated with helm template flux --namespace team1 > flux-team1.yaml in the admin repo. These YAMLs will be applied by Flux-admin and will deploy Flux-team1 and Helm-Operator-team1.
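
A rough sketch of that workflow with Helm 2 commands; the chart repository URL and the value names (git.url, helmOperator.create) are assumptions about the Flux chart in use:

# Render Flux + Helm Operator manifests for team1 and commit the result to the admin repo;
# Flux-admin then applies them like any other YAML.
helm repo add fluxcd https://charts.fluxcd.io         # chart repo URL is an assumption
helm fetch fluxcd/flux --untar                        # unpacks the chart into ./flux
helm template flux \
  --name flux-team1 \
  --namespace team1 \
  --set git.url=git@github.com:example-org/team1-config \
  --set helmOperator.create=true \
  > team1/flux-team1.yaml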

@hiddeco hiddeco removed the blocked-needs-validation Issue is waiting to be validated before we can proceed label Jun 14, 2019
@dcherman

dcherman commented Aug 5, 2019

Linking in a discussion from Slack where I had a similar request - https://weave-community.slack.com/archives/C4U5ATZ9S/p1565024811282000

Essentially, I want to have a helm-operator that operates on a single namespace, but deploys those releases to separate namespaces for better security/isolation. I do not want to restrict creation of HelmRelease CRDs so that other teams may operate their own helm-operator if they wish to do so.

What I want to accomplish is similar to the Application of Applications model that ArgoCD supports; however, the Helm integration in helm-operator is better here, as it supports all of the good things like helm hooks that a lot of charts require.

Out of curiosity, why would it be surprising that deleting the namespace containing the HelmRelease (which would implicitly delete the helm release), would trigger deletion/cleanup of the actual release/resources? The HelmRelease essentially has ownership over any resources it creates, so I'm not sure why that'd be a problem.

@stealthybox
Member

stealthybox commented Aug 5, 2019

a custom resource that is scoped to a namespace should trigger operations only in that namespace.

@stefanprodan I don't think this is a strict rule.
HelmReleases already break this rule. (explained later)

There are other useful controllers that break the rule on purpose.
The main one I think of is Contour:
https://github.com/heptio/contour/blob/master/design/ingressroute-design.md#goals

Support delegating the configuration of some or all the routes for a virtual host to another Namespace

"IngressRoute Delegation" allows administrators to enforce policies about how domains are routed in a target namespace from the admin namespace. This allows others full control of the other objects in the target namespace while preventing the Ingress Controller from being misconfigured.

I believe what @dcherman is asking for is a valid use-case that mirrors the same pattern.
Basically "HelmRelease Delegation".
There are some interesting capabilities here:

  • It allows creating parent-child relationships between namespaces. (You can even build full trees of HelmReleases recursively allowing you to structure your Namespaces.)
  • RBAC is simpler to reason about -- you control access to HelmReleases from a single namespace
  • It isolates the HelmRelease that defines the workload (using the Tiller SA) from the workload's namespace.
    (Consider a Chart where Version A contains a HelmRelease that installs Version B and vice-versa: infinite-loop!)
  • Deleting a team's infra is as simple as deleting the parent/admin namespace.

In general, HelmReleases are simply used to instruct Helm to mutate its own storage backend. Helm uses that storage and manages its own GC for the Release's target namespace, so we shouldn't conflate the HelmRelease's namespace and the Release's target namespace when talking about what affects k8s GC.


It should be acknowledged that with the current helm-operator, hooked up to Tiller with a ClusterRoleBinding that has edit access, a HelmRelease can already easily breach the single-Namespace boundary.
It's not well supported, but charts can already mutate resources in many namespaces.
The only protection you have from this is binding the Tiller SA to a single namespace.


Setting the Release's target namespace to the HelmRelease object's namespace is a great and sensible default.
It's sensible to add an optional field that overrides it.
It's not a breaking change.
It doesn't add more security attack surface since the paired Tiller is only bound by its SA, not the release's target namespace.
It enables more sophisticated organization and management of HelmReleases.
It allows more secure practices.
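
To make that concrete, here is a rough sketch of what such an optional override could look like, mirroring the example from the issue description (the field name targetNamespace follows the naming used later in this thread and in the linked PR; the merged schema may differ):

---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: rabbit
  namespace: kube-system          # where the HelmRelease object lives
spec:
  releaseName: rabbitmq
  targetNamespace: team1          # proposed optional override: where the chart's resources land
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: rabbitmq
    version: 3.3.6
  values:
    replicas: 1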

@knackaron

Very well explained @stealthybox .

I'm also interested in this enhancement. I'm currently working around it for the cluster infrastructure by using a helm-operator-controller that watches for namespace CRUDs and puts a helm-operator in each namespace for each infrastructure component that is namespace-isolated, barring exclusion by annotations. My CD pipelines create the namespace they are targeting and then drop the HelmRelease in, which is then picked up by the local helm-operator as soon as it's up. Project/team namespaces are ignored via label selectors because they already get their own namespaced Tiller and helm-operator, since they don't have access to the system Tiller.

@stealthybox
Member

stealthybox commented Aug 7, 2019

One edge case to adding this feature is that it's possible to create two different HelmReleases in different namespaces that manage the same helm release. If the values or versions are different, it could flap badly.

We could help prevent this by adding targetNamespace to the name.
(Flux already does this for the HelmRelease's Namespace.)
It would still be possible to cause a collision by creating a HelmRelease in the targetNamespace of another externally defined release.

It's already possible to do this with Flux right now by causing two {Namespace}-{Name} release names to overlap.
Adding this feature does make it easier to misconfigure this.
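
For illustration, two hypothetical HelmReleases whose generated {Namespace}-{Name} release names already collide today (specs omitted for brevity):

# Both objects resolve to the release name "team-a-app":
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: app
  namespace: team-a
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: a-app
  namespace: team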

@dcherman

dcherman commented Aug 7, 2019

Is it possible to detect that two different HelmReleases are attempting to control the same helm release? If so, one potential workaround could be a validating admission webhook that would reject attempts to create a HelmRelease that has a naming collision with a release controlled by a different one.

@stealthybox
Member

Oh nevermind, you can override Spec.ReleaseName to whatever you want, so it's already very possible to configure this badly.

@stealthybox
Member

Is it possible to detect that two different HelmReleases are attempting to control the same helm release?

It's really not possible since there are multiple datastores with differing ideas.
A user may configure a cluster with multiple helm-operators and tillers with different storage backends.

Those separate tillers can have releases with the same name differing in charts or their target namespace.

@stealthybox
Member

Helm 3 will store the Release CR in the target Namespace.
Supporting resources in other namespaces is still an open question.
https://github.com/helm/community/blob/master/helm-v3/003-state.md

@hiddeco
Member

hiddeco commented Aug 8, 2019

Oh nevermind, you can override spec.ReleaseName to whatever you want, so it's already very possible to configure this badly.

I built in a safeguard for this, see: #2123

@AndriiOmelianenko
Author

I'm still not quite getting one thing though:
why is it an issue to add the ability to specify a target namespace in a HelmRelease? The main use case is to be able to create resources in a target namespace via {{ .Release.Namespace }}.

currently, I am already able to create resources in another namespace by hardcoding the namespace in the chart definition (or templating it through values.yaml).
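
For illustration, a minimal chart template using that workaround (the resource name and the value key are hypothetical):

# templates/configmap.yaml in my own chart: the namespace is forced via a value
# instead of relying on {{ .Release.Namespace }}.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: {{ .Values.targetNamespace | default .Release.Namespace }}
data:
  foo: bar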

so that means I'm already able to achieve my use case with dirty workarounds in my own charts, but I can't do the same with public charts.

it would be great if, instead of using tricks, we could use this helm target-namespace feature through HelmReleases.

stefanprodan pushed a commit to stealthybox/flux that referenced this issue Aug 12, 2019
stefanprodan added a commit that referenced this issue Aug 12, 2019
Support different targetNamespace for HelmRelease #2128
@knackaron

Thanks @stefanprodan and @stealthybox. This will make configuring infrastructure components much cleaner for cluster operators.
