Multi-tenancy enforcement settings #422
Comments
While I can understand your desire and would like to help find a fitting solution, I am personally opposed to adding flags that define and/or overwrite behaviors for our Custom Resources. The reason is that this results in loose definitions, and in manifests that behave differently depending on the controller that takes care of them. As we are working on expanding our cross-namespace model in a way that is both more extensible and securely configurable, I am wondering whether the planned fluxcd/flux2#1704 doesn't cover your needs.
Hi @hiddeco, thanks and I understand your concerns.
Unfortunately I don't think it does. We don't want to predefine sources for our customers. We'd like to give them a first-class Flux experience with as few tweaks as possible, allowing them to create whatever sources/kustomizations/helm charts they want while being protected by the K8s RBAC rules we have already defined. This is more flexible and lowers the operational effort on our end. That's why SA impersonation feels like a great fit.
We thought the multi-tenancy setup would be ideal, but we'd like to avoid employing webhooks, especially for enforcing security. I could argue that this feature doesn't really change the reconciliation behaviour; it "only" adds restrictions on what is allowed. I understand a validation webhook seems to be the right thing to prevent this, but (whenever possible) we prefer to use webhooks in a best-effort manner only, to improve the UX. And as of today I doubt we use webhooks to enforce any security-related policy, hence my concern. I'm happy to discuss it in a more efficient/appropriate way if you have a suggestion.
TL;DR: I get where you are coming from, but it feels to me that SA impersonation (which seems to be one of the main features) is not that useful without a reliable way to enforce it, aside from use cases like preventing human errors. I'm happy to discuss further.
Kubernetes replaced Pod Security Policy with a validation webhook. If webhooks are not an option for you, how are you going to enforce things like privileged containers, host mounts, etc.?
FYI: the multi-tenancy enforcement will happen once we settle on #349, and yes, it's behind a feature flag.
https://medium.com/@LachlanEvenson/hands-on-with-kubernetes-pod-security-admission-b6cac495cd11
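For reference, the Pod Security Admission approach from the linked article is enabled per namespace through labels rather than a separately deployed webhook. A minimal sketch, with the namespace name being illustrative:

```yaml
# Illustrative namespace opting into the built-in Pod Security Admission;
# the "restricted" level blocks privileged containers, hostPath mounts, etc.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```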
@stefanprodan just to be clear, the multi-tenancy enforcement will be separate from #349, correct? We'd be happy to go with the webhook for the time being, with the proper enforcement being on the horizon. Do you have any sort of timeline? Would you expect it to land in the next few months?
I'd like to request a feature: a setting (or settings) to enforce multi-tenancy.
Motivation:
We would like to allow our customers to use Flux as the GitOps engine in our Management Clusters (in CAPI terms).
Right now Flux components can impersonate a Service Account, which is great. But without requiring `spec.serviceAccountName` to be set, this is not a really effective way to enforce multi-tenancy in isolation.
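For illustration, a minimal sketch of a tenant object relying on impersonation; the names, namespace, and API version are hypothetical and may differ between Flux versions:

```yaml
# Hypothetical tenant Kustomization: kustomize-controller impersonates the
# "tenant-reconciler" service account, so Kubernetes RBAC bounds what it may apply.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: tenant-apps
  namespace: tenant-a
spec:
  interval: 5m
  path: ./deploy
  prune: true
  serviceAccountName: tenant-reconciler
  sourceRef:
    kind: GitRepository
    name: tenant-apps
```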
Current state:
As described here, this can be achieved with validation webhooks (or an intermediate engine like Kyverno or OPA).
We don't feel comfortable setting our Kyverno failurePolicy to Fail, and we don't use it for anything security-related.
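For context, the webhook route looks roughly like the sketch below: a Kyverno ClusterPolicy that rejects Flux Kustomizations missing `spec.serviceAccountName`. The policy name and message are illustrative, and it only blocks anything when run fail-closed, which is exactly what we'd prefer to avoid depending on:

```yaml
# Sketch of the webhook-based enforcement described above: Kyverno rejects
# any Kustomization that does not set spec.serviceAccountName.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flux-require-service-account
spec:
  validationFailureAction: enforce   # fail closed, i.e. block non-compliant objects
  rules:
    - name: require-service-account-name
      match:
        resources:
          kinds:
            - Kustomization
      validate:
        message: "spec.serviceAccountName is required for multi-tenancy"
        pattern:
          spec:
            serviceAccountName: "?*"
```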
Proposal:
Add a setting to kustomize-controller and helm-controller to enforce setting `spec.serviceAccountName` and `metadata.namespace == spec.sourceRef.namespace`.
Correct me if I'm wrong, but this doesn't seem like a big change, and it could enhance the multi-tenancy setup quite a bit by allowing a secure setup without webhooks involved. A rough illustration of what such settings could look like is sketched below.
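Purely as an illustration of the shape of the proposal (the flag names below are made up for this sketch and are not taken from the controllers' documentation), this would amount to controller arguments along these lines:

```yaml
# Hypothetical patch on the controller Deployment; the two flags are invented
# here only to illustrate the requested enforcement settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-controller
  namespace: flux-system
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - --require-service-account     # hypothetical: reject objects without spec.serviceAccountName
            - --no-cross-namespace-refs     # hypothetical: require metadata.namespace == spec.sourceRef.namespace
```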
Please let me know if this is a feature you'd be willing to accept.