docs: document how to add OAuth with OauthProxy in front of Capacitor #39
Hey @laszlocph, first I wanted to say thanks for this project! I was considering switching to Capacitor from the Weave GitOps Dashboard, and was curious about this one. Weave had a lot of functionality that hooked into k8s RBAC with user/group impersonation through OIDC groups - are there plans to do anything similar with Capacitor? For example, if groups or some other attribute gets passed through OAuth/OIDC, would we be able to use that for permissions? The general use case is a cluster admin vs. a developer using Capacitor, where developers may not need to see things in the

If this is the wrong place to ask, no worries - I'm happy to move this to a separate issue 🙌
Hello @evandam, thanks for your question. The tl;dr is that we are investigating the how, and over time this will land in Capacitor. This question popped up at KubeCon a couple of times, and you summarized it perfectly for GitHub. To respect Kubernetes RBAC in Capacitor we need to do two things:
1. We need to be certain about the user's identity.
2. Then we need to impersonate the user in our Kubernetes API calls.

(An alternative to Kubernetes RBAC would be Capacitor's own RBAC, but we consider that a subpar option.)

It is fairly well understood how impersonation can be done with client-go. Here is a code sample of how Weave did it: https://github.com/weaveworks/weave-gitops/blob/2af6d1133a5717fc3b0f367734443557074b1548/pkg/kube/config_getter.go#L74

If we ever cache API responses and share them between users, the client-go equivalent of
@laszlocph, nice project - I also landed here from weave-gitops. I would second @evandam on this and vote for this nice feature (users would only see the Flux resources that they have access to). Flux has a multi-tenancy deployment model that is considered best practice from a security point of view when multiple teams use the platform. Implementing something that supports that model sounds like the right way to go.
For what it's worth, I thought having Dex do the heavy lifting for OIDC and even controlling access (required GitHub org, groups, etc.) was a really good way to handle it, without weighing down the GitOps/Capacitor configuration and code base (I'm assuming).
@laszlocph do you think we should create a new ticket about this request? By now, this seems like way more than just a documentation issue... (and possibly not good for a first issue) |
I've done this in the past with some toy applications. It generally boils down to:
Doing this, all authentication is deferred to Kubernetes/the OIDC provider and Capacitor doesn't need to do much. I'd be happy to take a shot at it if you'd like.
Hello everyone!