Add ability to provide oidc configuration #182
Conversation
Small nits. I think adding a smoke test like described makes sense.
Force-pushed from be060ea to 363cf28.
All right, added then.
lgtm if it works.
Pulled in some additional reviewers for one more approval before merging.
@invidian Why not expose the variables needed for dex, rather than allowing the user to specify any apiserver flag? Are there any unforeseen issues with being so flexible? Here I don't feel there is a lot of control over what is allowed and what is not. If you look at the hosted K8s providers, they have control over what is allowed and what is not.
I see. But IMHO adding things is easier than removing them or changing them in a backwards-incompatible way. So I still think adding specific dex-related flags/parameters is the right way to go!
I think @surajssd has a point here: if we add freeform extra flags, we might create a support burden, increase the feature matrix and expose the user to unnecessary complexity. This is Typhoon's reasoning. OTOH, projects like kubeadm do offer this kind of flexibility, but one can argue that they're a "building block" technology and need to be more flexible for that reason. After thinking about it, maybe our best option for now is allowing extra flags but doing some validation on them, so we only support the ones we need and error out if others are passed. In this way we restrict what's possible, we don't need to change the implementation too much, and we don't expose tons of flags which could be confusing. If in the future we support automatic generation of OIDC parameters for dex+gangway, we can just tell users to remove the snippet; if we move to separate OIDC parameters, we can tell them to switch to those. I'd like to get some more opinions though.
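For illustration only, here is a minimal sketch of the "validate the extra flags against an allow-list" compromise floated in this comment. All names here are hypothetical and not part of the PR's actual implementation:

```go
package apiserver

import (
	"fmt"
	"strings"
)

// allowedFlags is a hypothetical allow-list of kube-apiserver flags
// that Lokomotive would agree to support via freeform extra flags.
var allowedFlags = map[string]bool{
	"--oidc-issuer-url":     true,
	"--oidc-client-id":      true,
	"--oidc-username-claim": true,
	"--oidc-groups-claim":   true,
}

// validateExtraFlags errors out when a user-provided flag is not on the
// allow-list, which is the restriction discussed in this comment.
func validateExtraFlags(flags []string) error {
	for _, f := range flags {
		// Flags are expected in the form "--name=value"; compare only the name.
		name := strings.SplitN(f, "=", 2)[0]
		if !allowedFlags[name] {
			return fmt.Errorf("kube-apiserver flag %q is not supported", name)
		}
	}
	return nil
}
```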
I like the proposed approach, as it gives the user freedom to add any kind of parameters they need, which would be super useful in cases where we don't have a dedicated parameter for some flags. I do agree that we shouldn't support setups which use this flag, as the user can put anything in there. Perhaps we could document that one should use it at their own risk?
That doesn't seem right to me. I'd rather change the implementation to only accept the data we need, then.
I too agree with Suraj here that we should be careful with what we add just because it's the easiest step right now. Can we not put the OIDC-related flags as optional fields in the cluster configuration?
Force-pushed from 363cf28 to af9f0c2.
I mostly agree with @invidian here, especially on this: if we are going to allow only some things (like validating that only some flags are exposed), let's expose them nicely instead of that very hacky approach (which is probably more complex anyway, because of checking that those strings are what we "allow").

Regarding what @surajssd mentions about managed services having that kind of control, I want to add that THAT is one of the biggest complaints about managed services: users can't add flags (for auth and others). And this is not a managed service, so why not allow users to take the risks they want? (I don't have a source for this, just my memory from mailing lists and reddit.)

I think in general there are two approaches for this: "abstract" as much as possible for the users and provide a nice and clean interface, doing a kind of "whitelist" of params the user can tune, or abstract some things but allow any random flags they want to use. Envoy is quite complex, so for example Contour only exposes very simple things and hides the Envoy complexity. That has the overhead that every single change needs to be properly integrated and is blocked until it is. The positive side is that everything you configure is properly integrated in Contour (like in its CRD). While nice, I'm not sure the trade-off is good for Kubernetes, especially if you think about alpha features: Kubernetes is constantly adding them. You are already managing your cluster, so why wouldn't you be able to enable an alpha feature you may benefit from? An alpha feature I guess some will need is IPv4/IPv6 dual-stack. Or a feature we currently don't support but which needs an apiserver flag is Kubernetes auditing. I don't think we want to prevent people from using them. As long as this doesn't expand our test matrix and we warn the user that it may cause instability and, if we want, even say it is not supported (like: open a bug report only if you can reproduce without it)... I don't see why not do it. I see the Typhoon decision against this: "can create reliance on alpha features/behaviors that get canned, and can cause instability that lowers the quality bar". Maybe we can reconsider if it causes a lot of bug reports?

Also, in my experience using Kubernetes, I really needed to change flags at times. There was a bug in the Linux kernel for CFS and another in Kubernetes in the way it configured CPU limits, which made latency-sensitive apps impossible to run, and one of the ways to make it work (this bug took more than a year to be solved in k8s) was adding a flag to the kubelet. Luckily, in kops I could specify that flag very easily (in the YAML) for components; it would have been a deal breaker in that case if kops didn't allow that. So, IMHO, I'd even go further and suggest considering adding an extra_flags param to all k8s components in the future (another discussion for another time :)). Kubernetes has a TON of features, and even more flags... I don't think we plan to provide a nice interface for ALL of them (and it expands quite fast). And one of the biggest disadvantages of managed services I've read people complain about is the restriction on controlling flags (again, no sources, just what I've read on mailing lists and reddit).

Therefore, I lean towards allowing users to take the risk, by having an "extra_flags" param, and warning that it is not supported/tainted or something. What do others think?
Code-wise, LGTM (we definitely need that refactor!). Some small comments/questions.
Of course, let's wait to agree/disagree on the general approach before merging/closing the PR.
Thanks in any case @invidian for the PR! :)
...lokomotive-kubernetes/bootkube/resources/charts/kube-apiserver/templates/kube-apiserver.yaml
Force-pushed from 2c2bd03 to bd5fe0b.
If we decide to expose only the parameters required for OIDC, which parameters should we expose exactly? Just 'issuer URL' or all of them? And if so, how do we expose them? As

    oidc {
      issuer_url = ...
    }

Then, if we use a block, should we put it inside the `cluster` block or keep it as a separate block? I think the Terraform implementation can stay generic as it is right now. We only need to change the user-facing interface.
An `oidc` block seems like a reasonable tradeoff between not giving the user an option to pass all sorts of apiserver flags and Lokomotive controlling what flags can be provided.
And which parameters should be exposed via the block then? All of them, or only `issuer_url`?
IMHO, all of them. We can't force users to use this auth method or nothing. If we want to expose those, we should expose them all so users can configure other OIDC providers, and use sensible defaults for the rest (like defaulting the other params to the values we will use for our auth). That way we have an interface that is as easy as possible for users of the component, but with the flexibility to tune other params if users don't want this auth. Does it make sense?
Yes, thanks. Then, do you think I can implement it in the following way?

    cluster "packet" {
      ...
    }

    oidc {
      issuer_url = "..."
    }

I would like to start building more reusable blocks and not duplicate the parameters across platforms.
Duplication can be reduced a little bit by embedding a common struct in the platform struct and reusing parsing functions, probably. But there will be some boilerplate code per platform. I think this impacts how the syntax will look for multi-cluster. But, trying to be pragmatic, I think we are already doing this with other fields (at least backend) and, as long as we don't add many more, it should be easy to move them back into the cluster config if we decide that is the path (not sure). So, IMHO, both options are fine. The big warning for embedding in the cluster block would be to not duplicate all the code (i.e. using common structs and parsing functions, if the resulting code is not a PITA to work with, or some other smarter trick; I don't know, maybe in any case there is too much boilerplate?).
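As a minimal sketch of the embedding idea discussed above, assuming hypothetical struct and field names and gohcl-style struct tags (the actual Lokomotive config types may look different):

```go
package config

// OIDC holds the user-facing OIDC settings shared by all platforms.
// Field and tag names are illustrative only.
type OIDC struct {
	IssuerURL     string `hcl:"issuer_url,optional"`
	ClientID      string `hcl:"client_id,optional"`
	UsernameClaim string `hcl:"username_claim,optional"`
	GroupsClaim   string `hcl:"groups_claim,optional"`
}

// PacketConfig embeds the common OIDC struct so the same parsing code can be
// reused across platforms instead of duplicating the fields per platform.
type PacketConfig struct {
	ClusterName string `hcl:"cluster_name"`
	DNSZone     string `hcl:"dns_zone"`

	OIDC *OIDC `hcl:"oidc,block"`
}
```

The same embedded struct could then be added to the AWS and bare-metal config structs, leaving only a small amount of per-platform boilerplate.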
Force-pushed from a3e5b5e to 8d74b26.
One issue I found during testing.
Force-pushed from 8d74b26 to 5681bf8.
LGTM
Approving. Since the policy requires two approvers, I cannot merge despite being the second person to approve, as I am the co-author.
A few nits, otherwise LGTM.
    @@ -105,6 +107,10 @@ func NewConfig() *config {
    	}
    }

    func (c *config) clusterDomain() string {
    	return fmt.Sprintf("%s.%s", c.ClusterName, c.DNSZone)
Does this not take into account the variable `cluster_domain_suffix`? What if the user provides some other `cluster_domain_suffix`? Correct me if I am wrong, I am just trying to identify a potential discrepancy between the variables.
I don't think `cluster_domain_suffix` is needed here.
`cluster_domain_suffix` is for cluster-internal DNS, no?
yes.
Force-pushed from 5681bf8 to 86dd93e.
Force-pushed from 560af76 to 84ebae9.
This commit adds the ability to provide extra flags for the self-hosted kube-apiserver, so this feature can be used when configuring dex and gangway for OIDC authentication to the cluster, as currently any manual changes to kube-apiserver pods are overwritten during a cluster upgrade. Update generated assets. Signed-off-by: Mateusz Gozdek <mateusz@kinvolk.io> Signed-off-by: Imran Pochi <imran@kinvolk.io>
With 'terraform fmt'. Signed-off-by: Mateusz Gozdek <mateusz@kinvolk.io>
- To configure OIDC-based authentication for Lokomotive, an `oidc` block is added to the Lokomotive configuration. Currently 4 fields are part of the `oidc` block, with defaults:
  - issuer_url (default is "https://dex.<CLUSTER_DOMAIN>")
  - client_id (default is "gangway")
  - username_claim (default is "email")
  - groups_claim (default is "groups")

  `ConfigureOIDCFlags` sets the defaults if not provided by the user and returns the list of flags that should be appended to the Kubernetes API server extra flags (KubeAPIServerExtraFlags).

  `Validate` checks two conditions:
  - issuer_url is a valid URL
  - the URL scheme is https

  Signed-off-by: Imran Pochi <imran@kinvolk.io>
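To make the described behaviour concrete, here is a hedged sketch of what `ConfigureOIDCFlags` and `Validate` could look like, based only on the defaults and checks listed in this commit message. The types, signatures and package layout are assumptions; the real implementation may differ:

```go
package oidc

import (
	"fmt"
	"net/url"
)

// Config mirrors the four fields of the `oidc` block. Sketch only.
type Config struct {
	IssuerURL     string
	ClientID      string
	UsernameClaim string
	GroupsClaim   string
}

// ConfigureOIDCFlags fills in the documented defaults and returns the flags
// to append to the kube-apiserver extra flags (KubeAPIServerExtraFlags).
func (c *Config) ConfigureOIDCFlags(clusterDomain string) []string {
	if c.IssuerURL == "" {
		c.IssuerURL = fmt.Sprintf("https://dex.%s", clusterDomain)
	}
	if c.ClientID == "" {
		c.ClientID = "gangway"
	}
	if c.UsernameClaim == "" {
		c.UsernameClaim = "email"
	}
	if c.GroupsClaim == "" {
		c.GroupsClaim = "groups"
	}

	return []string{
		fmt.Sprintf("--oidc-issuer-url=%s", c.IssuerURL),
		fmt.Sprintf("--oidc-client-id=%s", c.ClientID),
		fmt.Sprintf("--oidc-username-claim=%s", c.UsernameClaim),
		fmt.Sprintf("--oidc-groups-claim=%s", c.GroupsClaim),
	}
}

// Validate checks that issuer_url is a valid URL and that its scheme is https.
func (c *Config) Validate() error {
	u, err := url.Parse(c.IssuerURL)
	if err != nil {
		return fmt.Errorf("parsing issuer_url: %w", err)
	}
	if u.Scheme != "https" {
		return fmt.Errorf("issuer_url scheme must be https, got %q", u.Scheme)
	}
	return nil
}
```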
- Add `oidc` block to the AWS cluster config for configuring OIDC-based authentication on AWS.
- Configure KubeAPIServerExtraFlags: when the `oidc` block is present, the OIDC-related flags are added to KubeAPIServerExtraFlags.
- Configure the `oidc` block only if it is present in the configuration.
- Update the documentation accordingly.
- Update the CI configuration to make use of the same.
- Update the example configuration for production use.

Signed-off-by: Imran Pochi <imran@kinvolk.io>
- Add `oidc` block to the Packet cluster config for configuring OIDC-based authentication on Packet.
- Configure KubeAPIServerExtraFlags: when the `oidc` block is present, the OIDC-related flags are added to KubeAPIServerExtraFlags.
- Configure the `oidc` block only if it is present in the configuration.
- Update the documentation accordingly.
- Update the CI configuration (packet and packet-arm) to make use of the same.
- Update the example configuration for production use.

Signed-off-by: Imran Pochi <imran@kinvolk.io>
- Add `oidc` block to the bare-metal cluster config for configuring OIDC-based authentication on bare metal.
- Enable validation of the configuration with `checkValidConfig`.
- Configure KubeAPIServerExtraFlags: when the `oidc` block is present, the OIDC-related flags are added to KubeAPIServerExtraFlags.
- Configure the `oidc` block only if it is present in the configuration.
- Update the documentation accordingly.
- Update the CI configuration to make use of the same.

Signed-off-by: Imran Pochi <imran@kinvolk.io>
- Update how-to guide with oidc block configuration. Signed-off-by: Imran Pochi <imran@kinvolk.io>
Force-pushed from 84ebae9 to a728b1a.
@surajssd please re-review.
lgtm
This PR adds the ability to provide OIDC configuration for Lokomotive-supported platforms in the form of an `oidc` block inside the `cluster` block of the Lokomotive configuration. This feature can then be used to remove the manual step of adding extra flags to the apiserver when configuring dex and gangway; previously, those manual changes were also not persistent across cluster upgrades.
Closes #279
Signed-off-by: Mateusz Gozdek mateusz@kinvolk.io
Signed-off-by: Imran Pochi imran@kinvolk.io
TODO: Add some tests? I was thinking about adding the following snippet to the CI configuration for smoke tests.

UPDATE: Removed the `kube_apiserver_extra_flags` from tests as mentioned above, since we are exposing the `oidc` block. I have added an empty `oidc` block in the CI configuration so that the apiserver starts with OIDC configured with default values.

TODO: