
Fix an ability to use TLS with K8s infra and helm chart #13946

Merged (8 commits) on Jul 30, 2019

Conversation

sleshchenko
Member

@sleshchenko sleshchenko commented Jul 22, 2019

What does this PR do?

It's ready for review, but note that I'm still testing these changes with different combinations of configuration.
A corresponding pull request will be created against chectl before this one is merged.

It includes the following changes, separated into different commits to make review easier:

  1. The useCertManager, useStaging, and email values were related to ClusterIssuer, but I discovered that the single-host and default-host server strategies are not working at all, and only those configurations would allow setting up the certificate manager easily.
    With the multi-host server strategy, wildcard certificates are required, which in turn require a DNS challenge, and that is specific to the DNS provider used for the installation. With some DNS providers (like Cloud DNS) the DNS challenge can be automated quite easily; with others, the user has to perform it manually. So I do not see a straightforward and clean way to make the Helm chart handle this for all installations. This can certainly be improved and automated later, but I believe that requiring a secret with a ready-to-use certificate, along with instructions on how to obtain one for a given installation, is a good working starting point.
    So, the creation of the Certificate and ClusterIssuer, and the related values, are removed from the chart.

  2. The default strategy for workspace namespaces is to create each workspace in a dedicated namespace. In such a configuration, the user would need to set up replication of the TLS certificate across all Che namespaces,
    OR (as implemented here) Che Server may propagate the certificate to workspace namespaces; it just needs to know the private key and certificate. This includes both Che Server and Helm chart changes.

  3. During testing I found that cheNamespace is not used anywhere; after investigating when it was used, I discovered that it is an outdated name for cheWorkspacesNamespace.
    See a48d4b4#diff-7e81184b59cb2730e14842271ecf4261R34
    19ecb7d
    So it is removed. To get the namespace where Che is located, .Release.Namespace can be used in templates.

  4. When TLS is configured, Che Server must be provided with the che-tls certificate, and if this certificate is self-signed there is no need to request a separate self-signed secret with the same content. So now the che-tls secret is reused for the CHE_SELF_SIGNED_CERT env var.
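For reference, change 2 above relies on a certificate and private key being available in a secret. A minimal sketch of what such a secret could look like, assuming a standard kubernetes.io/tls secret named che-tls in the Che namespace (the exact keys the chart reads may differ):

```yaml
# Sketch of a kubernetes.io/tls Secret holding the wildcard certificate.
# Che Server can read tls.crt / tls.key from it and propagate them into
# workspace namespaces. Name and namespace here follow this PR's example.
apiVersion: v1
kind: Secret
metadata:
  name: che-tls
  namespace: che
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```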
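To illustrate change 3, a template can reference the release namespace directly instead of a cheNamespace value. This is a hypothetical fragment, not taken from the actual chart:

```yaml
# Hypothetical ConfigMap fragment showing .Release.Namespace usage;
# the real chart templates may look different.
apiVersion: v1
kind: ConfigMap
metadata:
  name: che
  namespace: {{ .Release.Namespace }}
data:
  che.namespace: {{ .Release.Namespace | quote }}
```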
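One way the reuse described in change 4 could be wired is via a secretKeyRef on the env var. This is a hedged sketch only; the key name and the exact wiring are assumptions, and the actual chart may mount or template it differently:

```yaml
# Hypothetical container spec fragment: feed CHE_SELF_SIGNED_CERT from
# the existing che-tls secret instead of a separate self-signed secret.
env:
  - name: CHE_SELF_SIGNED_CERT
    valueFrom:
      secretKeyRef:
        name: che-tls
        key: ca.crt   # assumed key; the chart may use tls.crt instead
```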

Example command to test my changes:

```shell
helm upgrade --install che --force --namespace che \
  --set global.ingressDomain=che.gcps.my-ide.cloud \
  --set cheImage=sleshchenko/che-server:tls \
  --values=./values/tls.yaml -f=./values/multi-user.yaml ./
```
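Before running the command above, the che-tls secret must exist in the target namespace (see change 1). A sketch of the prerequisite commands, assuming PEM files cert.pem and key.pem for the wildcard certificate (the file names are placeholders):

```shell
# Create the TLS secret the chart now expects; file names are illustrative.
kubectl create namespace che
kubectl create secret tls che-tls --namespace che \
  --cert=cert.pem --key=key.pem
```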

I've tested the following scenario:

  1. Deploy Che and make sure that all pods run successfully.
  2. Open the dashboard.
  3. Create and run a workspace from the Java Maven stack.
  4. Make sure that the workspace starts and Theia is available.

I tried it with the following configurations:

  1. HTTPS multi-user Che where workspaces run in dedicated namespaces;
  2. HTTPS multi-user Che where workspaces run in the same namespace as Che Server;
  3. HTTPS multi-user Che where workspaces run in a dedicated namespace;
  4. HTTPS single-user Che where workspaces run in dedicated namespaces;
  5. HTTPS single-user Che where workspaces run in the same namespace as Che Server;
  6. HTTPS single-user Che where workspaces run in a dedicated namespace;
  7. Self-signed certificate multi-user Che where workspaces run in a dedicated namespace.
    With a self-signed certificate, Che deployed successfully, but even after importing the self-signed certificate into the browser, I got this error when opening Che:
    Screenshot_20190725_152046
    This is a fairly well-known issue; I guess some extra actions need to be taken during certificate generation, but I'm not sure I should investigate it further right now.
    After accepting each Che host as trusted and trying to start a workspace, I got another error on the Plugin Broker side. A dedicated issue was created: Test self-signed certificates for Che on Kubernetes/OS deployed with chectl (helm/operator installers) #14035

What issues does this PR fix or reference?

These issues were found during work on #13869

It solves #13986, #13996, #13997

Release Notes

N/A

Docs PR

N/A

@sleshchenko
Member Author

ci-build

@sleshchenko sleshchenko changed the title Change TLS configuring in helm chart Fix an ability to use TLS with K8s infra and helm chart Jul 26, 2019
@sleshchenko
Member Author

@benoitf @l0rd It's ready for review and tested. Could you please review it?
@benoitf It would be cool if you were able to test it on your GCP as well. Note that the che-tls secret should be created with a wildcard TLS certificate.

@benoitf
Contributor

benoitf commented Jul 26, 2019

I'm on PTO until Monday.

Contributor

@benoitf benoitf left a comment


Tested on GCP
I've updated the document for GCP / TLS installation

Signed-off-by: Sergii Leshchenko <sleshche@redhat.com>
@sleshchenko
Member Author

ci-build
