Add Custom CA Bundle Injection into DSP via DSPA #440
Conversation
A new image has been built to help with testing out this PR. To use this image, run the following:
cd $(mktemp -d)
git clone git@github.com:opendatahub-io/data-science-pipelines-operator.git
cd data-science-pipelines-operator/
git fetch origin pull/440/head
git checkout -b pullrequest 2c3f9d3515cef6c8548b05a1fb65b4c54c831baf
make deploy IMG="quay.io/opendatahub/data-science-pipelines-operator:pr-440"
More instructions here on how to deploy and test a Data Science Pipelines Application.
Change to PR detected. A new PR build was completed.
This change allows users to specify a ConfigMap that contains a CA bundle. The CA bundle is injected into the API server pod.
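As a rough sketch of what this looks like from the user side (the exact field names under apiServer, and the API version, are assumptions based on the discussion in this PR and may differ from the merged implementation):

```yaml
# Hypothetical DSPA excerpt: point the API server at a user-provided
# ConfigMap holding the CA bundle. Field names are illustrative only.
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
kind: DataSciencePipelinesApplication
metadata:
  name: sample
spec:
  apiServer:
    cABundle:
      configMapName: custom-ca-bundle   # ConfigMap created by the user
      configMapKey: ca-bundle.crt       # key inside that ConfigMap
```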
Change to PR detected. A new PR build was completed.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: rimolive. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold
/hold
I didn't see this. The first thing that happens is that the health check fails.
Searching the logs for the word "cabundle" turns up empty.
edit: I was hitting a separate known issue where
Verified. Pipeline artifacts are written to my self-signed Minio. Feel free to unhold.
/unhold |
Mmh, it would have been good if this used the central cluster additionalCA and system CA bundle, via ConfigMap, as is done in Notebooks. Here, instead, you have another ConfigMap that the PEM content needs to go into. Essentially, in Notebooks, the ODH operator creates the secret in the namespace and maps its content into the notebook container volume automatically, without defining the secret by name in OdhDashboardConfig. Your approach is to define it in the DataSciencePipelines CR and reference any ConfigMap by name via the CR's configMapKey. Which is fine, but it means additional configuration work. The advantage: your way, you can link up any secret even if you don't have access to the cluster Proxy config.
Long term, we are pursuing a global cluster-level approach. Whether that uses the Proxy or not is TBD. A future global cluster-level approach does not obviate the need for component-level control as well.
First of all: it is great that one is now able to configure custom CAs and self-signed certs in the DSPA CR. Extremely helpful, thank you for the hard work.
The cluster-level Proxy config for CAs is just a gathering point for the publicly trusted CAs that come with OpenShift and don't have to be configured, plus additional trusted CAs that are not publicly trusted, or even just self-signed certs.
Agreed: in case customers use third-party operators to collect and inject CAs, the flow could be: use the Proxy config CA bundle if nothing is defined in a field in the component config CR at component level. Good idea and approach. By the way: the CAs and self-signed certs defined in the Proxy config, injectable together with the core Red Hat OS based publicly trusted CAs via ConfigMap by the Cluster Network Operator at cluster level, can be applied and used as a source of trust for any connection, regardless of whether it goes to HTTP_PROXY targets or NO_PROXY targets. Similar to how imagestreams work in detail, I sometimes get the impression that the possibility to add your own certs and CAs via the Proxy config is not very well known or documented by Red Hat, especially that all-in-one bundle (public system plus own additional) injection mechanism.
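For reference, the injection mechanism described above works by labeling a ConfigMap; the Cluster Network Operator then fills it with the merged system-plus-additional trust bundle. A minimal sketch (the ConfigMap name here is arbitrary; the label is the standard OpenShift one):

```yaml
# Empty ConfigMap in the consuming namespace; the Cluster Network Operator
# populates it with the combined trusted CA bundle because of the label.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca-bundle
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
```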
The issue resolved by this Pull Request:
Resolves #362
Description of your changes:
Add CA injection capabilities to the API server pod and pipeline S3 copy steps.
Testing instructions
Deploy DSPO
Deploy a Minio in a self-signed external OCP cluster (preferably different from the DSP host cluster)
On the DSP host cluster, deploy a DSPA, first without the CA bundle:
dspa.yaml
Check the DSPO logs; you should see an error mentioning x509 and instructions for adding a CA bundle.
From the self-signed Minio OCP cluster, get the CA bundle from the minio namespace. Put this value in a ConfigMap:
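One way to capture the bundle and create the ConfigMap is sketched below; the secret name minio-tls, key ca.crt, and ConfigMap name custom-ca-bundle are hypothetical placeholders, not names from this PR:

```shell
# On the Minio cluster: extract the serving CA from the (assumed) TLS secret.
oc get secret minio-tls -n minio -o jsonpath='{.data.ca\.crt}' | base64 -d > ca-bundle.crt

# On the DSP host cluster: create the ConfigMap in the DSPA namespace.
oc create configmap custom-ca-bundle \
  --from-file=ca-bundle.crt=ca-bundle.crt \
  -n <dspa-namespace>
```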
Deploy this in the DSPA namespace. Then update the dspa to be:
dspa.yaml
Once deployed, you will see the DSPA deploy successfully. Run an iris pipeline and make sure artifact passing is successful (does S3 data get stored successfully? does the pipeline run to completion?).
Also confirm there are no regressions (if it's an HTTP Minio, does that still work?), etc.
Checklist