[SPIKE] Storage controller needs to support a CA cert for self-signed certificates #61
Comments
Hi @Jooho Does this mean that once this is implemented we are deprecating the …? And does that secret need to be typed by the user? Just in case we need to adapt the dashboard for that. Thanks for pinging me, we can start a conversation about implementing this.
@lucferbux No, it just adds one more param to the data connection UI.
Ok, I guess that's fairly straightforward, but we will need UX for this. Are we supposed to be able to upload the cert as a file or something?
It should be the same as the OpenShift secret menu. It supports …
cc @vconzola @kywalker-rh I'm pinging you here; once model serving finishes the spike we can start opening our trackers.
@vconzola @kywalker-rh I think we might wanna start to take a look.
We have an ongoing point about CA bundles. I would like for this to be more holistic. DSPA, Model Serving, Elyra Notebooks... they are all suffering from the same CA issue; I wonder if we can handle this at the ODH deployment level rather than in each component. cc @jwforres
I agree with @andrewballantyne.
Thanks for pinging me @lucferbux. I just created a UX issue to start looking at it (opendatahub-io/odh-dashboard#1825). Since this is Projects/Admin related I have assigned @xianli123 to it for now, but that could change depending on workload.
@kywalker-rh I will keep an eye on this issue.
Implementation is done on the serving team's side. The remaining part is the UI. This issue is just for tracking purposes, so do not add any sprint label.
For posterity, I mentioned this on the Slack thread. I'm a little confused about what the UI should do here... I think a cluster-level solution is what we were after at the last AC. Adding the CA_BUNDLE to the data connection is a band-aid and not intended to be a solution, afaik. We are happy to take on the effort if it is needed as a real solution and not a stop-gap; UX has some plans for S3-compatible flows (standard AWS S3 data connections wouldn't have this field).
I agree with Andrew; this solution could be misleading, since data connections are used by other resources (pipelines, notebooks), and having that field would require special treatment for model serving. Without more context, if we allow custom certs in data connections the user will assume they work with everything. We can add this as an intermediate step, but we would need a broader solution soon.
From the model serving perspective, we are OK with any other possible approach, because odh-model-controller will convert the value into a secret. But one thing I am concerned about is the behavior of the field … After the KServe change is merged, I think we need to send a similar PR to ModelMesh so it works the same way.
Migrated to: https://issues.redhat.com/browse/RHOAIENG-1001
Upstream ModelMesh provides a feature to set a self-signed certificate, but there is no input field for this in the ODH UI. As a workaround, attempting to update this manually was not successful because the odh-model-controller kept continuously updating the storage-config secret. Therefore, we need two modifications in the odh-model-controller:
These changes will allow us to address the issue effectively.
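For reference, here is a minimal sketch of what a storage-config entry carrying the certificate could look like once the controller supports it. This is for illustration only: the entry name, endpoint, and the certificate field follow upstream ModelMesh storage-config conventions and are assumptions, not confirmed in this thread.

```yaml
# Hypothetical storage-config entry (assumed format, following upstream
# ModelMesh conventions); key names and values are placeholders.
kind: Secret
apiVersion: v1
metadata:
  name: storage-config
  namespace: my-project            # assumed namespace
stringData:
  my-s3-connection: |
    {
      "type": "s3",
      "access_key_id": "<access-key>",
      "secret_access_key": "<secret-key>",
      "endpoint_url": "https://minio.example.com:9000",
      "region": "us-east-1",
      "certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
    }
```

The point of the modifications above is that the controller would populate this entry itself instead of overwriting manual edits to it.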
For the UI integration (@lucferbux), this will be the secret example:
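The original example attached to this comment is not preserved here; the following is a minimal sketch of a data connection secret extended with a CA bundle entry. The CA_BUNDLE key comes from the discussion above; the other keys, labels, and names are assumptions about the dashboard's data connection format, not confirmed in this issue.

```yaml
# Hypothetical data connection secret with an extra CA bundle field.
# Only CA_BUNDLE is taken from this discussion; other key names are assumptions.
kind: Secret
apiVersion: v1
metadata:
  name: aws-connection-my-storage     # assumed naming convention
  labels:
    opendatahub.io/dashboard: "true"  # assumed dashboard label
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access-key>
  AWS_SECRET_ACCESS_KEY: <secret-key>
  AWS_S3_ENDPOINT: https://minio.example.com:9000
  AWS_DEFAULT_REGION: us-east-1
  AWS_S3_BUCKET: models
  CA_BUNDLE: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```

Whether the certificate is pasted as text or uploaded as a file in the UI is the UX question raised earlier in this thread.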