From ccfaa95e0ccb6beb8de941210191b9c05f6aef9d Mon Sep 17 00:00:00 2001
From: Breda McColgan <93921684+bredamc@users.noreply.github.com>
Date: Wed, 17 Apr 2024 13:10:21 +0100
Subject: [PATCH] ENG-3998: Updates for project details redesign (#26)

* Updates for project details redesign
* Updates for project details redesign: models
* Updates for project details redesign: workbenches, pipelines, and data connections
---
 .../modules/ROOT/pages/creating-a-workbench.adoc    |  4 ++--
 .../pages/creating-data-connections-to-storage.adoc |  7 ++-----
 .../pages/deploying-a-model-multi-model-server.adoc |  4 ++--
 .../deploying-a-model-single-model-server.adoc      |  6 +++---
 .../ROOT/pages/enabling-data-science-pipelines.adoc |  9 +++------
 ...nning-a-pipeline-generated-from-python-code.adoc |  9 ++-------
 .../pages/running-a-script-to-install-storage.adoc  |  2 +-
 .../pages/setting-up-your-data-science-project.adoc | 13 +++++++------
 .../modules/ROOT/pages/testing-the-model-api.adoc   |  3 +--
 9 files changed, 23 insertions(+), 34 deletions(-)

diff --git a/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc b/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
index d4903dc..f528b7c 100644
--- a/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
+++ b/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
@@ -11,7 +11,7 @@ A workbench is an instance of your development and experimentation environment.
 
 . Navigate to the project detail page for the data science project that you created in xref:setting-up-your-data-science-project.adoc[Setting up your data science project].
 
-. Click the *Create workbench* button.
+. Click the *Workbenches* tab, and then click the *Create workbench* button.
 +
 image::workbenches/ds-project-create-workbench.png[Create workbench button]
 
@@ -43,7 +43,7 @@ image::workbenches/create-workbench-form-button.png[Create workbench button]
 
 .Verification
 
-In the project details page, the status of the workbench changes from `Starting` to `Running`.
+In the *Workbenches* tab for the project, the status of the workbench changes from `Starting` to `Running`.
 
 image::workbenches/ds-project-workbench-list.png[Workbench list]
diff --git a/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc b/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc
index 377ffa3..7c27024 100644
--- a/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc
+++ b/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc
@@ -23,9 +23,7 @@ Create data connections to your two storage buckets.
 
 . In the {productname-short} dashboard, navigate to the page for your data science project.
 
-. Under *Components*, click *Data connections*.
-
-. Click *Add data connection*.
+. Click the *Data connections* tab, and then click *Add data connection*.
 +
 image::projects/ds-project-add-dc.png[Add data connection]
 
@@ -49,8 +47,7 @@ image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts fo
 
 .Verification
 
-
-Check to see that your data connections are listed in the project.
+In the *Data connections* tab for the project, check to see that your data connections are listed.
 
 image::projects/ds-project-dc-list.png[List of project data connections]
diff --git a/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc b/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc
index 50a4646..5d4116d 100644
--- a/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc
+++ b/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc
@@ -9,9 +9,9 @@
 
 .Procedure
 
-. In the {productname-short} dashboard, navigate to the *Models and model servers* section of your project.
+. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
 +
-image::model-serving/ds-project-model-list-add.png[Models and model servers]
+image::model-serving/ds-project-model-list-add.png[Models]
 
 . In the *Multi-model serving platform* tile, click *Add model server*.
 
diff --git a/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc b/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc
index dcc6b75..93d39a0 100644
--- a/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc
+++ b/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc
@@ -10,11 +10,11 @@
 
 .Procedure
 
-. In the {productname-short} dashboard, navigate to the *Models and model servers* section of your project.
+. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
 +
-image::model-serving/ds-project-model-list-add.png[Models and model servers]
+image::model-serving/ds-project-model-list-add.png[Models]
 
-. Under *Single-model serving platform*, click *Deploy model*.
+. In the *Single-model serving platform* tile, click *Deploy model*.
 . In the form, provide the following values:
 .. For *Model Name*, type `fraud`.
 .. For *Serving runtime*, select `OpenVINO Model Server`.
diff --git a/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc b/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc
index 675e674..114b34d 100644
--- a/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc
+++ b/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc
@@ -15,7 +15,7 @@ In this {deliverable}, you implement an example pipeline by using the JupyterLab
 
 . In the {productname-short} dashboard, click *Data Science Projects* and then select *Fraud Detection*.
 
-. Navigate to the *Pipelines* section.
+. Click the *Pipelines* tab.
 
 . Click *Configure pipeline server*.
 +
@@ -38,13 +38,13 @@ You must wait until the pipeline configuration is complete before you continue a
 
 .Verification
 
-Check the *Pipelines* page. Pipelines are enabled when the *Configure pipeline server* button no longer appears.
+Check the *Pipelines* tab for the project. Pipelines are enabled when the *Configure pipeline server* button no longer appears.
 
 image::projects/ds-project-create-pipeline-server-complete.png[Create pipeline server complete]
 
 [NOTE]
 ====
-If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again. 
+If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
 
 image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server]
 ====
@@ -56,6 +56,3 @@ xref:creating-a-workbench.adoc[Creating a workbench and selecting a notebook ima
 
 //xref:automating-workflows-with-pipelines.adoc[Automating workflows with data science pipelines]
 //xref:running-a-pipeline-generated-from-python-code.adoc[Running a data science pipeline generated from Python code]
-
-
-
diff --git a/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc b/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc
index b3fe6aa..6a44770 100644
--- a/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc
+++ b/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc
@@ -9,7 +9,7 @@ This {deliverable} does not delve into the details of how to use the SDK. Instea
 +
 * `7_get_data_train_upload.py` is the main pipeline code.
 * `get_data.py`, `train_model.py`, and `upload.py` are the three components of the pipeline.
-* `build.sh` is a script that builds the pipeline and creates the YAML file. 
+* `build.sh` is a script that builds the pipeline and creates the YAML file.
 +
 For your convenience, the output of the `build.sh` script is provided in the `7_get_data_train_upload.yaml` file. The `7_get_data_train_upload.yaml` output file is located in the top-level `fraud-detection` directory.
 
@@ -17,7 +17,7 @@ For your convenience, the output of the `build.sh` script is provided in the `7_
 
 . Upload the `7_get_data_train_upload.yaml` file to {productname-short}.
 
-.. In the {productname-short} dashboard, navigate to your data science project page and then click *Import pipeline*.
+.. In the {productname-short} dashboard, navigate to your data science project page. Click the *Pipelines* tab and then click *Import pipeline*.
 +
 image::pipelines/dsp-pipeline-import.png[]
 
@@ -52,8 +52,3 @@ A new run starts immediately and opens the run details page.
 image::pipelines/pipeline-run-in-progress.png[]
 
 There you have it: a pipeline created in Python that is running in {productname-short}.
-
-
-
-
-
diff --git a/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc b/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc
index 0bd5cc9..671dd5d 100644
--- a/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc
+++ b/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc
@@ -19,7 +19,7 @@ NOTE: If you want to connect to your own storage, see xref:creating-data-connect
 
 You must know the OpenShift resource name for your data science project so that you run the provided script in the correct project. To get the project's resource name:
 
-In the {productname-short} dashboard, select *Data Science Projects* and then hover your cursor over the *?* icon next to the project name. A text box appears with information about the project, including it's resource name:
+In the {productname-short} dashboard, select *Data Science Projects* and then click the *?* icon next to the project name. A text box appears with information about the project, including its resource name:
 
 image::projects/ds-project-list-resource-hover.png[Project list resource name]
 
diff --git a/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc b/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc
index 1bd428f..9a37bbf 100644
--- a/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc
+++ b/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc
@@ -23,21 +23,22 @@ image::projects/ds-project-new-form.png[New data science project form]
 
 .Verification
 
-You can now see its initial state. There are five types of project components:
+You can now see its initial state. Individual tabs provide more information about the project components and project access permissions:
 
 image::projects/ds-project-new.png[New data science project]
 
 ** *Workbenches* are instances of your development and experimentation environment. They typically contain IDEs, such as JupyterLab, RStudio, and Visual Studio Code.
 
-** A *Cluster storage* is a volume that persists the files and data you're working on within a workbench. A workbench has access to one or more cluster storage instances.
+** *Pipelines* contain the data science pipelines that are executed within the project.
 
-** *Data connections* contain configuration parameters that are required to connect to a data source, such as an S3 object bucket.
+** *Models* allow you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project. One model server can host multiple models.
+
+** *Cluster storage* is a persistent volume that retains the files and data you're working on within a workbench. A workbench has access to one or more cluster storage instances.
 
-** *Pipelines* contain the Data Science pipelines that are executed within the project.
+** *Data connections* contain configuration parameters that are required to connect to a data source, such as an S3 object bucket.
 
-** *Models and model servers* allow you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project. One model server can host multiple models.
+** *Permissions* define which users and groups can access the project.
 
 .Next step
 xref:storing-data-with-data-connections.adoc[Storing data with data connections]
 
-
diff --git a/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc b/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
index 86d43ee..ef9b194 100644
--- a/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
+++ b/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
@@ -6,7 +6,7 @@ Now that you've deployed the model, you can test its API endpoints.
 
 .Procedure
 
-. In the {productname-short} dashboard, navigate to the project details page and scroll down to the *Models and model servers* section.
+. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
 
 . Take note of the model's Inference endpoint. You need this information when you test the model API.
 +
@@ -24,4 +24,3 @@ If you deployed your model with single-model serving, follow the directions in `
 
 xref:automating-workflows-with-pipelines.adoc[Automating workflows with data science pipelines]
 xref:running-a-pipeline-generated-from-python-code.adoc[Running a data science pipeline generated from Python code]
-
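The `testing-the-model-api.adoc` changes above direct the reader to note the model's inference endpoint. As a rough sketch of what a caller does with that endpoint, the snippet below builds a KServe v2-style REST `infer` request for the `fraud` model that the docs deploy on OpenVINO Model Server. The input tensor name (`dense_input`), the feature values, and the endpoint path are illustrative assumptions, not details taken from the workshop files.

```python
import json


def build_infer_request(features):
    """Build a KServe v2 REST 'infer' body for a single sample (shape [1, n])."""
    return {
        "inputs": [
            {
                "name": "dense_input",        # assumed input tensor name
                "shape": [1, len(features)],  # one sample, n features
                "datatype": "FP32",
                "data": features,
            }
        ]
    }


# Illustrative transaction features; real values come from the workshop data.
sample = [0.31, 1.95, -0.12, 0.87, 0.0]
payload = build_infer_request(sample)

# Sending it (placeholder URL -- substitute the Inference endpoint shown
# in the Models tab):
#   requests.post(f"{endpoint}/v2/models/fraud/infer", json=payload)
print(json.dumps(payload)["inputs" in payload and 0])  # sanity: serializes cleanly
```

The v2 protocol expects the tensor shape to match the model's declared input, so a deployed model with a different input name or feature count would need the corresponding values here.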