ENG-3998: Updates for project details redesign (#26)
* Updates for project details redesign

* Updates for project details redesign: models

* Updates for project details redesign: workbenches, pipelines, and data connections
bredamc authored Apr 17, 2024
1 parent b9b5daf commit ccfaa95
Showing 9 changed files with 23 additions and 34 deletions.
workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc: 4 changes (2 additions & 2 deletions)
@@ -11,7 +11,7 @@ A workbench is an instance of your development and experimentation environment.

. Navigate to the project detail page for the data science project that you created in xref:setting-up-your-data-science-project.adoc[Setting up your data science project].

- . Click the *Create workbench* button.
+ . Click the *Workbenches* tab, and then click the *Create workbench* button.
+
image::workbenches/ds-project-create-workbench.png[Create workbench button]

@@ -43,7 +43,7 @@ image::workbenches/create-workbench-form-button.png[Create workbench button]

.Verification

- In the project details page, the status of the workbench changes from `Starting` to `Running`.
+ In the *Workbenches* tab for the project, the status of the workbench changes from `Starting` to `Running`.

image::workbenches/ds-project-workbench-list.png[Workbench list]

@@ -23,9 +23,7 @@ Create data connections to your two storage buckets.

. In the {productname-short} dashboard, navigate to the page for your data science project.

- . Under *Components*, click *Data connections*.
-
- . Click *Add data connection*.
+ . Click the *Data connections* tab, and then click *Add data connection*.
+
image::projects/ds-project-add-dc.png[Add data connection]

@@ -49,8 +47,7 @@ image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts form]


.Verification

- Check to see that your data connections are listed in the project.
+ In the *Data connections* tab for the project, check to see that your data connections are listed.

image::projects/ds-project-dc-list.png[List of project data connections]
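
A data connection like the ones created above typically holds standard S3 client settings: an endpoint, an access key, a secret key, and a bucket name. As a quick sanity check outside the dashboard, here is a minimal sketch using `boto3`; the endpoint, credentials, and bucket name are placeholders, not values from this workshop:

[source,python]
----
import boto3

# Placeholder values; substitute the fields from your own data connection.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",       # endpoint field
    aws_access_key_id="EXAMPLE_ACCESS_KEY",      # access key field
    aws_secret_access_key="EXAMPLE_SECRET_KEY",  # secret key field
)

# List a few objects to confirm the credentials and bucket name work.
response = s3.list_objects_v2(Bucket="my-storage-bucket", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
----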

@@ -9,9 +9,9 @@

.Procedure

- . In the {productname-short} dashboard, navigate to the *Models and model servers* section of your project.
+ . In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
+
- image::model-serving/ds-project-model-list-add.png[Models and model servers]
+ image::model-serving/ds-project-model-list-add.png[Models]

. In the *Multi-model serving platform* tile, click *Add model server*.

@@ -10,11 +10,11 @@

.Procedure

- . In the {productname-short} dashboard, navigate to the *Models and model servers* section of your project.
+ . In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
+
- image::model-serving/ds-project-model-list-add.png[Models and model servers]
+ image::model-serving/ds-project-model-list-add.png[Models]

- . Under *Single-model serving platform*, click *Deploy model*.
+ . In the *Single-model serving platform* tile, click *Deploy model*.
. In the form, provide the following values:
.. For *Model Name*, type `fraud`.
.. For *Serving runtime*, select `OpenVINO Model Server`.
@@ -15,7 +15,7 @@ In this {deliverable}, you implement an example pipeline by using the JupyterLab

. In the {productname-short} dashboard, click *Data Science Projects* and then select *Fraud Detection*.

- . Navigate to the *Pipelines* section.
+ . Click the *Pipelines* tab.

. Click *Configure pipeline server*.
+
@@ -38,13 +38,13 @@ You must wait until the pipeline configuration is complete before you continue a

.Verification

- Check the *Pipelines* page. Pipelines are enabled when the *Configure pipeline server* button no longer appears.
+ Check the *Pipelines* tab for the project. Pipelines are enabled when the *Configure pipeline server* button no longer appears.

image::projects/ds-project-create-pipeline-server-complete.png[Create pipeline server complete]

[NOTE]
====
- If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
+ If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server]
====
@@ -56,6 +56,3 @@ xref:creating-a-workbench.adoc[Creating a workbench and selecting a notebook image]
//xref:automating-workflows-with-pipelines.adoc[Automating workflows with data science pipelines]

//xref:running-a-pipeline-generated-from-python-code.adoc[Running a data science pipeline generated from Python code]
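
Once the *Configure pipeline server* button disappears, the server can also be reached programmatically. A minimal sketch with the `kfp` SDK, assuming you have the pipeline server's route URL and an OpenShift bearer token (both are placeholders here):

[source,python]
----
import kfp

# Placeholder route and token; take these from your own cluster.
client = kfp.Client(
    host="https://ds-pipeline-example.apps.example.com",
    existing_token="sha256~example-openshift-token",
)

# If the server is ready, this call succeeds (the list may be empty).
print(client.list_pipelines())
----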



@@ -9,15 +9,15 @@ This {deliverable} does not delve into the details of how to use the SDK. Instea
+
* `7_get_data_train_upload.py` is the main pipeline code.
* `get_data.py`, `train_model.py`, and `upload.py` are the three components of the pipeline.
- * `build.sh` is a script that builds the pipeline and creates the YAML file.
+ * `build.sh` is a script that builds the pipeline and creates the YAML file.
+
For your convenience, the output of the `build.sh` script is provided in the `7_get_data_train_upload.yaml` file. The `7_get_data_train_upload.yaml` output file is located in the top-level `fraud-detection` directory.

. Right-click the `7_get_data_train_upload.yaml` file and then click *Download*.

. Upload the `7_get_data_train_upload.yaml` file to {productname-short}.

- .. In the {productname-short} dashboard, navigate to your data science project page and then click *Import pipeline*.
+ .. In the {productname-short} dashboard, navigate to your data science project page. Click the *Pipelines* tab and then click *Import pipeline*.
+
image::pipelines/dsp-pipeline-import.png[]

@@ -52,8 +52,3 @@ A new run starts immediately and opens the run details page.
image::pipelines/pipeline-run-in-progress.png[]

There you have it: a pipeline created in Python that is running in {productname-short}.
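
The files described in this page follow the usual KFP pattern: Python components composed into a pipeline function, then compiled to YAML. A minimal sketch of that shape with the `kfp` v2 SDK; the component bodies are stand-ins, not the workshop's actual `get_data.py`, `train_model.py`, or `upload.py` code:

[source,python]
----
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def get_data() -> str:
    # Stand-in for the real data-fetching component.
    return "data.csv"

@dsl.component(base_image="python:3.11")
def train_model(data_path: str) -> str:
    # Stand-in for the real training component.
    return "model.onnx"

@dsl.pipeline(name="fraud-detection-sketch")
def fraud_pipeline():
    data_task = get_data()
    train_model(data_path=data_task.output)

# The equivalent of running build.sh: compile the pipeline to a YAML file.
compiler.Compiler().compile(fraud_pipeline, package_path="pipeline.yaml")
----

As an alternative to importing the compiled YAML through the dashboard, the same SDK can upload it directly with `kfp.Client(...).upload_pipeline(...)`.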





@@ -19,7 +19,7 @@ NOTE: If you want to connect to your own storage, see xref:creating-data-connect

You must know the OpenShift resource name for your data science project so that you run the provided script in the correct project. To get the project's resource name:

- In the {productname-short} dashboard, select *Data Science Projects* and then hover your cursor over the *?* icon next to the project name. A text box appears with information about the project, including it's resource name:
+ In the {productname-short} dashboard, select *Data Science Projects* and then click the *?* icon next to the project name. A text box appears with information about the project, including its resource name:

image::projects/ds-project-list-resource-hover.png[Project list resource name]

@@ -23,21 +23,22 @@ image::projects/ds-project-new-form.png[New data science project form]

.Verification

- You can now see its initial state. There are five types of project components:
+ You can now see its initial state. Individual tabs provide more information about the project components and project access permissions:

image::projects/ds-project-new.png[New data science project]

** *Workbenches* are instances of your development and experimentation environment. They typically contain IDEs, such as JupyterLab, RStudio, and Visual Studio Code.

- ** A *Cluster storage* is a volume that persists the files and data you're working on within a workbench. A workbench has access to one or more cluster storage instances.
+ ** *Pipelines* contain the data science pipelines that are executed within the project.

- ** *Data connections* contain configuration parameters that are required to connect to a data source, such as an S3 object bucket.
+ ** *Models* allow you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project. One model server can host multiple models.

+ ** *Cluster storage* is a persistent volume that retains the files and data you're working on within a workbench. A workbench has access to one or more cluster storage instances.

- ** *Pipelines* contain the Data Science pipelines that are executed within the project.
+ ** *Data connections* contain configuration parameters that are required to connect to a data source, such as an S3 object bucket.

- ** *Models and model servers* allow you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project. One model server can host multiple models.
+ ** *Permissions* define which users and groups can access the project.

.Next step

xref:storing-data-with-data-connections.adoc[Storing data with data connections]

workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc: 3 changes (1 addition & 2 deletions)
@@ -6,7 +6,7 @@ Now that you've deployed the model, you can test its API endpoints.

.Procedure

- . In the {productname-short} dashboard, navigate to the project details page and scroll down to the *Models and model servers* section.
+ . In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.

. Take note of the model's Inference endpoint. You need this information when you test the model API.
+
@@ -24,4 +24,3 @@ If you deployed your model with single-model serving, follow the directions in `
xref:automating-workflows-with-pipelines.adoc[Automating workflows with data science pipelines]

xref:running-a-pipeline-generated-from-python-code.adoc[Running a data science pipeline generated from Python code]
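
For models served with OpenVINO Model Server, the inference endpoint typically follows the KServe v2 REST protocol. A minimal request sketch; the URL, input tensor name, shape, and feature values below are placeholders for whatever your deployed `fraud` model actually expects:

[source,python]
----
import requests

# Placeholder URL; use the inference endpoint shown in the *Models* tab.
url = "https://fraud-example.apps.example.com/v2/models/fraud/infer"

payload = {
    "inputs": [
        {
            "name": "dense_input",   # assumed input tensor name
            "shape": [1, 5],         # assumed input shape
            "datatype": "FP32",
            "data": [0.3, 1.0, 1.0, 0.0, 0.0],  # placeholder feature values
        }
    ]
}

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json())
----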
