rhoai-5279 text cleanup (#34)
* rhoai-5279 text cleanup

* rhoai-5279 small fix

* rhoai-5279 updates re: new Home page

* rhoai-5279 peer review
MelissaFlinn authored Jun 25, 2024
1 parent 6cc58c1 commit 254b728
Showing 17 changed files with 42 additions and 35 deletions.
1 change: 1 addition & 0 deletions workshop/antora-playbook.yml
@@ -12,6 +12,7 @@ asciidoc:
deliverable: workshop
productname-long: Red Hat OpenShift AI
productname-short: OpenShift AI
org-name: Red Hat
extensions:
- ./lib/tab-block.js
- ./lib/remote-include-processor.js
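
For reference, a minimal sketch of how these attributes sit in a full Antora playbook (the `attributes:` nesting is standard Antora structure and falls outside the hunk shown above; only the keys in the diff come from this commit):

[source,yaml]
----
asciidoc:
  attributes:
    deliverable: workshop
    productname-long: Red Hat OpenShift AI
    productname-short: OpenShift AI
    org-name: Red Hat # new in this commit; pages reference it as {org-name}
  extensions:
  - ./lib/tab-block.js
  - ./lib/remote-include-processor.js
----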
Binary file not shown.
Binary file not shown.
Binary file not shown.
1 change: 1 addition & 0 deletions workshop/docs/modules/ROOT/pages/_attributes.adoc
@@ -4,4 +4,5 @@
//:deliverable: tutorial
:productname-long: Red Hat OpenShift AI
:productname-short: OpenShift AI
:org-name: Red Hat
:version: 2.9
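
With the attribute defined, page sources can substitute the organization name instead of hard-coding it. A minimal AsciiDoc example (the sample sentence is taken from this commit's change to `creating-a-workbench.adoc`):

[source,asciidoc]
----
:org-name: Red Hat

{org-name} provides several supported notebook images.
----

At build time, `{org-name}` resolves to `Red Hat`, so the rendered sentence reads "Red Hat provides several supported notebook images."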
@@ -7,7 +7,7 @@ In this section, you create a simple pipeline by using the GUI pipeline editor.

Your completed pipeline should look like the one in the `6 Train Save.pipeline` file.

Note: You can run and use `6 Train Save.pipeline`. To explore the pipeline editor, complete the steps in the following procedure to create your own pipeline.
To explore the pipeline editor, complete the steps in the following procedure to create your own pipeline. Alternatively, you can skip the following procedure and instead run the `6 Train Save.pipeline` file.
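
For orientation, `.pipeline` files are JSON documents written by the Elyra visual editor; a heavily trimmed sketch of their general shape (the field values here are illustrative, and real files carry much more node and UI metadata):

[source,json]
----
{
  "doc_type": "pipeline",
  "version": "3.0",
  "pipelines": [
    {
      "id": "primary",
      "nodes": []
    }
  ]
}
----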

== Create a pipeline

2 changes: 1 addition & 1 deletion workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
@@ -22,7 +22,7 @@ image::workbenches/ds-project-create-workbench.png[Create workbench button]
+
image::workbenches/create-workbench-form-name-desc.png[Workbench name and description, 600]
+
Red Hat provides several supported notebook images. In the *Notebook image* section, you can choose one of these images or any custom images that an administrator has set up for you. The *Tensorflow* image has the libraries needed for this {deliverable}.
{org-name} provides several supported notebook images. In the *Notebook image* section, you can choose one of these images or any custom images that an administrator has set up for you. The *Tensorflow* image has the libraries needed for this {deliverable}.

. Select the latest *Tensorflow* image.
+
@@ -12,6 +12,8 @@
. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
+
image::model-serving/ds-project-model-list-add.png[Models]
+
*Note:* Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.

. In the *Multi-model serving platform* tile, click *Add model server*.

@@ -3,6 +3,8 @@

{productname-short} single-model servers host only one model. You create a new model server and deploy your model to it.

*Note:* Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.


.Prerequisites

@@ -13,6 +15,8 @@
. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
+
image::model-serving/ds-project-model-list-add.png[Models]
+
*Note:* Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.

. In the *Single-model serving platform* tile, click *Deploy model*.
. In the form, provide the following values:
2 changes: 1 addition & 1 deletion workshop/docs/modules/ROOT/pages/deploying-a-model.adoc
@@ -6,7 +6,7 @@ Now that the model is accessible in storage and saved in the portable ONNX forma
{productname-short} offers two options for model serving:

* *Single-model serving* - Each model in the project is deployed on its own model server. This platform works well for large models or models that need dedicated resources.
* *Multi-model serving* - All models in the project are deployed on the same model server. This platform is suitable for sharing resources amongst deployed models.
* *Multi-model serving* - All models in the project are deployed on the same model server. This platform is suitable for sharing resources amongst deployed models. Multi-model serving is the only option offered in the {org-name} Developer Sandbox environment.

*Note:* For each project, you can specify only one model serving platform. If you want to change to the other model serving platform, you must create a new project.

@@ -30,11 +30,17 @@ image::projects/ds-project-create-pipeline-server-form.png[Selecting the Pipelin
. Click *Configure pipeline server*.

. Wait until the spinner disappears and *No pipelines yet* is displayed.

+
[IMPORTANT]
====
You must wait until the pipeline configuration is complete before you continue and create your workbench. If you create your workbench before the pipeline server is ready, your workbench will not be able to submit pipelines to it.
====
+
If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can delete the pipeline server and create it again.
+
image::projects/ds-project-delete-pipeline-server.png[Delete pipeline server, 300]
+
You can also ask your {productname-short} administrator to verify that self-signed certificates are added to your cluster as described in link:{rhoaidocshome}{default-format-url}/installing_and_uninstalling_{url-productname-short}/working-with-certificates_certs[Working with certificates].
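+
If you prefer to check from a terminal, the following sketch assumes that you have the `oc` CLI logged in to the cluster and that, as in current {productname-short} releases, the pipeline server is backed by a `DataSciencePipelinesApplication` resource. Replace `<project>` with the resource name of your data science project:
+
[source,terminal]
----
$ oc get datasciencepipelinesapplications -n <project>
$ oc get pods -n <project>
----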

.Verification

@@ -45,16 +51,6 @@ image::projects/ds-project-pipeline-server-view.png[View pipeline server configu
+
An information box opens and displays the object storage connection information for the pipeline server.


[NOTE]
====
If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server, 300]
You can also ask your {productname-short} administrator to verify that self-signed certificates are added to your cluster as described in https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/{version}/html/installing_and_uninstalling_openshift_ai_self-managed/working-with-certificates_certs[Working with certificates].
====

.Next step

xref:creating-a-workbench.adoc[Creating a workbench and selecting a notebook image]
2 changes: 1 addition & 1 deletion workshop/docs/modules/ROOT/pages/index.adoc
@@ -30,7 +30,7 @@ Based on this data, the model outputs the likelihood of the transaction being fr

== Before you begin

If you don't already have an instance of {productname-long}, see the https://developers.redhat.com/products/red-hat-openshift-ai/download[{productname-long} page on the Red Hat Developer website] for information on setting up your environment. There, you can create an account and access the *free {productname-short} Sandbox* or you can learn how to install {productname-short} on *your own OpenShift cluster*.
If you don't already have an instance of {productname-long}, see the https://developers.redhat.com/products/red-hat-openshift-ai/download[{productname-long} page on the {org-name} Developer website]. There, you can create an account and access the *free {org-name} Developer Sandbox* or you can learn how to install {productname-short} on *your own OpenShift cluster*.

[IMPORTANT]
====
24 changes: 15 additions & 9 deletions workshop/docs/modules/ROOT/pages/navigating-to-the-dashboard.adoc
@@ -3,22 +3,28 @@

.Procedure

. After you log in to the OpenShift console, access the {productname-short} dashboard by clicking the application launcher icon on the header.
+
image::projects/ocp-console-ds-tile.png[{productname-short} dashboard link]
. How you open the {productname-short} dashboard depends on your OpenShift environment:

. When prompted, log in to the {productname-short} dashboard by using your OpenShift credentials. {productname-short} uses the same credentials as OpenShift for the dashboard, notebooks, and all other components.
** *If you are using the {org-name} Developer Sandbox*:
+
image::projects/login-with-openshift.png[OpenShift login, 300]
After you log in to the Sandbox, under *Available services*, in the {productname-long} card, click *Launch*.
+
The {productname-short} dashboard shows the status of any installed and enabled applications.
image::projects/sandbox-rhoai-tile.png[{productname-short} dashboard link]

. Optionally, click *Explore* to view other available application integrations.
** *If you are using your own OpenShift cluster*:
+
image::projects/dashboard-explore.png[Dashboard enabled]
.. After you log in to the OpenShift console, click the application launcher icon on the header.
+
Note: You can navigate back to the OpenShift console in a similar fashion. Click the application launcher to access the OpenShift console.
image::projects/ocp-console-ds-tile.png[{productname-short} dashboard link]

.. When prompted, log in to the {productname-short} dashboard by using your OpenShift credentials. {productname-short} uses the same credentials as OpenShift for the dashboard, notebooks, and all other components.
+
image::projects/login-with-openshift.png[OpenShift login, 300]

The {productname-short} dashboard shows the *Home* page.

*Note:* You can navigate back to the OpenShift console by clicking the application launcher.

image::projects/ds-console-ocp-tile.png[OCP console link]

For now, stay in the {productname-short} dashboard.
@@ -109,8 +109,7 @@ You should see a "Resources successfully created" message and the following reso

* `demo-setup`
* `demo-setup-edit`
* `create s3-storage`

* `create-s3-storage`
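
To confirm these resources from a terminal, the following sketch assumes the `oc` CLI and that, as in the setup manifest used for this {deliverable}, `demo-setup` is a ServiceAccount, `demo-setup-edit` is a RoleBinding, and `create-s3-storage` is a Job:

[source,terminal]
----
$ oc get serviceaccount demo-setup
$ oc get rolebinding demo-setup-edit
$ oc get job create-s3-storage
----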

.Next steps

@@ -1,29 +1,27 @@
[id='setting-up-your-data-science-project']
= Setting up your data science project

Before you begin, make sure that you are logged in to *{productname-long}* and that you can see the dashboard:

image::projects/dashboard-enabled.png[Dashboard Enabled]

Note that you can start a Jupyter notebook from here, but it would be a one-off notebook run in isolation. To implement a data science workflow, you must create a data science project. Projects allow you and your team to organize and collaborate on resources within separated namespaces. From a project you can create multiple workbenches, each with their own Jupyter notebook environment, and each with their own data connections and cluster storage. In addition, the workbenches can share models and data with pipelines and model servers.
Before you begin, make sure that you are logged in to *{productname-long}*.

.Procedure

. On the navigation menu, select *Data Science Projects*. This page lists any existing projects that you have access to. From this page, you can select an existing project (if any) or create a new one.
+
image::projects/dashboard-click-projects.png[Data Science Projects List]
image::projects/launch-jupyter-link.png[Launch Jupyter link]
+
If you already have an active project that you want to use, select it now and skip ahead to the next section, xref:storing-data-with-data-connections.adoc[Storing data with data connections]. Otherwise, continue to the next step.
Note that it is possible to start a Jupyter notebook by clicking the *Launch Jupyter* link. However, it would be a one-off Jupyter notebook run in isolation. To implement a data science workflow, you must create a data science project (as described in the following procedure). Projects allow you and your team to organize and collaborate on resources within separated namespaces. From a project you can create multiple workbenches, each with their own IDE environment (for example, JupyterLab), and each with their own data connections and cluster storage. In addition, the workbenches can share models and data with pipelines and model servers.

. Click *Create data science project*.
. If you are using the {org-name} Developer Sandbox, you are provided with a default data science project (for example, `myname-dev`). Select it and skip over the next step to the *Verification* section.
+
If you are using your own OpenShift cluster, click *Create data science project*.

. Enter a display name and description. Based on the display name, a resource name is automatically generated, but you can change it if you prefer.
+
image::projects/ds-project-new-form.png[New data science project form]

.Verification

You can now see its initial state. Individual tabs provide more information about the project components and project access permissions:
You can see your project's initial state. Individual tabs provide more information about the project components and project access permissions:

image::projects/ds-project-new.png[New data science project]

