From 391d8839765b0a416fbdc188cb6710f87646654f Mon Sep 17 00:00:00 2001 From: Breda McColgan Date: Thu, 2 May 2024 20:07:40 +0100 Subject: [PATCH 1/2] ENG-6697: Fixes broken links, prereqs, YAML refs, and adds note about self-signed certs --- ...-management-for-distributed-workloads.adoc | 7 ++++ .../configuring-the-codeflare-operator.adoc | 9 +++++- ...-the-distributed-workloads-components.adoc | 11 ++----- ...a-science-workloads-from-ds-pipelines.adoc | 32 +++++++++++-------- ...data-science-workloads-from-notebooks.adoc | 26 +++++++-------- working-with-distributed-workloads.adoc | 3 +- 6 files changed, 51 insertions(+), 37 deletions(-) diff --git a/modules/configuring-quota-management-for-distributed-workloads.adoc b/modules/configuring-quota-management-for-distributed-workloads.adoc index 21dc0de5..a1fec600 100644 --- a/modules/configuring-quota-management-for-distributed-workloads.adoc +++ b/modules/configuring-quota-management-for-distributed-workloads.adoc @@ -21,6 +21,13 @@ ifdef::cloud-service[] * You have downloaded and installed the OpenShift command-line interface (CLI). See link:https://docs.openshift.com/dedicated/cli_reference/openshift_cli/getting-started-cli.html#installing-openshift-cli[Installing the OpenShift CLI] (Red Hat OpenShift Dedicated) or link:https://docs.openshift.com/rosa/cli_reference/openshift_cli/getting-started-cli.html#installing-openshift-cli[Installing the OpenShift CLI] (Red Hat OpenShift Service on AWS). endif::[] +ifndef::upstream[] +* You have enabled the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. +endif::[] +ifdef::upstream[] +* You have enabled the required distributed workloads components as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. +endif::[] + * You have sufficient resources. In addition to the base {productname-short} resources, you need 1.6 vCPU and 2 GiB memory to deploy the distributed workloads infrastructure. * The resources are physically available in the cluster. diff --git a/modules/configuring-the-codeflare-operator.adoc b/modules/configuring-the-codeflare-operator.adoc index 5362a5c1..0164377f 100644 --- a/modules/configuring-the-codeflare-operator.adoc +++ b/modules/configuring-the-codeflare-operator.adoc @@ -14,6 +14,13 @@ ifdef::cloud-service[] * You have logged in to OpenShift with the `cluster-admin` role. endif::[] +ifndef::upstream[] +* You have enabled the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. +endif::[] +ifdef::upstream[] +* You have enabled the required distributed workloads components as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. +endif::[] + .Procedure ifdef::upstream,self-managed[] @@ -33,7 +40,7 @@ endif::[] . 
Search for the *codeflare-operator-config* config map, and click the config map name to open the *ConfigMap details* page.
. Click the *YAML* tab to show the config map specifications.
-. In the `data` > `config.yaml` > `kuberay` section, you can edit the following entries:
+. In the `data:config.yaml:kuberay` section, you can edit the following entries:
+
ingressDomain:: This configuration option is null (`ingressDomain: ""`) by default.
diff --git a/modules/configuring-the-distributed-workloads-components.adoc b/modules/configuring-the-distributed-workloads-components.adoc
index 25c1a06b..68119ff0 100644
--- a/modules/configuring-the-distributed-workloads-components.adoc
+++ b/modules/configuring-the-distributed-workloads-components.adoc
@@ -38,13 +38,6 @@ Instead, users must configure the Ray job specification to set `submissionMode=H
 * You have access to the data sets and models that the distributed workload uses.
 * You have access to the Python dependencies for the distributed workload.
 
-ifndef::upstream[]
-* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads].
-endif::[]
-ifdef::upstream[]
-* You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads].
-endif::[]
-
 ifndef::upstream[]
 * You have removed any previously installed instances of the CodeFlare Operator, as described in the Knowledgebase solution link:https://access.redhat.com/solutions/7043796[How to migrate from a separately installed CodeFlare Operator in your data science cluster].
 endif::[]
@@ -138,7 +131,9 @@
 . Click the *Data Science Cluster* tab.
 . Click the default instance name (for example, *default-dsc*) to open the instance details page.
 . Click the *YAML* tab to show the instance specifications.
-. In the `spec.components` section, ensure that the `managementState` field is set correctly for the required components depending on whether the distributed workload is run from a pipeline or notebook or both, as shown in the following table.
+. Enable the required distributed workloads components.
+In the `spec:components` section, set the `managementState` field for each required component to the value shown in the following table.
+The set of required components depends on whether the distributed workload is run from a pipeline, from a notebook, or from both.
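For quick reference, the following sketch shows one way to review the same `spec:components` settings from the command line instead of the console. It assumes that the OpenShift CLI (`oc`) is installed and that the instance is named `default-dsc`; the component names in the comments are examples only, and the table that follows remains the authoritative list of required components and their `managementState` values.

[source,bash]
----
# Sketch: inspect the DataScienceCluster instance and review its components section.
# "default-dsc" is an example instance name; replace it with the name of your instance.
oc get datasciencecluster default-dsc -o yaml

# The relevant fields appear under spec:components, for example (component names are
# illustrative; see the following table for the required set):
#   spec:
#     components:
#       codeflare:
#         managementState: Managed
#       kueue:
#         managementState: Managed
#       ray:
#         managementState: Managed
----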
+ .Components required for distributed workloads [cols="34,20,20,26"] diff --git a/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc b/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc index 38c907d7..a32054fa 100644 --- a/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc +++ b/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc @@ -1,6 +1,6 @@ :_module-type: PROCEDURE -[id="running-distributed-data-science-workloads-from-ds-pipeline_{context}"] +[id="running-distributed-data-science-workloads-from-ds-pipelines_{context}"] = Running distributed data science workloads from data science pipelines [role='_abstract'] @@ -15,14 +15,21 @@ ifdef::cloud-service[] endif::[] ifndef::upstream[] -* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads]. +* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. endif::[] ifdef::upstream[] -* You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads]. +* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. endif::[] ifndef::upstream[] -* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to the configuration details for that `LocalQueue` resource, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads]: +* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. +endif::[] +ifdef::upstream[] +* You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. 
+endif::[] + +ifndef::upstream[] +* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to the configuration details for that `LocalQueue` resource, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: + [source,bash] ---- @@ -35,7 +42,7 @@ If you do not create a default local queue, you must specify a local queue in ea ==== endif::[] ifdef::upstream[] -* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to the configuration details for that `LocalQueue` resource, as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads]: +* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to the configuration details for that `LocalQueue` resource, as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: + [source,bash] ---- @@ -48,13 +55,6 @@ If you do not create a default local queue, you must specify a local queue in ea ==== endif::[] -ifndef::upstream[] -* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. -endif::[] -ifdef::upstream[] -* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. -endif::[] - * You have access to S3-compatible object storage. * You have logged in to {productname-long}. * You have created a data science project. @@ -175,6 +175,10 @@ if __name__ == '__main__': <2> Authenticates with the cluster by using values that you specify when creating the pipeline run // Commenting out second part of callout 2 until RHOAIENG-880 is fixed //; you can omit this section if the Ray cluster is configured to use the same namespace as the data science project +[NOTE] +---- +If your cluster uses self-signed certificates, include `ca-cert-path=____` in the `TokenAuthentication` parameter list, where `____` is the path to the cluster-wide Certificate Authority (CA) bundle that contains the self-signed certificates. +---- <3> Specifies the Ray cluster configuration: replace these example values with the values for your Ray cluster <4> Specifies the location of the Ray cluster image: if using a disconnected environment, replace the default value with the location for your environment <5> Specifies the local queue to which the Ray cluster will be submitted: you can omit this line if you configured a default local queue @@ -209,10 +213,10 @@ ifdef::upstream[] endif::[] ifndef::upstream[] -. 
When the pipeline run is complete, confirm that it is included in the list of triggered pipeline runs, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-with-data-science-pipelines_ds-pipelines#viewing-triggered-pipeline-runs_ds-pipelines[Viewing triggered pipeline runs].
+. When the pipeline run is complete, confirm that it appears in the list of pipeline runs, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-with-data-science-pipelines_ds-pipelines#viewing-the-details-of-a-pipeline-run_ds-pipelines[Viewing the details of a pipeline run].
endif::[]
ifdef::upstream[]
-. When the pipeline run is complete, confirm that it is included in the list of triggered pipeline runs, as described in link:{odhdocshome}/working_on_data_science_projects/#viewing-triggered-pipeline-runs_ds-pipelines[Viewing triggered pipeline runs].
+. When the pipeline run is complete, confirm that it appears in the list of pipeline runs, as described in link:{odhdocshome}/working_on_data_science_projects/#viewing-the-details-of-a-pipeline-run_ds-pipelines[Viewing the details of a pipeline run].
endif::[]

diff --git a/modules/running-distributed-data-science-workloads-from-notebooks.adoc b/modules/running-distributed-data-science-workloads-from-notebooks.adoc
index 63554394..eca27372 100644
--- a/modules/running-distributed-data-science-workloads-from-notebooks.adoc
+++ b/modules/running-distributed-data-science-workloads-from-notebooks.adoc
@@ -8,14 +8,21 @@ To run a distributed data science workload from a notebook, you must first provi
 .Prerequisites
 
 ifndef::upstream[]
-* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads].
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads].
+endif::[]
+ifdef::upstream[]
+* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads].
+endif::[]
+
+ifndef::upstream[]
+* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads].
 endif::[]
 ifdef::upstream[]
-* You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads].
+* You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads].
endif::[] ifndef::upstream[] -* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to its configuration details, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads]: +* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to its configuration details, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: + [source,bash] ---- @@ -28,7 +35,7 @@ If you do not create a default local queue, you must specify a local queue in ea ==== endif::[] ifdef::upstream[] -* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to its configuration details, as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads]: +* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to its configuration details, as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: + [source,bash] ---- @@ -41,13 +48,6 @@ If you do not create a default local queue, you must specify a local queue in ea ==== endif::[] -ifndef::upstream[] -* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. -endif::[] -ifdef::upstream[] -* You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. -endif::[] - ifndef::upstream[] * You have created a data science project that contains a workbench that is running one of the default notebook images, for example, the *Standard Data Science* notebook. See the table in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/creating-and-importing-notebooks_notebooks#notebook-images-for-data-scientists_notebooks[Notebook images for data scientists] for a complete list of default notebook images. 
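Several of the prerequisites above refer to an optional default local queue that carries the `kueue.x-k8s.io/default-queue: 'true'` annotation. The following is a minimal sketch of such a resource; the queue name, namespace, and `ClusterQueue` name are placeholder values, and the referenced `ClusterQueue` must already exist as part of your quota management configuration.

[source,bash]
----
# Sketch: create a default LocalQueue in the data science project namespace.
# "my-project", "local-queue-default", and "cluster-queue" are placeholder names.
oc apply -f - <<EOF
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: local-queue-default
  namespace: my-project
  annotations:
    kueue.x-k8s.io/default-queue: "true"
spec:
  clusterQueue: cluster-queue
EOF
----

With the annotation in place, notebooks and pipelines that do not set a local queue in their `ClusterConfiguration` are submitted to this queue by default, which is why the prerequisite marks the step as optional.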
@@ -100,7 +100,7 @@ You must include the Ray cluster authentication code to enable the Ray client th ifndef::upstream[] -** If you have not configured a default local queue by including the `kueue.x-k8s.io/default-queue: 'true'` annotation as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads], update the `ClusterConfiguration` section to specify the local queue for the Ray cluster, as shown in the following example: +** If you have not configured a default local queue by including the `kueue.x-k8s.io/default-queue: 'true'` annotation as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads], update the `ClusterConfiguration` section to specify the local queue for the Ray cluster, as shown in the following example: + .Example local queue assignment [source,bash,subs="+quotes"] @@ -109,7 +109,7 @@ local_queue="_your_local_queue_name_" ---- endif::[] ifdef::upstream[] -** If you have not configured a default local queue by including the `kueue.x-k8s.io/default-queue: 'true'` annotation as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed_workloads[Configuring quota management for distributed workloads], update the `ClusterConfiguration` section to specify the local queue for the Ray cluster, as shown in the following example: +** If you have not configured a default local queue by including the `kueue.x-k8s.io/default-queue: 'true'` annotation as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads], update the `ClusterConfiguration` section to specify the local queue for the Ray cluster, as shown in the following example: + .Example local queue assignment [source,bash] diff --git a/working-with-distributed-workloads.adoc b/working-with-distributed-workloads.adoc index e8044bbe..00b2e0fb 100644 --- a/working-with-distributed-workloads.adoc +++ b/working-with-distributed-workloads.adoc @@ -25,8 +25,9 @@ This approach significantly reduces the task completion time, and enables the us include::modules/overview-of-distributed-workloads.adoc[leveloffset=+1] include::modules/configuring-distributed-workloads.adoc[leveloffset=+1] -include::modules/configuring-quota-management-for-distributed-workloads.adoc[leveloffset=+2] include::modules/configuring-the-distributed-workloads-components.adoc[leveloffset=+2] +include::modules/configuring-quota-management-for-distributed-workloads.adoc[leveloffset=+2] + //include::modules/configuring-the-codeflare-operator.adoc[leveloffset=+2] From 97d5cfb4d52c3aa8b4e9fd91c88c6ff54613f811 Mon Sep 17 00:00:00 2001 From: Breda McColgan Date: Thu, 2 May 2024 20:13:35 +0100 Subject: [PATCH 2/2] ENG-6697: Updates links to remove assembly --- ...guring-quota-management-for-distributed-workloads.adoc | 2 +- modules/configuring-the-codeflare-operator.adoc | 2 +- ...stributed-data-science-workloads-disconnected-env.adoc | 4 ++-- ...tributed-data-science-workloads-from-ds-pipelines.adoc | 6 +++--- 
...distributed-data-science-workloads-from-notebooks.adoc | 8 ++++---- 5 files changed, 11 insertions(+), 11 deletions(-) diff --git a/modules/configuring-quota-management-for-distributed-workloads.adoc b/modules/configuring-quota-management-for-distributed-workloads.adoc index a1fec600..0ea32da4 100644 --- a/modules/configuring-quota-management-for-distributed-workloads.adoc +++ b/modules/configuring-quota-management-for-distributed-workloads.adoc @@ -22,7 +22,7 @@ ifdef::cloud-service[] endif::[] ifndef::upstream[] -* You have enabled the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. +* You have enabled the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. endif::[] ifdef::upstream[] * You have enabled the required distributed workloads components as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. diff --git a/modules/configuring-the-codeflare-operator.adoc b/modules/configuring-the-codeflare-operator.adoc index 0164377f..64a150be 100644 --- a/modules/configuring-the-codeflare-operator.adoc +++ b/modules/configuring-the-codeflare-operator.adoc @@ -15,7 +15,7 @@ ifdef::cloud-service[] endif::[] ifndef::upstream[] -* You have enabled the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. +* You have enabled the required distributed workloads components as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. endif::[] ifdef::upstream[] * You have enabled the required distributed workloads components as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-the-distributed-workloads-components_distributed-workloads[Configuring the distributed workloads components]. diff --git a/modules/running-distributed-data-science-workloads-disconnected-env.adoc b/modules/running-distributed-data-science-workloads-disconnected-env.adoc index ed264ac4..606ac7b7 100644 --- a/modules/running-distributed-data-science-workloads-disconnected-env.adoc +++ b/modules/running-distributed-data-science-workloads-disconnected-env.adoc @@ -18,7 +18,7 @@ To run a distributed data science workload in a disconnected environment, you mu * You have created a data science project. .Procedure -. Configure the disconnected data science cluster to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. +. 
Configure the disconnected data science cluster to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. . In the `ClusterConfiguration` section of the notebook or pipeline, ensure that the `image` value specifies a Ray cluster image that can be accessed from the disconnected environment: * Notebooks use the Ray cluster image to create a Ray cluster when running the notebook. * Pipelines use the Ray cluster image to create a Ray cluster during the pipeline run. @@ -33,7 +33,7 @@ PIP_TRUSTED_HOST: pypi-notebook.apps.mylocation.com where * `PIP_INDEX_URL` specifies the base URL of your private PyPI server (the default value is https://pypi.org). * `PIP_TRUSTED_HOST` configures Python to mark the specified host as trusted, regardless of whether that host has a valid SSL certificate or is using a secure channel. -. Run the distributed data science workload, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#running-distributed-data-science-workloads-from-notebooks_distributed-workloads[Running distributed data science workloads from notebooks] or link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#running-distributed-data-science-workloads-from-ds-pipelines_distributed-workloads[Running distributed data science workloads from data science pipelines]. +. Run the distributed data science workload, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#running-distributed-data-science-workloads-from-notebooks_distributed-workloads[Running distributed data science workloads from notebooks] or link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#running-distributed-data-science-workloads-from-ds-pipelines_distributed-workloads[Running distributed data science workloads from data science pipelines]. .Verification The notebook or pipeline run completes without errors: diff --git a/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc b/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc index a32054fa..92825459 100644 --- a/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc +++ b/modules/running-distributed-data-science-workloads-from-ds-pipelines.adoc @@ -15,21 +15,21 @@ ifdef::cloud-service[] endif::[] ifndef::upstream[] -* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. +* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. endif::[] ifdef::upstream[] * You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. 
endif::[] ifndef::upstream[] -* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. +* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. endif::[] ifdef::upstream[] * You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. endif::[] ifndef::upstream[] -* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to the configuration details for that `LocalQueue` resource, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: +* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to the configuration details for that `LocalQueue` resource, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: + [source,bash] ---- diff --git a/modules/running-distributed-data-science-workloads-from-notebooks.adoc b/modules/running-distributed-data-science-workloads-from-notebooks.adoc index eca27372..c9c70b03 100644 --- a/modules/running-distributed-data-science-workloads-from-notebooks.adoc +++ b/modules/running-distributed-data-science-workloads-from-notebooks.adoc @@ -8,21 +8,21 @@ To run a distributed data science workload from a notebook, you must first provi .Prerequisites ifndef::upstream[] -* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. +* You have access to a data science cluster that is configured to run distributed workloads as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. endif::[] ifdef::upstream[] * You have access to a data science cluster that is configured to run distributed workloads as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-distributed-workloads_distributed-workloads[Configuring distributed workloads]. 
endif::[] ifndef::upstream[] -* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. +* You have created the required Kueue resources as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. endif::[] ifdef::upstream[] * You have created the required Kueue resources as described in link:{odhdocshome}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]. endif::[] ifndef::upstream[] -* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to its configuration details, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: +* Optional: You have defined a _default_ local queue for the Ray cluster by creating a `LocalQueue` resource and adding the following annotation to its configuration details, as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads]: + [source,bash] ---- @@ -100,7 +100,7 @@ You must include the Ray cluster authentication code to enable the Ray client th ifndef::upstream[] -** If you have not configured a default local queue by including the `kueue.x-k8s.io/default-queue: 'true'` annotation as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/working-with-distributed-workloads_distributed-workloads#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads], update the `ClusterConfiguration` section to specify the local queue for the Ray cluster, as shown in the following example: +** If you have not configured a default local queue by including the `kueue.x-k8s.io/default-queue: 'true'` annotation as described in link:{rhoaidocshome}{default-format-url}/working_with_distributed_workloads/#configuring-quota-management-for-distributed-workloads_distributed-workloads[Configuring quota management for distributed workloads], update the `ClusterConfiguration` section to specify the local queue for the Ray cluster, as shown in the following example: + .Example local queue assignment [source,bash,subs="+quotes"]
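The final hunk above covers assigning the Ray cluster to a specific local queue when no default queue is configured. As a follow-up check, the sketch below shows one way to confirm from the command line that Kueue admitted the resulting workload; the project name is a placeholder, and the workload and Ray cluster names are generated when the cluster is created.

[source,bash]
----
# Sketch: verify that the Ray cluster created from the notebook or pipeline was
# admitted by Kueue. "my-project" is a placeholder for the project namespace.
oc get localqueue -n my-project
oc get workloads.kueue.x-k8s.io -n my-project
oc get raycluster -n my-project
----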