odh-8304 update projects guide #360

Merged
merged 4 commits into from
Jul 10, 2024
16 changes: 16 additions & 0 deletions assemblies/configuring-cluster-storage.adoc
@@ -0,0 +1,16 @@
:_module-type: ASSEMBLY

ifdef::context[:parent-context: {context}]

[id="configuring-cluster-storage_{context}"]
= Configuring cluster storage

include::modules/adding-cluster-storage-to-your-data-science-project.adoc[leveloffset=+2]

include::modules/updating-cluster-storage.adoc[leveloffset=+2]

include::modules/deleting-cluster-storage-from-a-data-science-project.adoc[leveloffset=+2]


ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
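The `ifdef`/`ifndef` lines that open and close this assembly implement the standard AsciiDoc context save/restore pattern: the first line stashes the caller's `context` attribute in `parent-context` before the assembly's own IDs use it, and the final pair restores or unsets it afterward. A parent guide would typically set a context and pull the assembly in as in the following sketch (the parent file name is an assumption; the `projects` value is inferred from the `_projects` anchor suffixes used elsewhere in this PR):

```asciidoc
// Hypothetical parent guide, e.g. working-on-data-science-projects.adoc
:context: projects

// With context set, [id="configuring-cluster-storage_{context}"] resolves to
// "configuring-cluster-storage_projects", giving a stable cross-reference anchor.
include::assemblies/configuring-cluster-storage.adoc[leveloffset=+1]
```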
18 changes: 18 additions & 0 deletions assemblies/managing-access-to-data-science-projects.adoc
@@ -0,0 +1,18 @@
:_module-type: ASSEMBLY

ifdef::context[:parent-context: {context}]

[id="managing-access-to-data-science-projects_{context}"]
= Managing access to data science projects

include::modules/configuring-access-to-a-data-science-project.adoc[leveloffset=+1]

include::modules/sharing-access-to-a-data-science-project.adoc[leveloffset=+1]

include::modules/updating-access-to-a-data-science-project.adoc[leveloffset=+1]

include::modules/removing-access-to-a-data-science-project.adoc[leveloffset=+1]


ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
16 changes: 16 additions & 0 deletions assemblies/using-data-connections.adoc
@@ -0,0 +1,16 @@
:_module-type: ASSEMBLY

ifdef::context[:parent-context: {context}]

[id="using-data-connections_{context}"]
= Using data connections

include::modules/adding-a-data-connection-to-your-data-science-project.adoc[leveloffset=+1]

include::modules/deleting-a-data-connection.adoc[leveloffset=+1]

include::modules/updating-a-connected-data-source.adoc[leveloffset=+1]


ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
17 changes: 17 additions & 0 deletions assemblies/using-data-science-projects.adoc
@@ -0,0 +1,17 @@
:_module-type: ASSEMBLY

ifdef::context[:parent-context: {context}]

[id="using-data-science-projects_{context}"]
= Using data science projects


include::modules/creating-a-data-science-project.adoc[leveloffset=+1]

include::modules/updating-a-data-science-project.adoc[leveloffset=+1]

include::modules/deleting-a-data-science-project.adoc[leveloffset=+1]


ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
22 changes: 22 additions & 0 deletions assemblies/using-project-workbenches.adoc
@@ -0,0 +1,22 @@
:_module-type: ASSEMBLY

ifdef::context[:parent-context: {context}]

[id="using-project-workbenches_{context}"]
= Using project workbenches

include::modules/creating-a-workbench-select-ide.adoc[leveloffset=+1]

include::modules/about-workbench-images.adoc[leveloffset=+2]

include::modules/creating-a-project-workbench.adoc[leveloffset=+2]

include::modules/starting-a-workbench.adoc[leveloffset=+1]

include::modules/updating-a-project-workbench.adoc[leveloffset=+1]

include::modules/deleting-a-workbench-from-a-data-science-project.adoc[leveloffset=+1]


ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
81 changes: 0 additions & 81 deletions assemblies/working-on-data-science-projects.adoc

This file was deleted.

4 changes: 2 additions & 2 deletions modules/adding-notebook-pod-tolerations.adoc
@@ -22,12 +22,12 @@ This capability is useful if you want to make sure that notebook servers are pla
For existing notebook pods, the toleration key is applied when the notebook pods are restarted.
ifdef::upstream[]
If you are using Jupyter, see link:{odhdocshome}/working-with-connected-applications/#updating-notebook-server-settings-by-restarting-your-server_connected-apps[Updating notebook server settings by restarting your server].
If you are using a workbench in a data science project, see link:{odhdocshome}/working-on-data-science-projects/#_using_project_workbenches[Starting a workbench].
If you are using a workbench in a data science project, see link:{odhdocshome}/working-on-data-science-projects/#starting-a-workbench_projects[Starting a workbench].
endif::[]

ifndef::upstream[]
If you are using Jupyter, see link:{rhoaidocshome}{default-format-url}/working_with_connected_applications/using_the_jupyter_application#updating-notebook-server-settings-by-restarting-your-server_connected-apps[Updating notebook server settings by restarting your server].
If you are using a workbench in a data science project, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#starting-a-workbench_nb-server[Starting a workbench].
If you are using a workbench in a data science project, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-project-workbenches_projects#starting-a-workbench_projects[Starting a workbench].
endif::[]

.Next step
@@ -1,7 +1,7 @@
:_module-type: CONCEPT

[id='configuring-access-to-data-science-projects_{context}']
= Configuring access to data science projects
[id='configuring-access-to-a-data-science-project_{context}']
= Configuring access to a data science project

[role='_abstract']
To enable you to work collaboratively on your data science projects with other users, you can share access to your project. After creating your project, you can then set the appropriate access permissions from the {productname-short} user interface.
@@ -29,10 +29,10 @@ ifdef::upstream[]
endif::[]

ifndef::upstream[]
* You have created a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the *Standard Data Science* notebook. For information about how to create a project, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#creating-a-data-science-project_nb-server[Creating a data science project].
* You have created a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the *Standard Data Science* notebook. For information about how to create a project, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-data-science-projects_projects#creating-a-data-science-project_projects[Creating a data science project].
endif::[]
ifdef::upstream[]
* You have created a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the *Standard Data Science* notebook. For information about how to create a project, see link:{odhdocshome}/working-on-data-science-projects/#_using_data_science_projects[Creating a data science project].
* You have created a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the *Standard Data Science* notebook. For information about how to create a project, see link:{odhdocshome}/working-on-data-science-projects/#creating-a-data-science-project_projects[Creating a data science project].
endif::[]

* You have sufficient resources. In addition to the base {productname-short} resources, you need 1.6 vCPU and 2 GiB memory to deploy the distributed workloads infrastructure.
6 changes: 3 additions & 3 deletions modules/enabling-data-science-pipelines-2.adoc
@@ -101,7 +101,7 @@ To upgrade to DSP 2.0, follow these steps:
. Ensure that your cluster does not have an existing installation of Argo Workflows that is not installed by {productname-short}, and then follow the upgrade steps described in link:{rhoaidocshome}{default-format-url}/upgrading_openshift_ai_cloud_service/index[Upgrading {productname-short} Cloud Service].
+
If you upgrade to {productname-short} with DSP 2.0 enabled, and there is an existing installation of Argo Workflows that is not installed by DSP on your cluster, {productname-short} components will not be upgraded. To complete the component upgrade, disable DSP or remove the separate installation of Argo Workflows from your cluster. The component upgrade will then complete automatically.
. Update your workbenches to use the notebook image version 2024.1 or later. For more information, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#updating-a-project-workbench_nb-server[Updating a project workbench].
. Update your workbenches to use the notebook image version 2024.1 or later. For more information, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-project-workbenches_projects#updating-a-project-workbench_projects[Updating a project workbench].
. Manually migrate your pipelines from DSP 1.0 to 2.0. For more information, see link:{rhoaidocshome}{default-format-url}/working_with_data_science_pipelines#migrating_pipelines_from_dsp_1_0_to_2_0[Migrating pipelines from DSP 1.0 to 2.0].
endif::[]

@@ -117,7 +117,7 @@ To upgrade to DSP 2.0, follow these steps:
. Ensure that your cluster does not have an existing installation of Argo Workflows that is not installed by {productname-short}, and then follow the upgrade steps described in link:{rhoaidocshome}{default-format-url}/upgrading_openshift_ai_self-managed/index[Upgrading {productname-short} Self-Managed], or for disconnected environments, link:{rhoaidocshome}{default-format-url}/upgrading_openshift_ai_self-managed_in_a_disconnected_environment/index[Upgrading {productname-long} in a disconnected environment].
+
If you upgrade to {productname-short} 2.9 or later with DSP enabled, and there is an existing installation of Argo Workflows that is not installed by DSP on your cluster, {productname-short} components will not be upgraded. To complete the component upgrade, disable DSP or remove the separate installation of Argo Workflows from your cluster. The component upgrade will then complete automatically.
. Update your workbenches to use the notebook image version 2024.1 or later. For more information, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#updating-a-project-workbench_nb-server[Updating a project workbench].
. Update your workbenches to use the notebook image version 2024.1 or later. For more information, see link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-project-workbenches_projects#updating-a-project-workbench_projects[Updating a project workbench].
. Manually migrate your pipelines from DSP 1.0 to 2.0. For more information, see link:{rhoaidocshome}{default-format-url}/working_with_data_science_pipelines#migrating_pipelines_from_dsp_1_0_to_2_0[Migrating pipelines from DSP 1.0 to 2.0].
endif::[]
endif::[]
@@ -171,7 +171,7 @@ Before removing the OpenShift Pipelines Operator, ensure that migration of your

* link:https://pypi.org/project/kfp/[PyPI: kfp^]
* link:https://kubeflow-pipelines.readthedocs.io[Kubeflow Pipelines SDK API Reference^]
* link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#creating-a-data-science-project_nb-server[Creating a data science project]
* link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-data-science-projects_projects#creating-a-data-science-project_projects[Creating a data science project]
* link:{rhoaidocshome}{default-format-url}/working_with_data_science_pipelines/managing-data-science-pipelines_ds-pipelines#configuring-a-pipeline-server_ds-pipelines[Configuring a pipeline server]
* link:{rhoaidocshome}{default-format-url}/working_with_data_science_pipelines/managing-data-science-pipelines_ds-pipelines#importing-a-data-science-pipeline_ds-pipelines[Importing a data science pipeline]
* link:{rhoaidocshome}{default-format-url}/working_with_data_science_pipelines/managing-data-science-pipelines_ds-pipelines#deleting-a-pipeline-server_ds-pipelines[Deleting a pipeline server]
2 changes: 1 addition & 1 deletion modules/enabling-gpu-support-in-data-science.adoc
@@ -90,6 +90,6 @@ After installing the NVIDIA GPU Operator, create an accelerator profile as described in
endif::[]
//the following step applies to upstream only
ifdef::upstream[]
After installing the NVIDIA GPU Operator, create an accelerator profile as described in link:{odhdocshome}/working-on-data-science-projects/#working-with-accelerator-profiles_accelerators[Working with accelerator profiles].
After installing the NVIDIA GPU Operator, create an accelerator profile as described in link:{odhdocshome}/working-with-accelerators/[Working with accelerators].
endif::[]

4 changes: 2 additions & 2 deletions modules/enabling-trustyai-service-using-cli.adoc
@@ -66,13 +66,13 @@ endif::[]
ifndef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {oai-user-group} or {oai-admin-group}) in OpenShift.

* The data scientist has created a data science project, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#creating-a-data-science-project_nb-server[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
* The data scientist has created a data science project, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-data-science-projects_projects#creating-a-data-science-project_projects[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
endif::[]

ifdef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {odh-user-group} or {odh-admin-group}) in OpenShift.

* The data scientist has created a data science project, as described in link:{odhdocshome}/working-on-data-science-projects/#working-on-data-science-projects_nb-server[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
* The data scientist has created a data science project, as described in link:{odhdocshome}/working-on-data-science-projects/#creating-a-data-science-project_projects[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
endif::[]

.Procedure
4 changes: 2 additions & 2 deletions modules/enabling-trustyai-service-using-dashboard.adoc
@@ -73,13 +73,13 @@ endif::[]
ifndef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {oai-user-group} or {oai-admin-group}) in OpenShift.

* The data scientist has created a data science project, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/working-on-data-science-projects_nb-server#creating-a-data-science-project_nb-server[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
* The data scientist has created a data science project, as described in link:{rhoaidocshome}{default-format-url}/working_on_data_science_projects/using-data-science-projects_projects#creating-a-data-science-project_projects[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
endif::[]

ifdef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {odh-user-group} or {odh-admin-group}) in OpenShift.

* The data scientist has created a data science project, as described in link:{odhdocshome}/working-on-data-science-projects#working-on-data-science-projects_nb-server[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
* The data scientist has created a data science project, as described in link:{odhdocshome}/working-on-data-science-projects/#creating-a-data-science-project_projects[Creating a data science project], that contains (or will contain) the models that the data scientist wants to monitor.
endif::[]

.Procedure