DS-2066-pipeline-versions initial draft of pipeline versions docs #193

Merged: 4 commits, Feb 22, 2024
4 changes: 2 additions & 2 deletions assemblies/working-with-data-science-pipelines.adoc
@@ -48,7 +48,7 @@ include::modules/defining-a-pipeline.adoc[leveloffset=+2]

include::modules/importing-a-data-science-pipeline.adoc[leveloffset=+2]

include::modules/downloading-a-data-science-pipeline.adoc[leveloffset=+2]
include::modules/downloading-a-data-science-pipeline-version.adoc[leveloffset=+2]

include::modules/deleting-a-data-science-pipeline.adoc[leveloffset=+2]

@@ -73,7 +73,7 @@ include::modules/scheduling-a-pipeline-run.adoc[leveloffset=+2]

//include::modules/deleting-a-pipeline-experiment.adoc[leveloffset=+2]

include::modules/cloning-a-scheduled-pipeline-run.adoc[leveloffset=+2]
include::modules/duplicating-a-scheduled-pipeline-run.adoc[leveloffset=+2]

include::modules/stopping-a-triggered-pipeline-run.adoc[leveloffset=+2]

5 changes: 0 additions & 5 deletions modules/deleting-a-data-science-pipeline.adoc
@@ -5,11 +5,6 @@

[role='_abstract']
You can delete data science pipelines so that they do not appear on the *Data Science Pipelines* page.
//+ - [Chris] - June 1st 2023: As of RHODS 1.27, the important note below is NOT true. So commenting out for now. Uncomment it out when it actually is true, or rewrite it at a future point in time so that it's accurate.
//[IMPORTANT]
//====
//Deleting a data science pipeline deletes any associated artifacts and data connections. This data is permanently deleted and is not recoverable.
//====

.Prerequisites
* You have installed the OpenShift Pipelines operator.
3 changes: 2 additions & 1 deletion modules/deleting-a-pipeline-server.adoc
@@ -4,7 +4,7 @@
= Deleting a pipeline server

[role='_abstract']
After you have finished running your data science pipelines, you can delete the pipeline server. Deleting a pipeline server automatically deletes all of its associated pipelines and runs. If your pipeline data is stored in a database, the database is also deleted along with its meta-data. In addition, after deleting a pipeline server, you cannot create new pipelines or pipeline runs until you create another pipeline server.
After you have finished running your data science pipelines, you can delete the pipeline server. Deleting a pipeline server automatically deletes all of its associated pipelines, pipeline versions, and runs. If your pipeline data is stored in a database, the database is also deleted along with its metadata. In addition, after deleting a pipeline server, you cannot create new pipelines or pipeline runs until you create another pipeline server.

.Prerequisites
* You have logged in to {productname-long}.
@@ -22,6 +22,7 @@ endif::[]
The *Pipelines* page opens.
. From the *Project* list, select the project whose pipeline server you want to delete.
. From the *Pipeline server actions* list, select *Delete pipeline server*.
+
The *Delete pipeline server* dialog opens.
. Enter the pipeline server's name in the text field to confirm that you intend to delete it.
. Click *Delete*.
38 changes: 38 additions & 0 deletions modules/deleting-a-pipeline-version.adoc
@@ -0,0 +1,38 @@
:_module-type: PROCEDURE

[id="deleting-a-pipeline-version_{context}"]
= Deleting a pipeline version

[role='_abstract']
You can delete specific versions of a pipeline when you no longer require them. Deleting a default pipeline version automatically changes the default pipeline version to the next most recent version. If no other pipeline versions exist, the pipeline persists without a default version.

.Prerequisites
* You have installed the OpenShift Pipelines operator.
* You have logged in to {productname-long}.
ifndef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {oai-user-group} or {oai-admin-group}) in OpenShift.
endif::[]
ifdef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {odh-user-group} or {odh-admin-group}) in OpenShift.
endif::[]
* You have previously created a data science project that is available and contains a pipeline server.
* You have imported a pipeline to an active and available pipeline server.

.Procedure
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Pipelines*.
+
The *Pipelines* page opens.
. From the *Project* list, select the project that contains a version of a pipeline that you want to delete.
. On the row containing the pipeline, click *Expand* (image:images/rhoai-expand-icon.png[]).
. On the row containing the pipeline version that you want to delete, select the checkbox.
. Click the action menu (*⋮*) next to the *Import pipeline* dropdown list and select *Delete selected* from the list.
+
The *Delete pipeline version* dialog opens.
. Enter the name of the pipeline version in the text field to confirm that you intend to delete it.
. Click *Delete*.

.Verification
* The pipeline version that you deleted no longer appears on the *Pipelines* page.

//[role='_additional-resources']
//.Additional resources
@@ -1,10 +1,10 @@
:_module-type: PROCEDURE

[id="downloading-a-data-science-pipeline_{context}"]
= Downloading a data science pipeline
[id="downloading-a-data-science-pipeline-version_{context}"]
= Downloading a data science pipeline version

[role='_abstract']
To make further changes to a data science pipeline that you previously uploaded to {productname-short}, you can download the pipeline's code from the user interface.
To make further changes to a data science pipeline version that you previously uploaded to {productname-short}, you can download the pipeline version code from the user interface.

.Prerequisites
* You have installed the OpenShift Pipelines operator.
@@ -22,17 +22,18 @@ endif::[]
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Pipelines*.
+
The *Pipelines* page opens.
. From the *Project* list, select the project whose pipeline that you want to download.
. In the *Pipeline name* column, click the name of the pipeline that you want to download.
+
. From the *Project* list, select the project that contains the version that you want to download.
. On the row containing the pipeline whose version you want to download, click *Expand* (image:images/rhoai-expand-icon.png[]).
. Click the pipeline version that you want to download.
+
The *Pipeline details* page opens displaying the *Graph* tab.
. Click the *YAML* tab.
+
The page reloads to display an embedded YAML editor showing the pipeline code.
. Click the *Download* button (image:images/rhoai-download-icon.png[]) to download the YAML file containing your pipeline's code to your local machine.
The page reloads to display an embedded YAML editor showing the pipeline version code.
. Click the *Download* button (image:images/rhoai-download-icon.png[]) to download the YAML file containing your pipeline version code to your local machine.

.Verification
* The pipeline code is downloaded to your browser's default directory for downloaded files.
* The pipeline version code downloads to your browser's default directory for downloaded files.
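
The downloaded file contains the compiled pipeline definition. As a rough illustration only, and not a definitive sample of the generated output, the file resembles a Tekton `PipelineRun` resource; the exact structure depends on your pipeline and compiler version, and the names below are placeholders.

[source,yaml]
----
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-pipeline        # placeholder pipeline name
spec:
  pipelineSpec:
    tasks: []                   # the compiled steps of your pipeline appear here
----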

//[role='_additional-resources']
//.Additional resources
@@ -1,10 +1,10 @@
:_module-type: PROCEDURE

[id="cloning-a-scheduled-pipeline-run_{context}"]
= Cloning a scheduled pipeline run
[id="duplicating-a-scheduled-pipeline-run_{context}"]
= Duplicating a scheduled pipeline run

[role='_abstract']
To make it easier to schedule runs to execute as part of your pipeline configuration, you can duplicate existing scheduled runs by cloning them.
To make it easier to schedule runs to execute as part of your pipeline configuration, you can duplicate existing scheduled runs.

.Prerequisites
* You have installed the OpenShift Pipelines operator.
@@ -17,33 +17,34 @@ ifdef::upstream[]
endif::[]
* You have previously created a data science project that is available and contains a configured pipeline server.
* You have imported a pipeline to an active pipeline server.
* You have previously scheduled a run that is available to clone.
* You have previously scheduled a run that is available to duplicate.

.Procedure
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Runs*.
+
The *Runs* page opens.
. Click the action menu (*⋮*) beside the relevant run and click *Clone*.
. Click the action menu (*⋮*) beside the relevant run and click *Duplicate*.
+
The *Clone* page opens.
. From the *Project* list, select the project that contains the pipeline whose run that you want to clone.
. In the *Name* field, enter a name for the run that you want to clone.
. In the *Description* field, enter a description for the run that you want to clone.
. From the *Pipeline* list, select the pipeline containing the run that you want to clone.
. To configure the run type for the run that you are cloning, in the *Run type* section, perform one of the following sets of actions:
* Select *Run once immediately after create* to specify the run that you are cloning executes once, and immediately after its creation. If you selected this option, skip to step 10.
* Select *Schedule recurring run* to schedule the run that you are cloning to recur.
The *Duplicate* page opens.
. From the *Project* list, select the project that contains the pipeline run that you want to duplicate.
. In the *Name* field, enter a name for the run that you want to duplicate.
. In the *Description* field, enter a description for the run that you want to duplicate.
. From the *Pipeline* list, select the pipeline containing the run that you want to duplicate.
. From the *Pipeline version* list, select the pipeline version containing the run that you want to duplicate.
. To configure the run type for the run that you are duplicating, in the *Run type* section, perform one of the following sets of actions:
* Select *Run once immediately after create* to specify that the run that you are duplicating executes once, immediately after its creation. If you selected this option, skip to step 10.
* Select *Schedule recurring run* to schedule the run that you are duplicating to recur.
. If you selected *Schedule recurring run* in the previous step, to configure the trigger type for the run, perform one of the following actions:
* Select *Periodic* and select the execution frequency from the *Run every* list.
* Select *Cron* to specify the execution schedule in `cron` format. This creates a cron job to execute the run. Click the *Copy* button (image:images/osd-copy.png[]) to copy the cron job schedule to the clipboard. The field furthest to the left represents seconds. For more information about scheduling tasks using the supported `cron` format, see link:https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format[Cron Expression Format].
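+
For example, the following expression, shown here only to illustrate the supported six-field format in which the leftmost field represents seconds, runs at second 0 of minute 0 of every hour:
+
[source]
----
0 0 * * * *
----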
. If you selected *Schedule recurring run* in step 7, configure the duration for the run that you are cloning.
. If you selected *Schedule recurring run* in step 7, configure the duration for the run that you are duplicating.
.. Select the *Start date* check box to specify a start date for the run. Select the start date using the calendar tool and the start time from the list of times.
.. Select the *End date* check box to specify an end date for the run. Select the end date using the calendar tool and the end time from the list of times.
. In the *Parameters* section, configure the input parameters for the run that you are cloning by selecting the appropriate parameters from the list.
. In the *Parameters* section, configure the input parameters for the run that you are duplicating by selecting the appropriate parameters from the list.
. Click *Create*.

.Verification
* The pipeline run that you cloned is shown in the *Scheduled* tab on the *Runs* page.
* The pipeline run that you duplicated is shown in the *Scheduled* tab on the *Runs* page.

//[role='_additional-resources']
//.Additional resources
11 changes: 11 additions & 0 deletions modules/overview-of-pipeline-versions.adoc
@@ -0,0 +1,11 @@
:_module-type: CONCEPT

[id='overview-of-pipeline-versions_{context}']
= Overview of pipeline versions

[role='_abstract']
You can manage incremental changes to pipelines in {productname-short} by using versioning. Versioning allows you to develop and deploy pipelines iteratively while preserving a record of your changes. You can track and manage these changes from the {productname-short} dashboard, and schedule and execute runs against any available version of your pipeline.

//[role="_additional-resources"]
//.Additional resources
//*
5 changes: 3 additions & 2 deletions modules/scheduling-a-pipeline-run.adoc
@@ -22,13 +22,14 @@ endif::[]
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Pipelines*.
+
The *Pipelines* page opens.
. From the *Project* list, select the project that you want to create a run for.
. Click the action menu (*⋮*) beside the relevant pipeline and click *Create run*.
+
The *Create run* page opens.
. From the *Project* list, select the project that contains the pipeline that you want to create a run for.
. In the *Name* field, enter a name for the run.
. In the *Description* field, enter a description for the run.
. From the *Pipeline* list, select the pipeline to create a run for. Alternatively, to upload a new pipeline, click *Upload new pipeline* and fill in the relevant fields in the *Import pipeline* dialog.
. From the *Pipeline* list, select the pipeline that you want to create a run for. Alternatively, to create a new pipeline, click *Create new pipeline* and complete the relevant fields in the *Import pipeline* dialog.
. From the *Pipeline version* list, select the pipeline version to create a run for. Alternatively, to upload a new version, click *Upload new version* and complete the relevant fields in the *Upload new version* dialog.
. Configure the run type by performing one of the following sets of actions:
* Select *Run once immediately after creation* to specify the run executes once, and immediately after its creation.
* Select *Schedule recurring run* to schedule the run to recur.
43 changes: 43 additions & 0 deletions modules/uploading-a-pipeline-version.adoc
@@ -0,0 +1,43 @@
:_module-type: PROCEDURE

[id="uploading-a-pipeline-version_{context}"]
= Uploading a pipeline version

[role='_abstract']
You can upload a YAML file that contains the latest version of your pipeline to an active pipeline server. This file consists of a Kubeflow pipeline compiled with the Tekton compiler. After you upload a pipeline version to a pipeline server, you can execute it by creating a pipeline run.
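
As an illustration only, and not part of this procedure, the following minimal sketch shows how such a file might be produced with the Kubeflow Pipelines SDK and the `kfp-tekton` compiler. The pipeline function, step image, and output file name are placeholders.

[source,python]
----
from kfp import dsl
from kfp_tekton.compiler import TektonCompiler


@dsl.pipeline(
    name="example-pipeline",  # placeholder name, used only for illustration
    description="Placeholder pipeline that compiles to an uploadable YAML file",
)
def example_pipeline():
    # A single trivial step; replace this with your own pipeline components.
    dsl.ContainerOp(
        name="say-hello",
        image="registry.access.redhat.com/ubi8/ubi-minimal",  # placeholder image
        command=["echo", "hello from a pipeline version"],
    )


# Compile the pipeline to a YAML file that you can upload as a new pipeline version.
TektonCompiler().compile(example_pipeline, "example-pipeline.yaml")
----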

.Prerequisites
* You have installed the OpenShift Pipelines operator.
* You have logged in to {productname-long}.
ifndef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {oai-user-group} or {oai-admin-group}) in OpenShift.
endif::[]
ifdef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {odh-user-group} or {odh-admin-group}) in OpenShift.
endif::[]
* You have previously created a data science project that is available and contains a configured pipeline server.
* You have a pipeline version available and ready to upload.

.Procedure
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Pipelines*.
+
The *Pipelines* page opens.
. From the *Project* list, select the project that you want to upload a pipeline version to.
. Click the *Import pipeline* dropdown list and select *Upload new version*.
+
The *Upload new version* dialog opens.
. Enter the details for the pipeline version that you are uploading.
.. From the *Pipeline* list, select the pipeline that you want to upload your pipeline version to.
.. In the *Pipeline version name* field, confirm the name for the pipeline version, and change it if necessary.
.. In the *Pipeline version description* field, enter a description for the pipeline version.
.. Click *Upload*. Alternatively, drag the file from your local machine's file system and drop it in the designated area in the *Upload new version* dialog.
+
A file browser opens.
.. Navigate to the file containing the pipeline version code and click *Select*.
.. Click *Upload*.

.Verification
* The pipeline version that you uploaded is displayed on the *Pipelines* page. Click *Expand* (image:images/rhoai-expand-icon.png[]) on the row containing the pipeline to view its versions.

//[role='_additional-resources']
//.Additional resources
32 changes: 32 additions & 0 deletions modules/viewing-pipeline-versions.adoc
@@ -0,0 +1,32 @@
:_module-type: PROCEDURE

[id="viewing-pipeline-versions_{context}"]
= Viewing pipeline versions

[role='_abstract']
You can view all versions for a pipeline on the *Pipelines* page.

.Prerequisites
* You have installed the OpenShift Pipelines operator.
* You have logged in to {productname-long}.
ifndef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {oai-user-group} or {oai-admin-group}) in OpenShift.
endif::[]
ifdef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {odh-user-group} or {odh-admin-group}) in OpenShift.
endif::[]
* You have previously created a data science project that is available and contains a pipeline server.
* You have a pipeline available on an active and available pipeline server.

.Procedure
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Pipelines*.
+
The *Pipelines* page opens.
. From the *Project* list, select the project containing the pipeline versions that you want to view.
. Click *Expand* (image:images/rhoai-expand-icon.png[]) on the row containing the pipeline that you want to view versions for.

.Verification
* You can view the versions of the pipeline on the *Pipelines* page.

//[role='_additional-resources']
//.Additional resources
2 changes: 1 addition & 1 deletion modules/viewing-scheduled-pipeline-runs.adoc
@@ -4,7 +4,7 @@
= Viewing scheduled pipeline runs

[role='_abstract']
You can view a list of pipeline runs that are scheduled for execution in {productname-short}. From this list, you can view details relating to your pipeline's runs, such as the pipeline that the run belongs to. You can also view the run's status, execution frequency, and schedule.
You can view a list of pipeline runs that are scheduled for execution in {productname-short}. From this list, you can view details relating to your pipeline runs, such as the pipeline version that the run belongs to. You can also view the run status, execution frequency, and schedule.

.Prerequisites

35 changes: 35 additions & 0 deletions modules/viewing-the-details-of-a-pipeline-version.adoc
@@ -0,0 +1,35 @@
:_module-type: PROCEDURE

[id="viewing-the-details-of-a-pipeline-version_{context}"]
= Viewing the details of a pipeline version

[role='_abstract']
You can view the details of a pipeline version that you have uploaded to {productname-long}, such as its graph and YAML code.

.Prerequisites
* You have installed the OpenShift Pipelines operator.
* You have logged in to {productname-long}.
ifndef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {oai-user-group} or {oai-admin-group}) in OpenShift.
endif::[]
ifdef::upstream[]
* If you are using specialized {productname-short} groups, you are part of the user group or admin group (for example, {odh-user-group} or {odh-admin-group}) in OpenShift.
endif::[]
* You have previously created a data science project that is available and contains a pipeline server.
* You have a pipeline available on an active and available pipeline server.

.Procedure
. From the {productname-short} dashboard, click *Data Science Pipelines* -> *Pipelines*.
+
The *Pipelines* page opens.
. From the *Project* list, select the project containing the pipeline versions that you want to view details for.
. Click *Expand* (image:images/rhoai-expand-icon.png[]) on the row containing the pipeline that you want to view versions for.
. Click the pipeline version that you want to view the details of.
+
The *Pipeline details* page opens, displaying the *Graph* and *YAML* tabs.

.Verification
* On the *Pipeline details* page, you can view the pipeline graph and YAML code.

//[role='_additional-resources']
//.Additional resources