From a833bf84e87cc13c78f938dff71fdcec53f0f2d2 Mon Sep 17 00:00:00 2001 From: Gayathri M Date: Thu, 20 Jun 2024 13:01:32 +0530 Subject: [PATCH 1/3] Doc update for ODF --- examples/openshift-data-foundation/README.md | 2 +- .../satellite/odf-local/4.13/README.md | 4 ++++ .../satellite/odf-local/4.14/README.md | 5 +++++ .../satellite/odf-local/4.15/README.md | 5 +++++ .../satellite/odf-remote/4.13/README.md | 5 +++++ .../satellite/odf-remote/4.14/README.md | 5 +++++ .../satellite/odf-remote/4.15/README.md | 5 +++++ .../openshift-data-foundation/vpc-worker-replace/README.md | 2 +- 8 files changed, 31 insertions(+), 2 deletions(-) diff --git a/examples/openshift-data-foundation/README.md b/examples/openshift-data-foundation/README.md index 7c09ba4aa84..f0c7a9544df 100644 --- a/examples/openshift-data-foundation/README.md +++ b/examples/openshift-data-foundation/README.md @@ -6,7 +6,7 @@ OpenShift Data Foundation is a highly available storage solution that you can us If you'd like to Deploy and Manage the different configurations for ODF on a Red Hat OpenShift Cluster (VPC) head over to the [addon](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/addon) folder. -## Updating or replacing VPC worker nodes that use OpenShift Data Foundation +## Updating or replacing worker nodes that use OpenShift Data Foundation on ROKS VPC If you'd like to update or replace the different worker nodes with ODF enabled, head over to the [vpc-worker-replace](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/vpc-worker-replace) folder. This inherently covers the worker replace steps of sequential cordon, drain, and replace. 
diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.13/README.md b/examples/openshift-data-foundation/satellite/odf-local/4.13/README.md index 3cec5f07096..3593da16320 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.13/README.md +++ b/examples/openshift-data-foundation/satellite/odf-local/4.13/README.md @@ -108,7 +108,11 @@ In this example we set the `updateConfigRevision` parameter to true in order to You could also use `updateAssignments` to directly update the storage configuration's assignments, but if you have a dependent `storage_assignment` resource, it's lifecycle will be affected. It it recommended to use this parameter when you've only defined the `storage_configuration` resource. ### Upgrade of ODF +**Step 1:** +Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to upgrade your worker nodes. +**Step 2:** +Follow the steps below to upgrade ODF to the next version. The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD. * storageTemplateVersion - Specify the version you wish to upgrade to diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.14/README.md b/examples/openshift-data-foundation/satellite/odf-local/4.14/README.md index 8efbc71266c..b91b3236a42 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.14/README.md +++ b/examples/openshift-data-foundation/satellite/odf-local/4.14/README.md @@ -115,6 +115,11 @@ You could also use `updateAssignments` to directly update the storage configurat ### Upgrade of ODF +**Step 1:** +Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to upgrade your worker nodes. + +**Step 2:** +Follow the steps below to upgrade ODF to the next version. 
The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD. * storageTemplateVersion - Specify the version you wish to upgrade to diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md b/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md index 6efecedf23e..e78cf3e5a80 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md +++ b/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md @@ -115,6 +115,11 @@ You could also use `updateAssignments` to directly update the storage configurat ### Upgrade of ODF +**Step 1:** +Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to upgrade your worker nodes. + +**Step 2:** +Follow the steps below to upgrade ODF to the next version. The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD. * storageTemplateVersion - Specify the version you wish to upgrade to diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.13/README.md b/examples/openshift-data-foundation/satellite/odf-remote/4.13/README.md index 602b9327c8f..61711b7838b 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.13/README.md +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.13/README.md @@ -109,6 +109,11 @@ You could also use `updateAssignments` to directly update the storage configurat ### Upgrade of ODF +**Step 1:** +Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to upgrade your worker nodes. + +**Step 2:** +Follow the steps below to upgrade ODF to the next version. 
The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD. * storageTemplateVersion - Specify the version you wish to upgrade to diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.14/README.md b/examples/openshift-data-foundation/satellite/odf-remote/4.14/README.md index 1a07c9e6703..72ab614ee90 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.14/README.md +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.14/README.md @@ -115,6 +115,11 @@ You could also use `updateAssignments` to directly update the storage configurat ### Upgrade of ODF +**Step 1:** +Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to upgrade your worker nodes. + +**Step 2:** +Follow the steps below to upgrade ODF to the next version. The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD. * storageTemplateVersion - Specify the version you wish to upgrade to diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md b/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md index ea43906b416..f640cbf1962 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md @@ -115,6 +115,11 @@ You could also use `updateAssignments` to directly update the storage configurat ### Upgrade of ODF +**Step 1:** +Follow steps 1 through 7 of the [worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to upgrade your worker nodes. + +**Step 2:** +Follow the steps below to upgrade ODF to the next version. 
The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD. * storageTemplateVersion - Specify the version you wish to upgrade to diff --git a/examples/openshift-data-foundation/vpc-worker-replace/README.md b/examples/openshift-data-foundation/vpc-worker-replace/README.md index 3181dbb8c2e..0e7e8109d7e 100644 --- a/examples/openshift-data-foundation/vpc-worker-replace/README.md +++ b/examples/openshift-data-foundation/vpc-worker-replace/README.md @@ -1,4 +1,4 @@ -# OpenShift-Data-Foundation VPC Worker Replace +# OpenShift-Data-Foundation VPC ROKS Worker Replace This example shows how to replace & update the Kubernetes VPC Gen-2 worker installed with Openshift-Data-Foundation to the latest patch in the specified cluster. From 6eb1f7f0cfa38ce5ab1d8768ff66fd8b8db70ec9 Mon Sep 17 00:00:00 2001 From: Gayathri M Date: Thu, 20 Jun 2024 15:12:53 +0530 Subject: [PATCH 2/3] workerpool added for 4.15 --- .../addon/4.12.0/README.md | 2 +- .../addon/4.13.0/README.md | 2 +- .../addon/4.14.0/README.md | 2 +- .../addon/4.15.0/README.md | 5 +++- .../addon/4.15.0/main.tf | 25 ++++++++-------- .../addon/4.15.0/ocscluster/main.tf | 3 +- .../addon/4.15.0/schematics.tfvars | 3 +- .../addon/4.15.0/variables.tf | 29 ++++++++++++------- .../satellite/odf-local/4.15/README.md | 9 ++++-- .../satellite/odf-local/4.15/input.tfvars | 5 ++-- .../satellite/odf-local/4.15/main.tf | 1 + .../satellite/odf-local/4.15/variables.tf | 12 ++++++-- .../satellite/odf-remote/4.15/README.md | 7 ++++- .../satellite/odf-remote/4.15/input.tfvars | 5 ++-- .../satellite/odf-remote/4.15/main.tf | 1 + .../satellite/odf-remote/4.15/variables.tf | 10 +++++-- 16 files changed, 80 insertions(+), 41 deletions(-) diff --git a/examples/openshift-data-foundation/addon/4.12.0/README.md b/examples/openshift-data-foundation/addon/4.12.0/README.md index 884e9f4ade1..29ba5e821c7 100644 --- a/examples/openshift-data-foundation/addon/4.12.0/README.md +++ 
b/examples/openshift-data-foundation/addon/4.12.0/README.md @@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde To run this example on your Terminal, first download this directory i.e `examples/openshift-data-foundation/` ```bash -$ cd addon +$ cd addon/4.12.0 ``` ```bash diff --git a/examples/openshift-data-foundation/addon/4.13.0/README.md b/examples/openshift-data-foundation/addon/4.13.0/README.md index 035ae723977..5635fe7f62f 100644 --- a/examples/openshift-data-foundation/addon/4.13.0/README.md +++ b/examples/openshift-data-foundation/addon/4.13.0/README.md @@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde To run this example on your Terminal, first download this directory i.e `examples/openshift-data-foundation/` ```bash -$ cd addon +$ cd addon/4.13.0 ``` ```bash diff --git a/examples/openshift-data-foundation/addon/4.14.0/README.md b/examples/openshift-data-foundation/addon/4.14.0/README.md index 7fe4e508ddd..38ff4dbf94b 100644 --- a/examples/openshift-data-foundation/addon/4.14.0/README.md +++ b/examples/openshift-data-foundation/addon/4.14.0/README.md @@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde To run this example on your Terminal, first download this directory i.e `examples/openshift-data-foundation/` ```bash -$ cd addon +$ cd addon/4.14.0 ``` ```bash diff --git a/examples/openshift-data-foundation/addon/4.15.0/README.md b/examples/openshift-data-foundation/addon/4.15.0/README.md index 93bb4cfeb13..2f80a6ab8fa 100644 --- a/examples/openshift-data-foundation/addon/4.15.0/README.md +++ b/examples/openshift-data-foundation/addon/4.15.0/README.md @@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde To run this example on your Terminal, first download this directory i.e `examples/openshift-data-foundation/` ```bash -$ cd addon +$ cd addon/4.15.0 ``` ```bash @@ -93,6 
+93,7 @@ ocsUpgrade = "false" osdDevicePaths = null osdSize = "512Gi" osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier" +workerPools = null workerNodes = null encryptionInTransit = false taintNodes = false @@ -111,6 +112,7 @@ The following variables in the `schematics.tfvars` file can be edited * numOfOsd - To scale your storage * workerNodes - To increase the number of Worker Nodes with ODF +* workerPools - To add storage nodes by specifying additional worker pools ```hcl # For CRD Management @@ -175,6 +177,7 @@ ocsUpgrade = "false" -> "true" | ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false | ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false | osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or standard classic cluster | `string` | no | null +| workerPools | A list of the worker pool names on which to deploy ODF. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is deployed on all worker nodes | `string` | no | null | workerNodes | Provide the names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null | encryptionInTransit |To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one. | `bool` | no | false | taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. 
| `bool` | no | false diff --git a/examples/openshift-data-foundation/addon/4.15.0/main.tf b/examples/openshift-data-foundation/addon/4.15.0/main.tf index e9d130535dd..7a8b06d6f46 100644 --- a/examples/openshift-data-foundation/addon/4.15.0/main.tf +++ b/examples/openshift-data-foundation/addon/4.15.0/main.tf @@ -1,14 +1,14 @@ resource "null_resource" "customResourceGroup" { provisioner "local-exec" { - + when = create command = "sh ./createcrd.sh" - + } provisioner "local-exec" { - + when = destroy command = "sh ./deletecrd.sh" @@ -18,7 +18,7 @@ resource "null_resource" "customResourceGroup" { null_resource.addOn ] - + } @@ -35,9 +35,9 @@ resource "null_resource" "addOn" { when = destroy command = "sh ./deleteaddon.sh" - + } - + } @@ -47,18 +47,19 @@ resource "null_resource" "updateCRD" { numOfOsd = var.numOfOsd ocsUpgrade = var.ocsUpgrade workerNodes = var.workerNodes + workerPools = var.workerPools osdDevicePaths = var.osdDevicePaths taintNodes = var.taintNodes addSingleReplicaPool = var.addSingleReplicaPool resourceProfile = var.resourceProfile enableNFS = var.enableNFS } - + provisioner "local-exec" { - + command = "sh ./updatecrd.sh" - + } depends_on = [ @@ -78,11 +79,11 @@ resource "null_resource" "upgradeODF" { provisioner "local-exec" { command = "sh ./updateodf.sh" - + } - + depends_on = [ null_resource.customResourceGroup, null_resource.addOn ] -} \ No newline at end of file +} diff --git a/examples/openshift-data-foundation/addon/4.15.0/ocscluster/main.tf b/examples/openshift-data-foundation/addon/4.15.0/ocscluster/main.tf index cf38d92459f..43d6d31bf6d 100644 --- a/examples/openshift-data-foundation/addon/4.15.0/ocscluster/main.tf +++ b/examples/openshift-data-foundation/addon/4.15.0/ocscluster/main.tf @@ -50,6 +50,7 @@ resource "kubernetes_manifest" "ocscluster_ocscluster_auto" { "osdDevicePaths" = var.osdDevicePaths==null ? 
null : split(",", var.osdDevicePaths), "osdSize" = var.osdSize, "osdStorageClassName" = var.osdStorageClassName, + "workerPools" = var.workerPools==null ? null : split(",", var.workerPools), "workerNodes" = var.workerNodes==null ? null : split(",", var.workerNodes), "encryptionInTransit" = var.encryptionInTransit, "taintNodes" = var.taintNodes, @@ -65,4 +66,4 @@ resource "kubernetes_manifest" "ocscluster_ocscluster_auto" { field_manager { force_conflicts = true } -} \ No newline at end of file +} diff --git a/examples/openshift-data-foundation/addon/4.15.0/schematics.tfvars b/examples/openshift-data-foundation/addon/4.15.0/schematics.tfvars index f86fc55992c..383fd86e37c 100644 --- a/examples/openshift-data-foundation/addon/4.15.0/schematics.tfvars +++ b/examples/openshift-data-foundation/addon/4.15.0/schematics.tfvars @@ -25,6 +25,7 @@ ocsUpgrade = "false" osdDevicePaths = null osdSize = "512Gi" osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier" +workerPools = null workerNodes = null encryptionInTransit = false taintNodes = false @@ -33,4 +34,4 @@ prepareForDisasterRecovery = false disableNoobaaLB = false useCephRBDAsDefaultStorageClass = false enableNFS = false -resourceProfile = "balanced" \ No newline at end of file +resourceProfile = "balanced" diff --git a/examples/openshift-data-foundation/addon/4.15.0/variables.tf b/examples/openshift-data-foundation/addon/4.15.0/variables.tf index f1ce8fc71d6..648d9b25afc 100644 --- a/examples/openshift-data-foundation/addon/4.15.0/variables.tf +++ b/examples/openshift-data-foundation/addon/4.15.0/variables.tf @@ -2,7 +2,7 @@ variable "ibmcloud_api_key" { type = string description = "IBM Cloud API Key" - + } variable "cluster" { @@ -15,7 +15,7 @@ variable "region" { type = string description = "Enter Cluster Region" - + } variable "odfVersion" { @@ -23,7 +23,7 @@ variable "odfVersion" { type = string default = "4.15.0" description = "Provide the ODF Version you wish to install on your cluster" - + } variable 
"numOfOsd" { @@ -31,7 +31,7 @@ variable "numOfOsd" { type = string default = "1" description = "Number of Osd" - + } variable "osdDevicePaths" { @@ -48,7 +48,7 @@ variable "ocsUpgrade" { default = "false" description = "Set to true to upgrade Ocscluster" - + } variable "clusterEncryption" { @@ -62,14 +62,14 @@ variable "billingType" { type = string default = "advanced" description = "Choose between advanced and essentials" - + } variable "ignoreNoobaa" { type = bool default = false description = "Set to true if you do not want MultiCloudGateway" - + } variable "osdSize" { @@ -82,7 +82,7 @@ variable "osdStorageClassName" { type = string default = "ibmc-vpc-block-metro-10iops-tier" description = "Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods." - + } variable "autoDiscoverDevices" { @@ -98,7 +98,7 @@ variable "hpcsEncryption" { type = string default = "false" description = "Set to true to enable HPCS Encryption" - + } variable "hpcsServiceName" { @@ -115,6 +115,13 @@ variable "hpcsSecretName" { description = "Please provide the HPCS secret name" } +variable "workerPools" { + + type = string + default = null + description = "A list of the worker pool names on which to deploy ODF. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is deployed on all worker nodes" +} + variable "workerNodes" { type = string @@ -148,7 +155,7 @@ variable "encryptionInTransit" { type = bool default = false description = "Enter true to enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one." 
- + } variable "taintNodes" { @@ -204,4 +211,4 @@ variable "resourceProfile" { default = "balanced" description = "Provides an option to choose a resource profile based on the availability of resources during deployment. Choose between lean, balanced and performance." -} \ No newline at end of file +} diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md b/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md index e78cf3e5a80..c1c97b72c2c 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md +++ b/examples/openshift-data-foundation/satellite/odf-local/4.15/README.md @@ -40,7 +40,7 @@ https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform The default input.tfvars is given below, the user should just change the value of the parameters in accorandance to their requirment. ```hcl -# Common for both storage configuration and assignment +# Common for both storage configuration and assignment ibmcloud_api_key = "" location = "" #Location of your storage configuration and assignment configName = "" #Name of your storage configuration @@ -66,6 +66,7 @@ ibmCosLocation = null ignoreNoobaa = false numOfOsd = "1" ocsUpgrade = "false" +workerPools = null workerNodes = null encryptionInTransit = false disableNoobaaLB = false @@ -103,11 +104,13 @@ The following variables in the `input.tfvars` file can be edited * numOfOsd - To scale your storage * workerNodes - To increase the number of Worker Nodes with ODF +* workerPools - To add storage nodes by including new worker pools ```hcl numOfOsd = "1" -> "2" workerNodes = null -> "worker_1_ID,worker_2_ID" updateConfigRevision = true +workerPools = "workerpool_1" -> "workerpool_1,workerpool_2" ``` In this example we set the `updateConfigRevision` parameter to true in order to update our storage assignment with the latest configuration revision i.e the OcsCluster CRD is updated with the latest changes. 
@@ -175,7 +178,8 @@ Note this operation deletes the existing configuration and it's respective assig | ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false | ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false | osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or standard classic cluster | `string` | no | null -| workerNodes | Provide the names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null +| workerPools | Provide the names or IDs of the worker pools on which to install ODF. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is installed on all workers | `string` | no | null +| workerNodes | Provide the names of the worker nodes on which to install ODF. | `string` | no | null | encryptionInTransit |To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one. | `bool` | no | false | taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. | `bool` | no | false | addSingleReplicaPool | Specify true to create a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability. | `bool` | no | false @@ -190,5 +194,6 @@ Refer - https://cloud.ibm.com/docs/satellite?topic=satellite-storage-odf-local&i ## Note * Users should only change the values of the variables within quotes, variables should be left untouched with the default values if they are not set. 
+* `workerPools` takes a string containing comma-separated names of the worker pools you wish to enable ODF on. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is installed on all workers * `workerNodes` takes a string containing comma separated values of the names of the worker nodes you wish to enable ODF on. * During ODF Storage Template Update, it is recommended to delete all terraform related assignments before handed, as their lifecycle will be affected, during update new storage assignments are made back internally with new UUIDs. diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.15/input.tfvars b/examples/openshift-data-foundation/satellite/odf-local/4.15/input.tfvars index 12695afe696..a3032807329 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.15/input.tfvars +++ b/examples/openshift-data-foundation/satellite/odf-local/4.15/input.tfvars @@ -2,7 +2,7 @@ ## Please change according to your configuratiom ## -# Common for both storage configuration and assignment +# Common for both storage configuration and assignment ibmcloud_api_key = "" location = "" #Location of your storage configuration and assignment configName = "" #Name of your storage configuration @@ -30,6 +30,7 @@ ibmCosLocation = null ignoreNoobaa = false numOfOsd = "1" ocsUpgrade = "false" +workerPools = null workerNodes = null encryptionInTransit = false disableNoobaaLB = false @@ -56,4 +57,4 @@ updateConfigRevision = false ## NOTE ## # The following variables will cause issues to your storage assignment lifecycle, so please use only with a storage configuration resource. 
deleteAssignments = false -updateAssignments = false \ No newline at end of file +updateAssignments = false diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.15/main.tf b/examples/openshift-data-foundation/satellite/odf-local/4.15/main.tf index b7d1984fb8d..f8bf9dd51b6 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.15/main.tf +++ b/examples/openshift-data-foundation/satellite/odf-local/4.15/main.tf @@ -35,6 +35,7 @@ resource "ibm_satellite_storage_configuration" "storage_configuration" { "perform-cleanup"= var.performCleanup, "disable-noobaa-LB"= var.disableNoobaaLB, "encryption-intransit"= var.encryptionInTransit, + "worker-pools"=var.workerPools, "worker-nodes"= var.workerNodes, "add-single-replica-pool" = var.addSingleReplicaPool, "taint-nodes" = var.taintNodes, diff --git a/examples/openshift-data-foundation/satellite/odf-local/4.15/variables.tf b/examples/openshift-data-foundation/satellite/odf-local/4.15/variables.tf index 8c72dd62980..c96922ea016 100644 --- a/examples/openshift-data-foundation/satellite/odf-local/4.15/variables.tf +++ b/examples/openshift-data-foundation/satellite/odf-local/4.15/variables.tf @@ -129,7 +129,7 @@ variable "osdStorageClassName" { type = string default = "ibmc-vpc-block-metro-10iops-tier" description = "Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods." - + } variable "autoDiscoverDevices" { @@ -156,10 +156,16 @@ variable "kmsSecretName" { description = "Please provide the HPCS secret name" } +variable "workerPools" { + type = string + default = null + description = "Provide the names or IDs of the worker pools on which to install ODF. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is installed on all workers." +} + variable "workerNodes" { type = string default = null - description = "Provide the names of the worker nodes on which to install ODF. 
Leave blank to install ODF on all worker nodes." + description = "Provide the names of the worker nodes on which to install ODF." } variable "kmsInstanceId" { @@ -260,4 +266,4 @@ variable "resourceProfile" { default = "balanced" description = "Provides an option to choose a resource profile based on the availability of resources during deployment. Choose between lean, balanced and performance." -} \ No newline at end of file +} diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md b/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md index f640cbf1962..742d0c469d3 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.15/README.md @@ -40,7 +40,7 @@ https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform The default input.tfvars is given below, the user should just change the value of the parameters in accorandance to their requirment. 
```hcl -# Common for both storage configuration and assignment +# Common for both storage configuration and assignment ibmcloud_api_key = "" location = "" #Location of your storage configuration and assignment configName = "" #Name of your storage configuration @@ -66,6 +66,7 @@ numOfOsd = "1" ocsUpgrade = "false" osdSize = "512Gi" osdStorageClassName = "ibmc-vpc-block-metro-5iops-tier" +workerPools = null workerNodes = null encryptionInTransit = false disableNoobaaLB = false @@ -103,11 +104,13 @@ The following variables in the `input.tfvars` file can be edited * numOfOsd - To scale your storage * workerNodes - To increase the number of Worker Nodes with ODF +* workerPools - To add storage nodes by including new worker pools ```hcl numOfOsd = "1" -> "2" workerNodes = null -> "worker_1_ID,worker_2_ID" updateConfigRevision = true +workerPools = "workerpool_1" -> "workerpool_1,workerpool_2" ``` In this example we set the `updateConfigRevision` parameter to true in order to update our storage assignment with the latest configuration revision i.e the OcsCluster CRD is updated with the latest changes. @@ -175,6 +178,7 @@ Note this operation deletes the existing configuration and it's respective assig | kmsTokenUrl | The HPCS Token URL | `string` | no | null | ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false | ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false +| workerPools | Provide the names or IDs of the worker pools on which to install ODF. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is installed on all workers | `string` | no | null | workerNodes | Provide the names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null | encryptionInTransit |To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. 
After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one. | `bool` | no | false | taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. | `bool` | no | false @@ -190,5 +194,6 @@ Refer - https://cloud.ibm.com/docs/satellite?topic=satellite-storage-odf-remote& ## Note * Users should only change the values of the variables within quotes, variables should be left untouched with the default values if they are not set. +* `workerPools` takes a string containing comma-separated names of the worker pools you wish to enable ODF on. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is installed on all workers * `workerNodes` takes a string containing comma separated values of the names of the worker nodes you wish to enable ODF on. * During ODF Storage Template Update, it is recommended to delete all terraform related assignments before handed, as their lifecycle will be affected, during update new storage assignments are made back internally with new UUIDs. 
diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.15/input.tfvars b/examples/openshift-data-foundation/satellite/odf-remote/4.15/input.tfvars index 1f4cf9ea29c..b12d17ef5c4 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.15/input.tfvars +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.15/input.tfvars @@ -2,7 +2,7 @@ ## Please change according to your configuratiom ## -# Common for both storage configuration and assignment +# Common for both storage configuration and assignment ibmcloud_api_key = "" location = "" #Location of your storage configuration and assignment configName = "" #Name of your storage configuration @@ -30,6 +30,7 @@ numOfOsd = "1" ocsUpgrade = "false" osdSize = "512Gi" osdStorageClassName = "ibmc-vpc-block-metro-5iops-tier" +workerPools = null workerNodes = null encryptionInTransit = false disableNoobaaLB = false @@ -56,4 +57,4 @@ updateConfigRevision = false ## NOTE ## # The following variables will cause issues to your storage assignment lifecycle, so please use only with a storage configuration resource. 
deleteAssignments = false -updateAssignments = false \ No newline at end of file +updateAssignments = false diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.15/main.tf b/examples/openshift-data-foundation/satellite/odf-remote/4.15/main.tf index 9b0598daee3..169e9aab80b 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.15/main.tf +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.15/main.tf @@ -35,6 +35,7 @@ resource "ibm_satellite_storage_configuration" "storage_configuration" { "perform-cleanup"= var.performCleanup, "disable-noobaa-LB"= var.disableNoobaaLB, "encryption-intransit"= var.encryptionInTransit, + "worker-pools"=var.workerPools, "worker-nodes"= var.workerNodes, "add-single-replica-pool" = var.addSingleReplicaPool, "taint-nodes" = var.taintNodes, diff --git a/examples/openshift-data-foundation/satellite/odf-remote/4.15/variables.tf b/examples/openshift-data-foundation/satellite/odf-remote/4.15/variables.tf index 8c72dd62980..26e263a282f 100644 --- a/examples/openshift-data-foundation/satellite/odf-remote/4.15/variables.tf +++ b/examples/openshift-data-foundation/satellite/odf-remote/4.15/variables.tf @@ -129,7 +129,7 @@ variable "osdStorageClassName" { type = string default = "ibmc-vpc-block-metro-10iops-tier" description = "Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods." - + } variable "autoDiscoverDevices" { @@ -156,6 +156,12 @@ variable "kmsSecretName" { description = "Please provide the HPCS secret name" } +variable "workerPools" { + type = string + default = null + description = "Provide the names or IDs of the worker pools on which to install ODF. Specify either workerPools or workerNodes to select storage nodes; if neither is specified, ODF is installed on all workers." 
+} + variable "workerNodes" { type = string default = null @@ -260,4 +266,4 @@ variable "resourceProfile" { default = "balanced" description = "Provides an option to choose a resource profile based on the availability of resources during deployment. Choose between lean, balanced and performance." -} \ No newline at end of file +} From c74c5060ee6c84cea99501744a4368d3190a1f22 Mon Sep 17 00:00:00 2001 From: gayathrimenath-ibm <82875471+gayathrimenath-ibm@users.noreply.github.com> Date: Tue, 9 Jul 2024 13:58:03 +0530 Subject: [PATCH 3/3] Update examples/openshift-data-foundation/README.md Co-authored-by: derekpoindexter <45565883+derekpoindexter@users.noreply.github.com> --- examples/openshift-data-foundation/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/openshift-data-foundation/README.md b/examples/openshift-data-foundation/README.md index f0c7a9544df..2be7519a2d5 100644 --- a/examples/openshift-data-foundation/README.md +++ b/examples/openshift-data-foundation/README.md @@ -6,7 +6,7 @@ OpenShift Data Foundation is a highly available storage solution that you can us If you'd like to Deploy and Manage the different configurations for ODF on a Red Hat OpenShift Cluster (VPC) head over to the [addon](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/addon) folder. -## Updating or replacing worker nodes that use OpenShift Data Foundation on ROKS VPC +## Updating or replacing worker nodes that use OpenShift Data Foundation on VPC clusters If you'd like to update or replace the different worker nodes with ODF enabled, head over to the [vpc-worker-replace](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/vpc-worker-replace) folder. This inherently covers the worker replace steps of sequential cordon, drain, and replace.
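The `workerPools` option these patches add is driven by a single tfvars setting. A minimal sketch of how it might look in `input.tfvars` (the pool names below are placeholders, not values from this PR):

```hcl
# Deploy ODF only on the nodes in these worker pools
# (comma-separated string, matching the workerPools variable added above).
workerPools = "storage-pool-1,storage-pool-2"

# Specify either workerPools or workerNodes, not both;
# leaving both null deploys ODF on all worker nodes.
workerNodes = null
```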