Doc update for ODF #5454

Merged 3 commits on Jul 18, 2024
2 changes: 1 addition & 1 deletion examples/openshift-data-foundation/README.md
@@ -6,7 +6,7 @@ OpenShift Data Foundation is a highly available storage solution that you can us

If you'd like to deploy and manage the different configurations for ODF on a Red Hat OpenShift Cluster (VPC), head over to the [addon](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/addon) folder.

## Updating or replacing worker nodes that use OpenShift Data Foundation on VPC clusters

If you'd like to update or replace worker nodes with ODF enabled, head over to the [vpc-worker-replace](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/vpc-worker-replace) folder. This covers the worker replacement steps of sequential cordon, drain, and replace.
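The sequential cordon, drain, and replace flow can be sketched as a Terraform `null_resource`; this is an illustrative outline only, not the actual contents of the `vpc-worker-replace` example, and the variable names (`worker_id`, `cluster`) are assumptions:

```hcl
# Illustrative sketch of sequential cordon, drain, and replace.
# Variable names and exact commands are assumptions, not the real example code.
resource "null_resource" "worker_replace" {
  triggers = {
    worker_id = var.worker_id # re-run when the target worker changes
  }

  provisioner "local-exec" {
    command = <<-EOT
      kubectl cordon ${var.worker_id}
      kubectl drain ${var.worker_id} --ignore-daemonsets --delete-emptydir-data
      ibmcloud ks worker replace --cluster ${var.cluster} --worker ${var.worker_id} -f
    EOT
  }
}
```

Draining before replacement lets ODF reschedule its pods and rebalance data onto the remaining storage nodes instead of losing a replica abruptly.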

2 changes: 1 addition & 1 deletion examples/openshift-data-foundation/addon/4.12.0/README.md
@@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde
To run this example in your terminal, first download this directory, i.e. `examples/openshift-data-foundation/`

```bash
$ cd addon/4.12.0
```

```bash
2 changes: 1 addition & 1 deletion examples/openshift-data-foundation/addon/4.13.0/README.md
@@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde
To run this example in your terminal, first download this directory, i.e. `examples/openshift-data-foundation/`

```bash
$ cd addon/4.13.0
```

```bash
2 changes: 1 addition & 1 deletion examples/openshift-data-foundation/addon/4.14.0/README.md
@@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde
To run this example in your terminal, first download this directory, i.e. `examples/openshift-data-foundation/`

```bash
$ cd addon/4.14.0
```

```bash
5 changes: 4 additions & 1 deletion examples/openshift-data-foundation/addon/4.15.0/README.md
@@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde
To run this example in your terminal, first download this directory, i.e. `examples/openshift-data-foundation/`

```bash
$ cd addon/4.15.0
```

```bash
@@ -93,6 +93,7 @@ ocsUpgrade = "false"
osdDevicePaths = null
osdSize = "512Gi"
osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier"
workerPools = null
workerNodes = null
encryptionInTransit = false
taintNodes = false
@@ -111,6 +112,7 @@ The following variables in the `schematics.tfvars` file can be edited

* numOfOsd - To scale your storage
* workerNodes - To increase the number of Worker Nodes with ODF
* workerPools - To increase the number of storage nodes by adding more worker pools

```hcl
# For CRD Management
@@ -175,6 +177,7 @@ ocsUpgrade = "false" -> "true"
| ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false
| ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false
| osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or standard classic cluster | `string` | no | null
| workerPools | A list of the worker pool names where you want to deploy ODF. Specify either workerPools or workerNodes to deploy ODF; if neither is specified, ODF is deployed on all nodes | `string` | no | null
| workerNodes | Provide the names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null
| encryptionInTransit |To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one. | `bool` | no | false
| taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. | `bool` | no | false
25 changes: 13 additions & 12 deletions examples/openshift-data-foundation/addon/4.15.0/main.tf
@@ -1,14 +1,14 @@
resource "null_resource" "customResourceGroup" {

provisioner "local-exec" {

when = create
command = "sh ./createcrd.sh"

}

provisioner "local-exec" {

when = destroy
command = "sh ./deletecrd.sh"

@@ -18,7 +18,7 @@ resource "null_resource" "customResourceGroup" {
null_resource.addOn
]


}


@@ -35,9 +35,9 @@ resource "null_resource" "addOn" {

when = destroy
command = "sh ./deleteaddon.sh"

}

}


@@ -47,18 +47,19 @@ resource "null_resource" "updateCRD" {
numOfOsd = var.numOfOsd
ocsUpgrade = var.ocsUpgrade
workerNodes = var.workerNodes
workerPools = var.workerPools
osdDevicePaths = var.osdDevicePaths
taintNodes = var.taintNodes
addSingleReplicaPool = var.addSingleReplicaPool
resourceProfile = var.resourceProfile
enableNFS = var.enableNFS
}


provisioner "local-exec" {

command = "sh ./updatecrd.sh"

}

depends_on = [
@@ -78,11 +79,11 @@ resource "null_resource" "upgradeODF" {
provisioner "local-exec" {

command = "sh ./updateodf.sh"

}

depends_on = [
null_resource.customResourceGroup, null_resource.addOn
]

}
@@ -50,6 +50,7 @@ resource "kubernetes_manifest" "ocscluster_ocscluster_auto" {
"osdDevicePaths" = var.osdDevicePaths==null ? null : split(",", var.osdDevicePaths),
"osdSize" = var.osdSize,
"osdStorageClassName" = var.osdStorageClassName,
"workerPools" = var.workerPools==null ? null : split(",", var.workerPools),
"workerNodes" = var.workerNodes==null ? null : split(",", var.workerNodes),
"encryptionInTransit" = var.encryptionInTransit,
"taintNodes" = var.taintNodes,
@@ -65,4 +66,4 @@ field_manager {
field_manager {
force_conflicts = true
}
}
@@ -25,6 +25,7 @@ ocsUpgrade = "false"
osdDevicePaths = null
osdSize = "512Gi"
osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier"
workerPools = null
workerNodes = null
encryptionInTransit = false
taintNodes = false
@@ -33,4 +34,4 @@ prepareForDisasterRecovery = false
disableNoobaaLB = false
useCephRBDAsDefaultStorageClass = false
enableNFS = false
resourceProfile = "balanced"
29 changes: 18 additions & 11 deletions examples/openshift-data-foundation/addon/4.15.0/variables.tf
@@ -2,7 +2,7 @@ variable "ibmcloud_api_key" {

type = string
description = "IBM Cloud API Key"

}

variable "cluster" {
@@ -15,23 +15,23 @@ variable "region" {

type = string
description = "Enter Cluster Region"

}

variable "odfVersion" {

type = string
default = "4.15.0"
description = "Provide the ODF Version you wish to install on your cluster"

}

variable "numOfOsd" {

type = string
default = "1"
description = "Number of Osd"

}

variable "osdDevicePaths" {
@@ -48,7 +48,7 @@ variable "ocsUpgrade" {
default = "false"
description = "Set to true to upgrade Ocscluster"


}

variable "clusterEncryption" {
@@ -62,14 +62,14 @@ variable "billingType" {
type = string
default = "advanced"
description = "Choose between advanced and essentials"

}

variable "ignoreNoobaa" {
type = bool
default = false
description = "Set to true if you do not want MultiCloudGateway"

}

variable "osdSize" {
@@ -82,7 +82,7 @@ variable "osdStorageClassName" {
type = string
default = "ibmc-vpc-block-metro-10iops-tier"
description = "Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods."

}

variable "autoDiscoverDevices" {
@@ -98,7 +98,7 @@ variable "hpcsEncryption" {
type = string
default = "false"
description = "Set to true to enable HPCS Encryption"

}

variable "hpcsServiceName" {
@@ -115,6 +115,13 @@ variable "hpcsSecretName" {
description = "Please provide the HPCS secret name"
}

variable "workerPools" {

type = string
default = null
  description = "A list of the worker pool names where you want to deploy ODF. Specify either workerPools or workerNodes to deploy ODF; if neither is specified, ODF is deployed on all nodes"
}
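Like `workerNodes`, `workerPools` is a plain comma-separated string rather than an HCL list; a minimal sketch of how such a value might be set and consumed (pool names are illustrative):

```hcl
# In schematics.tfvars (illustrative pool names):
# workerPools = "default,storage-pool-1"

# The manifest then splits the string into a list, or passes null through:
locals {
  worker_pools_list = var.workerPools == null ? null : split(",", var.workerPools)
}
```

This mirrors the `split(",", var.workerPools)` expression used when building the OcsCluster manifest.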

variable "workerNodes" {

type = string
@@ -148,7 +155,7 @@ variable "encryptionInTransit" {
type = bool
default = false
description = "Enter true to enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one."

}

variable "taintNodes" {
@@ -204,4 +211,4 @@ variable "resourceProfile" {
default = "balanced"
description = "Provides an option to choose a resource profile based on the availability of resources during deployment. Choose between lean, balanced and performance."

}
@@ -108,7 +108,11 @@ In this example we set the `updateConfigRevision` parameter to true in order to
You could also use `updateAssignments` to directly update the storage configuration's assignments, but if you have a dependent `storage_assignment` resource, its lifecycle will be affected. It is recommended to use this parameter only when you've defined just the `storage_configuration` resource.

### Upgrade of ODF
**Step 1:**
Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to perform the worker upgrade.

**Step 2:**
Follow the steps below to upgrade ODF to the next version.
The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD.

* storageTemplateVersion - Specify the version you wish to upgrade to
@@ -115,6 +115,11 @@ You could also use `updateAssignments` to directly update the storage configurat

### Upgrade of ODF

**Step 1:**
Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to perform the worker upgrade.

**Step 2:**
Follow the steps below to upgrade ODF to the next version.
The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD.

* storageTemplateVersion - Specify the version you wish to upgrade to
@@ -40,7 +40,7 @@ https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform
The default `input.tfvars` is given below; the user should just change the values of the parameters in accordance with their requirements.

```hcl
# Common for both storage configuration and assignment
ibmcloud_api_key = ""
location = "" #Location of your storage configuration and assignment
configName = "" #Name of your storage configuration
@@ -66,6 +66,7 @@ ibmCosLocation = null
ignoreNoobaa = false
numOfOsd = "1"
ocsUpgrade = "false"
workerPools = null
workerNodes = null
encryptionInTransit = false
disableNoobaaLB = false
@@ -103,18 +104,25 @@ The following variables in the `input.tfvars` file can be edited

* numOfOsd - To scale your storage
* workerNodes - To increase the number of Worker Nodes with ODF
* workerPools - To increase the number of worker nodes with ODF by including new worker pools

```hcl
numOfOsd = "1" -> "2"
workerNodes = null -> "worker_1_ID,worker_2_ID"
updateConfigRevision = true
workerPools = "workerpool_1" -> "workerpool_1,workerpool_2"
```
In this example we set the `updateConfigRevision` parameter to true in order to update our storage assignment with the latest configuration revision, i.e. the OcsCluster CRD is updated with the latest changes.

You could also use `updateAssignments` to directly update the storage configuration's assignments, but if you have a dependent `storage_assignment` resource, its lifecycle will be affected. It is recommended to use this parameter only when you've defined just the `storage_configuration` resource.
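The two update paths can be sketched in `input.tfvars` as follows (values illustrative):

```hcl
# Preferred: bump the configuration revision so a dependent
# storage_assignment resource picks up the updated OcsCluster CRD.
updateConfigRevision = true

# Alternative: update the configuration's assignments directly.
# Use only when no separate storage_assignment resource is defined,
# since this alters that resource's lifecycle.
updateAssignments = false
```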

### Upgrade of ODF

**Step 1:**
Follow steps 1 through 7 of the [Satellite worker upgrade documentation](https://cloud.ibm.com/docs/satellite?topic=satellite-sat-storage-odf-update&interface=ui) to perform the worker upgrade.

**Step 2:**
Follow the steps below to upgrade ODF to the next version.
The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD.

* storageTemplateVersion - Specify the version you wish to upgrade to
@@ -170,7 +178,8 @@ Note this operation deletes the existing configuration and its respective assig
| ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false
| ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false
| osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or standard classic cluster | `string` | no | null
| workerPools | Provide the names/IDs of the worker pools on which to install ODF. Specify either workerPools or workerNodes to select storage nodes. If neither is specified, ODF is installed on all workers | `string` | no | null
| workerNodes | Provide the names of the worker nodes on which to install ODF. | `string` | no | null
| encryptionInTransit |To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one. | `bool` | no | false
| taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. | `bool` | no | false
| addSingleReplicaPool | Specify true to create a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability. | `bool` | no | false
@@ -185,5 +194,6 @@ Refer - https://cloud.ibm.com/docs/satellite?topic=satellite-storage-odf-local&i
## Note

* Users should only change the values of the variables within quotes; variables that are not set should be left at their default values.
* `workerPools` takes a string of comma-separated names of the worker pools you wish to enable ODF on. Specify either workerPools or workerNodes to select storage nodes. If neither is specified, ODF is installed on all workers.
* `workerNodes` takes a string containing comma separated values of the names of the worker nodes you wish to enable ODF on.
* During an ODF Storage Template update, it is recommended to delete all Terraform-related assignments beforehand, as their lifecycle will be affected; during the update, new storage assignments are recreated internally with new UUIDs.
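The comma-separated convention for both variables can be sketched as (pool and node names illustrative):

```hcl
# Both variables are single strings, not HCL lists.
workerPools = "storage-pool-1,storage-pool-2" # worker pool names
workerNodes = "worker-node-1,worker-node-2"   # individual worker node names
```

Leave either variable as `null` to fall back to the default behavior of installing ODF on all workers.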
@@ -2,7 +2,7 @@
## Please change according to your configuration ##


# Common for both storage configuration and assignment
ibmcloud_api_key = ""
location = "" #Location of your storage configuration and assignment
configName = "" #Name of your storage configuration
@@ -30,6 +30,7 @@ ibmCosLocation = null
ignoreNoobaa = false
numOfOsd = "1"
ocsUpgrade = "false"
workerPools = null
workerNodes = null
encryptionInTransit = false
disableNoobaaLB = false
@@ -56,4 +57,4 @@ updateConfigRevision = false
## NOTE ##
# The following variables will cause issues to your storage assignment lifecycle, so please use only with a storage configuration resource.
deleteAssignments = false
updateAssignments = false