rook-ceph: Allow user to provide sub-component resources
This commit adds a new `resources` parameter for setting CPU and memory
requests and limits on the sub-components.

Signed-off-by: Suraj Deshmukh <suraj@kinvolk.io>
surajssd committed Jun 17, 2021
1 parent 330ce9e commit aafdcca
Showing 5 changed files with 266 additions and 7 deletions.
41 changes: 34 additions & 7 deletions docs/configuration-reference/components/rook-ceph.md
@@ -50,8 +50,28 @@ component "rook-ceph" {
    default        = true
    reclaim_policy = "Delete"
  }

  resources {
    osd {
      requests {
        cpu    = "5"
        memory = "5Gi"
      }
      limits {
        cpu    = "5"
        memory = "5Gi"
      }
    }
    mon {}
    mgr {}
    mds {}
    prepareosd {}
    crashcollector {}
    mgr_sidecar {}
  }
}
```

The Ceph cluster needs to be deployed in the same namespace as the Rook operator at the moment.
@@ -65,14 +85,21 @@ Table of all the arguments accepted by the component.
| Argument | Description | Default | Type | Required |
|--------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------:|:---------------------------------------------------------------------------------------------------------------|:--------:|
| `namespace` | Namespace to deploy the Ceph cluster into. Must be the same as the rook operator. | "rook" | string | false |
| `monitor_count`                | Number of Ceph monitors to deploy. An odd number like 3 or 5 is recommended, which should also be sufficient for most cases.                                                                                                                                                                                                   | 1        | number                                                                                                           | false    |
| `enable_toolbox` | Deploy the [toolbox pod](https://rook.io/docs/rook/master/ceph-toolbox.html) to debug and manage the Ceph cluster. | false | bool | false |
| `node_affinity` | Node affinity for deploying the Ceph cluster pods. | - | list(object({key = string, operator = string, values = list(string)})) | false |
| `toleration` | Tolerations that the Ceph cluster pods will tolerate. | - | list(object({key = string, effect = string, operator = string, value = string, toleration_seconds = string })) | false |
| `metadata_device` | Name of the device to store the metadata on each storage machine. **Note**: Provide just the name of the device and skip prefixing with `/dev/`. | - | string | false |
| `storage_class.enable` | Install Storage Class config. | false | bool | false |
| `storage_class.default` | Make this Storage Class as a default one. | false | bool | false |
| `storage_class.reclaim_policy` | Persistent volumes created with this storage class will have this reclaim policy. This field decides what happens to the volume after a user deletes a PVC. Valid values: `Retain`, `Recycle` and `Delete`. Read more in the [Kubernetes docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming). | `Retain` | string | false |
| `resources.osd`                | Resource requests and/or limits for OSDs. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                                        | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |
| `resources.mon`                | Resource requests and/or limits for MONs. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                                        | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |
| `resources.mgr`                | Resource requests and/or limits for MGRs. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                                        | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |
| `resources.mds`                | Resource requests and/or limits for the MDS. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                                     | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |
| `resources.prepareosd`         | Resource requests and/or limits for the OSD prepare job. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                         | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |
| `resources.crashcollector`     | Resource requests and/or limits for the crash collector. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                         | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |
| `resources.mgr_sidecar`        | Resource requests and/or limits for the MGR sidecar. Read [this doc](https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html#cluster-wide-resources-configuration-settings) to find out the minimum prescribed resource requirements.                                                                                             | -        | object(requests {cpu = string, memory = string}, limits {cpu = string, memory = string})                         | false    |

## Applying

63 changes: 63 additions & 0 deletions pkg/components/rook-ceph/component.go
@@ -41,6 +41,27 @@ type component struct {
	TolerationsRaw string
	StorageClass   *StorageClass `hcl:"storage_class,block"`
	EnableToolbox  bool          `hcl:"enable_toolbox,optional"`

	Resources *Resources `hcl:"resources,block"`
}

// Resources allows the user to specify resource requests and limits for the
// rook-ceph sub-components. The *Raw fields hold the rendered fragments that
// are injected into the manifest template.
type Resources struct {
	MON               *util.ResourceRequirements `hcl:"mon,block"`
	MONRaw            string
	MGR               *util.ResourceRequirements `hcl:"mgr,block"`
	MGRRaw            string
	OSD               *util.ResourceRequirements `hcl:"osd,block"`
	OSDRaw            string
	MDS               *util.ResourceRequirements `hcl:"mds,block"`
	MDSRaw            string
	PrepareOSD        *util.ResourceRequirements `hcl:"prepareosd,block"`
	PrepareOSDRaw     string
	CrashCollector    *util.ResourceRequirements `hcl:"crashcollector,block"`
	CrashCollectorRaw string
	MGRSidecar        *util.ResourceRequirements `hcl:"mgr_sidecar,block"`
	MGRSidecarRaw     string
}
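The `util.ResourceRequirements` type and the `util.RenderResourceRequirements` helper used below are not part of this diff. The following is a minimal sketch of what they plausibly look like, assuming the `requests`/`limits` block shape from the HCL example and JSON-fragment output; the `ResourceList` name and all field names here are assumptions, not the real `pkg/util` API:

```go
package util

import (
	"encoding/json"
	"fmt"
)

// ResourceList mirrors the cpu/memory pairs accepted in the HCL blocks.
// Hypothetical: the real type may differ.
type ResourceList struct {
	CPU    string `hcl:"cpu,optional" json:"cpu,omitempty"`
	Memory string `hcl:"memory,optional" json:"memory,omitempty"`
}

// ResourceRequirements groups the optional requests and limits blocks.
type ResourceRequirements struct {
	Requests *ResourceList `hcl:"requests,block" json:"requests,omitempty"`
	Limits   *ResourceList `hcl:"limits,block" json:"limits,omitempty"`
}

// RenderResourceRequirements serializes r into a JSON fragment that can be
// inlined into the YAML manifest template (JSON is valid YAML). A nil r
// renders as an empty object so the template stays well-formed.
func RenderResourceRequirements(r *ResourceRequirements) (string, error) {
	if r == nil {
		return "{}", nil
	}

	out, err := json.Marshal(r)
	if err != nil {
		return "", fmt.Errorf("marshaling resource requirements: %w", err)
	}

	return string(out), nil
}
```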

// StorageClass holds the configuration to enable the storage class or make it the default one.
Expand Down Expand Up @@ -69,6 +90,44 @@ func (c *component) LoadConfig(configBody *hcl.Body, evalContext *hcl.EvalContex
	return gohcl.DecodeBody(*configBody, evalContext, c)
}

func (c *component) addResourceRequirements() error {
	if c.Resources == nil {
		return nil
	}

	var err error

	if c.Resources.MONRaw, err = util.RenderResourceRequirements(c.Resources.MON); err != nil {
		return fmt.Errorf("rendering resources.mon: %w", err)
	}

	if c.Resources.MGRRaw, err = util.RenderResourceRequirements(c.Resources.MGR); err != nil {
		return fmt.Errorf("rendering resources.mgr: %w", err)
	}

	if c.Resources.OSDRaw, err = util.RenderResourceRequirements(c.Resources.OSD); err != nil {
		return fmt.Errorf("rendering resources.osd: %w", err)
	}

	if c.Resources.MDSRaw, err = util.RenderResourceRequirements(c.Resources.MDS); err != nil {
		return fmt.Errorf("rendering resources.mds: %w", err)
	}

	if c.Resources.PrepareOSDRaw, err = util.RenderResourceRequirements(c.Resources.PrepareOSD); err != nil {
		return fmt.Errorf("rendering resources.prepareosd: %w", err)
	}

	if c.Resources.CrashCollectorRaw, err = util.RenderResourceRequirements(c.Resources.CrashCollector); err != nil {
		return fmt.Errorf("rendering resources.crashcollector: %w", err)
	}

	if c.Resources.MGRSidecarRaw, err = util.RenderResourceRequirements(c.Resources.MGRSidecar); err != nil {
		return fmt.Errorf("rendering resources.mgr_sidecar: %w", err)
	}

	return nil
}
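For illustration, here is how the helper above would behave under the hypothetical `util` sketch given earlier; this example is not part of the commit:

```go
func ExampleResources() {
	c := &component{
		Resources: &Resources{
			OSD: &util.ResourceRequirements{
				Requests: &util.ResourceList{CPU: "5", Memory: "5Gi"},
			},
		},
	}

	if err := c.addResourceRequirements(); err != nil {
		panic(err)
	}

	fmt.Println(c.Resources.OSDRaw)
	// Prints (assuming JSON rendering): {"requests":{"cpu":"5","memory":"5Gi"}}
}
```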

// TODO: Convert to Helm chart.
func (c *component) RenderManifests() (map[string]string, error) {
	// Generate YAML for Ceph cluster.
@@ -78,6 +137,10 @@ func (c *component) RenderManifests() (map[string]string, error) {
		return nil, fmt.Errorf("rendering tolerations: %w", err)
	}

	if err := c.addResourceRequirements(); err != nil {
		return nil, fmt.Errorf("rendering resources field: %w", err)
	}

	ret := make(map[string]string)

	// Parse template with values
133 changes: 133 additions & 0 deletions pkg/components/rook-ceph/component_test.go
@@ -123,6 +123,139 @@ func TestConversion(t *testing.T) {
expected: "Delete",
fn: testutil.MatchJSONPathStringValue,
},
{
name: "resources_osd_requests_cpu",
inputConfig: `component "rook-ceph" {
resources {
osd {
requests {
cpu = "5"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.osd}",
expected: "{\"requests\":{\"cpu\":\"5\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
{
name: "resources_mgr_requests_memory",
inputConfig: `component "rook-ceph" {
resources {
mgr {
requests {
memory = "10Gi"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.mgr}",
expected: "{\"requests\":{\"memory\":\"10Gi\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
{
name: "resources_mon_limits_cpu",
inputConfig: `component "rook-ceph" {
resources {
mon {
limits {
cpu = "3"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.mon}",
expected: "{\"limits\":{\"cpu\":\"3\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
{
name: "resources_mds_limits_memory",
inputConfig: `component "rook-ceph" {
resources {
mds {
limits {
memory = "2Gi"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.mds}",
expected: "{\"limits\":{\"memory\":\"2Gi\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
{
name: "resources_prepareosd_requests",
inputConfig: `component "rook-ceph" {
resources {
prepareosd {
requests {
memory = "9Gi"
cpu = "9"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.prepareosd}",
expected: "{\"requests\":{\"cpu\":\"9\",\"memory\":\"9Gi\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
{
name: "resources_crashcollector_limits",
inputConfig: `component "rook-ceph" {
resources {
crashcollector {
limits {
cpu = "7"
memory = "7Gi"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.crashcollector}",
expected: "{\"limits\":{\"cpu\":\"7\",\"memory\":\"7Gi\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
{
name: "resources_mgr_sidecar_all",
inputConfig: `component "rook-ceph" {
resources {
mgr_sidecar {
limits {
cpu = "6"
}
requests {
memory = "1Gi"
cpu = "1"
}
}
}
}`,
expectedManifestName: k8sutil.ObjectMetadata{
Version: "ceph.rook.io/v1", Kind: "CephCluster", Name: "rook-ceph",
},
jsonPath: "{.spec.resources.mgr-sidecar}",
expected: "{\"limits\":{\"cpu\":\"6\"},\"requests\":{\"cpu\":\"1\",\"memory\":\"1Gi\"}}",
fn: testutil.MatchJSONPathJSONValue,
},
}

for _, tc := range testCases {
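These table-driven cases run with the standard Go tooling, e.g. `go test ./pkg/components/rook-ceph/ -run TestConversion`.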
9 changes: 9 additions & 0 deletions pkg/components/rook-ceph/manifests.go
@@ -75,7 +75,16 @@ spec:
      tolerations: {{ .TolerationsRaw }}
    {{- end }}
  annotations:
{{- if .Resources }}
  resources:
    mon: {{ .Resources.MONRaw }}
    mgr: {{ .Resources.MGRRaw }}
    osd: {{ .Resources.OSDRaw }}
    mds: {{ .Resources.MDSRaw }}
    prepareosd: {{ .Resources.PrepareOSDRaw }}
    crashcollector: {{ .Resources.CrashCollectorRaw }}
    mgr-sidecar: {{ .Resources.MGRSidecarRaw }}
{{- end }}
  removeOSDsIfOutAndSafeToRemove: false
  storage: # cluster level storage configuration and selection
    useAllNodes: true
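Putting it together: with the docs example above (only `osd` populated), the `resources` section of the rendered `CephCluster` manifest would look roughly like the snippet below, assuming each `*Raw` field holds a JSON fragment and an empty sub-block renders as `{}`. JSON is a subset of YAML, so the fragments inline as flow-style values. This is illustrative only; the exact output depends on the real `util` helpers.

```go
// Hypothetical rendered output for the docs example configuration.
const renderedResourcesSketch = `
  resources:
    mon: {}
    mgr: {}
    osd: {"requests":{"cpu":"5","memory":"5Gi"},"limits":{"cpu":"5","memory":"5Gi"}}
    mds: {}
    prepareosd: {}
    crashcollector: {}
    mgr-sidecar: {}
`
```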