This repository has been archived by the owner on Jun 29, 2022. It is now read-only.
Rook: Ceph OSDs consume up to 12 GB memory #1476
Labels:
- `area/storage`: Issues related to Storage components OpenEBS and Rook Ceph
- `kind/enhancement`: New feature or request
tl;dr: Provide a way for users to limit the memory and CPU of the Ceph sub-components such as OSD, MGR, MON, etc.
Right now there is no way for a user to specify memory limits on the OSD or any other sub-component of Rook. Since no resource limits are specified, the pod can use all the memory available on the host.
As you can see, the following OSD deployment has no Kubernetes resources (memory/CPU limits or requests) set, yet the env vars still try to reference them, so empty values are being referenced:
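The relevant fragment of the generated deployment looks roughly like this (an illustrative sketch, not the exact manifest from the issue; the `POD_*` env var names follow the convention Rook uses when wiring in the downward API):

```yaml
# Illustrative fragment of an OSD deployment spec.
# No resources.limits/requests are set on the container,
# yet the env vars reference them via the downward API.
containers:
- name: osd
  image: rook/ceph   # image tag omitted; illustrative
  env:
  - name: POD_MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
  - name: POD_CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
  # note: no "resources:" section with limits or requests
```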
One would expect the empty values to be populated inside the pod, but if you look at the env vars, all the limit values are capped at the host limits:
This host has 125 GB memory and 32 cores of CPU.
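This is documented Kubernetes behavior: when a container has no limit set, a downward API `resourceFieldRef` for `limits.memory` or `limits.cpu` falls back to the node's allocatable capacity. So inside the pod the vars resolve to roughly the following (values are a sketch derived from the host size above, not captured output):

```
POD_MEMORY_LIMIT=134217728000   # ~125 GiB, essentially the whole node
POD_CPU_LIMIT=32                # all 32 cores
```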
The above automatically reflects in the OSD config:
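For illustration, one way to inspect this on a running OSD is via Ceph's config subsystem (the value shown is a sketch proportional to host memory, not captured from the issue):

```
$ ceph config show osd.0 osd_memory_target
# prints a value in the tens of GB, derived from the
# uncapped pod memory rather than a sensible per-OSD budget
```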
`osd_memory_target` is the amount of memory that the OSD is allowed to expand up to, and here it is set proportional to the host memory limit.
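One possible shape for this enhancement: let the user set per-component requests/limits in the CephCluster CR, which Rook would then propagate into the generated deployments. The field layout below is a sketch, assuming the cluster CRD grows a `resources` section keyed by component:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # Hypothetical per-component resource section
  resources:
    osd:
      requests:
        memory: "2Gi"
        cpu: "1"
      limits:
        memory: "4Gi"
        cpu: "2"
    mgr:
      limits:
        memory: "1Gi"
    mon:
      limits:
        memory: "1Gi"
```

With a real memory limit in place, the downward API env vars would resolve to the pod limit instead of the node capacity, and `osd_memory_target` would be derived from that limit rather than from the full 125 GB of host memory.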