
create a bigger size volume from existing data source #258

Closed

Conversation

Madhu-1
Collaborator

@Madhu-1 Madhu-1 commented Mar 13, 2019

If the newly requested volume is cloned from an existing snapshot, we need to check whether the requested volume size is greater than the snapshot/parent volume size; if it is, we need to resize the newly created volume to the requested size.

Signed-off-by: Madhu Rajanna madhupr007@gmail.com

rbd image info after resizing the volume:

  • parent vol info
rbd image 'pvc-21c9a16f-456c-11e9-9814-525400dd375f':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	id: 8eb0643c9869
	block_name_prefix: rbd_data.8eb0643c9869
	format: 2
	features: layering
	op_features: 
	flags: 
	create_timestamp: Wed Mar 13 08:43:53 2019

  • resized volume info
rbd image 'pvc-63f2f902-456d-11e9-9814-525400dd375f':
	size 10 GiB in 2560 objects
	order 22 (4 MiB objects)
	id: 8f0a2ae8944a
	block_name_prefix: rbd_data.8f0a2ae8944a
	format: 2
	features: layering
	op_features: 
	flags: 
	create_timestamp: Wed Mar 13 08:52:54 2019
	parent: rbd/pvc-21c9a16f-456c-11e9-9814-525400dd375f@csi-rbd-pvc-21c9a16f-456c-11e9-9814-525400dd375f-snap-289d7f24-456c-11e9-a34f-0242ac11000e
	overlap: 1 GiB
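
For illustration only, a minimal Go sketch of the clone-then-grow flow described above, shelling out to the `rbd` CLI (the helper, pool, and image names are hypothetical placeholders, not the actual ceph-csi code, which uses its own exec helpers and locking):

```go
package main

import (
	"fmt"
	"os/exec"
)

// MiB mirrors util.MiB in ceph-csi; VolSize in this PR is expressed in MiB.
const MiB int64 = 1024 * 1024

// createVolumeFromSnapshot clones an RBD image from a snapshot and, when the
// requested size is larger than the snapshot/parent image, grows the clone.
// snapSpec is a "pool/image@snap" source; error handling is kept minimal.
func createVolumeFromSnapshot(snapSpec, pool, volName string, reqSizeMiB, snapSizeBytes int64) error {
	// Step 1: clone the new image from the existing snapshot
	// (depending on the Ceph version, the snapshot may need to be protected first).
	clone := exec.Command("rbd", "clone", snapSpec, fmt.Sprintf("%s/%s", pool, volName))
	if out, err := clone.CombinedOutput(); err != nil {
		return fmt.Errorf("rbd clone failed: %v: %s", err, out)
	}

	// Step 2: if the requested size exceeds the snapshot size, grow the clone.
	// "rbd resize --size" takes the new size in MiB by default.
	if reqSizeMiB*MiB > snapSizeBytes {
		resize := exec.Command("rbd", "resize",
			"--size", fmt.Sprintf("%d", reqSizeMiB),
			fmt.Sprintf("%s/%s", pool, volName))
		if out, err := resize.CombinedOutput(); err != nil {
			return fmt.Errorf("rbd resize failed: %v: %s", err, out)
		}
	}
	return nil
}

func main() {
	// Example matching the logs above: a 1 GiB snapshot restored into a 10 GiB volume.
	// The image names here are placeholders.
	err := createVolumeFromSnapshot("rbd/pvc-parent@csi-snap", "rbd", "pvc-new", 10240, 1<<30)
	fmt.Println(err)
}
```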

If the newly requested volume is cloned
from an existing snapshot, we need to
check whether the requested volume size
is greater than the snapshot/parent volume;
if it is, we need to resize the newly
created volume to the requested size.

Fixes: ceph#257

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
@Madhu-1
Collaborator Author

Madhu-1 commented Mar 13, 2019

@rootfs @gman0 PTAL

@Madhu-1
Collaborator Author

Madhu-1 commented Mar 13, 2019

Fixes #257

@rootfs
Member

rootfs commented Mar 13, 2019

cc @j-griffith

@rootfs
Member

rootfs commented Mar 13, 2019

imo, the snap restore size is totally up to the storage backend, external provisioner does not need to revalidate the size. @xing-yang any input? thanks

@Madhu-1
Collaborator Author

Madhu-1 commented Mar 13, 2019

@j-griffith can you please review this PR?

@@ -240,6 +240,10 @@ func (cs *ControllerServer) checkSnapshot(req *csi.CreateVolumeRequest, rbdVol *
return status.Error(codes.NotFound, err.Error())
}

if (rbdVol.VolSize * util.MiB) < rbdSnap.SizeBytes {

xing-yang

This check is not necessary in the CSI driver because it is already handled by the external-provisioner:
https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L626

Collaborator Author

@xing-yang Thanks for the review. The intention of adding this check is that I am not sure whether every CO will do this validation, so I added it as a safeguard against that failure.

Contributor

@xing-yang is correct, it's not necessary in the Kubernetes context. The only thing I would mention, however, is that it is possible that at some point a different provisioner consumes the plugin. The check is redundant and unnecessary, but at the same time it doesn't really hurt either. Best case, it's never even hit; worst case, we have a different CSI provisioner some day and we miss this check somehow. Not a huge deal, just an observation.

@Madhu-1
Collaborator Author

Madhu-1 commented Mar 13, 2019

imo, the snap restore size is totally up to the storage backend, external provisioner does not need to revalidate the size. @xing-yang any input? thanks

@xing-yang I would be really happy to get your input on this one.

@xing-yang

xing-yang commented Mar 13, 2019

imo, the snap restore size is totally up to the storage backend, external provisioner does not need to revalidate the size. @xing-yang any input? thanks

@xing-yang I would be really happy to get your input on this one.

@Madhu-1 In the provisioner check, we did say that the CSI plugin needs to handle expansion if volume size is bigger than snapshot size. So it depends on whether the plugin supports resize or not.
https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L630

@gnufied can you take a look at the resize part?

@j-griffith
Contributor

j-griffith commented Mar 13, 2019

#258 (comment)
@Madhu-1 In the provisioner check, we did say that the CSI plugin needs to handle expansion if volume size is bigger than snapshot size. So it depends on whether the plugin supports resize or not.
https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L630

@gnufied can you take a look at the resize part?

I'd like to see what @gnufied says on this, but my opinion is that we should create the volume and then do the expansion for the user. It looks like we can do the expansion so we should just implement it that way.

@gnufied

gnufied commented Mar 13, 2019

@j-griffith Is it enough to resize the RBD volume? Is the volume's file system also expanded to accommodate the new size, or must that happen separately?

And gunfied ?? :D

@Madhu-1
Collaborator Author

Madhu-1 commented Mar 18, 2019

@gnufied this is a volume resize of a volume cloned from the snapshot, which is not mounted yet. As this is just PVC creation, it should be enough, I think.

@j-griffith any comments?

@humblec
Collaborator

humblec commented Mar 18, 2019

Let me summarize this effort:

First of all, this PR may be better titled "create a bigger size volume from existing data source" instead of "resize rbd volume".

What the code does here is this:

*) Clone/create a volume from an existing source with a bigger size; we get there in 2 steps:

  1. Create a cloned volume from the existing snapshot.
  2. Increase the size of the cloned volume to the requested size.

Hope this clears some dust. :) @xing-yang @gnufied @j-griffith

As a second thing, the question comes in: do we need "fs" expansion while we expand the volume?

My understanding is that if the volume is mounted and used in a pod, the expansion has to be performed at the FS level. This needs to be considered here, @Madhu-1.

@Madhu-1, from a code-review POV, the size check can live outside the snapshot function so it can be performed for any data source (volume or snapshot), and the rest of the code can fall in line.
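
A rough sketch of that suggestion, assuming a hypothetical data-source-agnostic helper (not the actual ceph-csi code): the same validation serves both snapshot and volume sources, rejecting shrink requests and reporting when a follow-up resize is needed.

```go
package main

import "fmt"

// validateRequestedSize is a hypothetical helper: it rejects requests smaller
// than the source (to avoid data loss) and reports whether the cloned volume
// must be grown afterwards. It applies to any data source, snapshot or volume.
func validateRequestedSize(requestedBytes, sourceBytes int64) (needResize bool, err error) {
	switch {
	case requestedBytes < sourceBytes:
		return false, fmt.Errorf("requested size %d is smaller than source size %d", requestedBytes, sourceBytes)
	case requestedBytes > sourceBytes:
		return true, nil
	default:
		return false, nil
	}
}

func main() {
	// 10 GiB requested from a 1 GiB source => resize needed after cloning.
	needResize, err := validateRequestedSize(10<<30, 1<<30)
	fmt.Println(needResize, err)
}
```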

@Madhu-1 Madhu-1 changed the title from "resize rbd volume" to "create a bigger size volume from existing data source" on Mar 19, 2019
if the requested volume size is less than
the parent volume we should reject the request
to avoid data loss.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
@Madhu-1
Collaborator Author

Madhu-1 commented Mar 21, 2019

@humblec updated the PR, PTAL

@humblec
Collaborator

humblec commented Mar 21, 2019

@Madhu-1 can you fix the Travis CI failure?

@gnufied

gnufied commented Mar 21, 2019

So, who does file system resizing in this case? The file system resizing still needs to happen and without that, expanded space won't be available to pod/workload. I don't see it being addressed.

Kubelet will not automatically expand the volume...
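
For context, a minimal sketch of what node-side filesystem expansion typically involves (hypothetical helper and paths; real node plugins usually call the Kubernetes mount/resize utilities rather than shelling out directly):

```go
package main

import (
	"fmt"
	"os/exec"
)

// growFilesystem expands the filesystem after the underlying block device has
// been grown; without this step the extra space is not visible to the workload.
func growFilesystem(fsType, devicePath, mountPath string) error {
	var cmd *exec.Cmd
	switch fsType {
	case "ext4":
		// resize2fs grows a mounted ext4 filesystem to fill the device.
		cmd = exec.Command("resize2fs", devicePath)
	case "xfs":
		// xfs_growfs operates on the mount point, not the device.
		cmd = exec.Command("xfs_growfs", mountPath)
	default:
		return fmt.Errorf("unsupported filesystem type %q", fsType)
	}
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("filesystem resize failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder device and mount paths, for illustration only.
	fmt.Println(growFilesystem("ext4", "/dev/rbd0", "/var/lib/kubelet/example-mount"))
}
```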

@humblec
Collaborator

humblec commented Mar 21, 2019

So, who does file system resizing in this case? The file system resizing still needs to happen and without that, expanded space won't be available to pod/workload. I don't see it being addressed.

@gnufied there are 2 choices: either we address both data sources (snapshot and volume) here, or we address just one, the snapshot data source. You are correct that we need fs expansion, but if we are only addressing the snapshot data source via this PR, we are still good without fs expansion, as the volume restored from the snapshot is floating around without being attached to a pod.

@gnufied

gnufied commented Mar 21, 2019

@humblec okay, but what is the point of expanding the volume here when it won't be expanded when used by a pod? I think we may have to handle this when a snapshot is restored to a larger volume. This can be done by setting pvc.Status.Capacity to the snapshot size (smaller value), pvc.Spec.Capacity to the user-requested size (larger value), and pv.Spec.Capacity to the user-requested value, assuming ceph returns the volume after expansion.

The above condition will automatically trigger expansion on the node. cc @xing-yang
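
To illustrate the capacity mismatch gnufied describes (this is not driver code; the field names are taken from the comment above): the resize machinery keys off the requested size in pvc.Spec/pv.Spec being larger than what pvc.Status.Capacity reports.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// pvc.Spec.Resources.Requests["storage"] and pv.Spec.Capacity["storage"]:
	// the user-requested (larger) size.
	requested := resource.MustParse("10Gi")
	// pvc.Status.Capacity["storage"]: the snapshot size (smaller value).
	reported := resource.MustParse("1Gi")

	// Spec larger than Status means an expansion is pending, which is what
	// triggers the node-side filesystem resize once the volume is mounted.
	if requested.Cmp(reported) > 0 {
		fmt.Println("expansion pending: node plugin will be asked to grow the filesystem")
	}
}
```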

@humblec humblec mentioned this pull request Oct 9, 2019
@humblec humblec mentioned this pull request Jan 22, 2020
@humblec humblec mentioned this pull request Mar 17, 2020
@nixpanic
Member

@Madhu-1 please remove this from the v2.1.0 project and set the correct milestone (if any).

@Madhu-1
Collaborator Author

Madhu-1 commented Jul 21, 2020

Closing this one as it's already addressed in #1244.

@Madhu-1 Madhu-1 closed this Jul 21, 2020
nixpanic pushed a commit to nixpanic/ceph-csi that referenced this pull request Mar 12, 2024
Syncing latest changes from devel for ceph-csi