
Add Block volume support for CSI provisioner #135

Merged
merged 1 commit into from
Nov 26, 2018

Conversation

mkimuram
Contributor

The following three changes are implemented to add Block volume support to the CSI provisioner:

  1. Add SupportsBlock, which properly returns whether the storage
     provider's plugin supports block volumes (checked
     by using ValidateVolumeCapabilities),
  2. Pass BlockVolume instead of MountVolume to CreateVolume
     if volumeMode is set to Block on Provision,
  3. Set volumeMode on the PV returned by Provision.

Also, the following four test cases for TestSupportsBlock and two test cases for TestProvision
are added.

TestSupportsBlock:

  1. ValidateVolumeCapabilities returns (true, nil): expected to return true
  2. ValidateVolumeCapabilities returns (false, nil): expected to return false
  3. ValidateVolumeCapabilities returns (true, err): expected to return false
  4. ValidateVolumeCapabilities returns (false, err): expected to return false

TestProvision:

  1. volumeMode=Filesystem PVC case: return Filesystem PV expected
  2. volumeMode=Block PVC case: return Block PV expected

Fixes #110
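Step 2 above (choosing the capability's access type from the PVC's volumeMode) can be sketched roughly as follows. This is a simplified illustration with local stand-in types, not the PR's actual code; the real implementation builds `csi.VolumeCapability_Block` / `csi.VolumeCapability_Mount` values from the CSI spec's generated Go package.

```go
package main

import "fmt"

// AccessType stands in for the CSI VolumeCapability access-type oneof
// (BlockVolume vs. MountVolume in the real csi spec types).
type AccessType string

const (
	MountVolume AccessType = "MountVolume"
	BlockVolume AccessType = "BlockVolume"
)

// accessTypeForPVC mirrors logic step 2: a PVC with volumeMode=Block
// gets a BlockVolume capability, anything else keeps MountVolume.
func accessTypeForPVC(volumeMode string) AccessType {
	if volumeMode == "Block" {
		return BlockVolume
	}
	// Filesystem (or unset) preserves the existing mount behavior.
	return MountVolume
}

func main() {
	fmt.Println(accessTypeForPVC("Block"))      // BlockVolume
	fmt.Println(accessTypeForPVC("Filesystem")) // MountVolume
}
```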

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 13, 2018
@mkimuram
Contributor Author

@wongma7 @vladimirvivien

This PR is to add Block volume support for CSI provisioner. PTAL

client := csi.NewControllerClient(p.grpcClient)
req := csi.ValidateVolumeCapabilitiesRequest{
    VolumeCapabilities: []*csi.VolumeCapability{
        volumeCapabilityMountReadWriteOnce,
Contributor

We don't seem to use this function anywhere. Also, I think it should be volumeCapabilityBlockReadWriteOnce

Contributor Author

@wongma7

also I think it should be volumeCapabilityBlockReadWriteOnce

Thank you for pointing it out. It's a typo. I fixed it and modified the unit test to detect this error.

we don't seem to use this function anywhere.

SupportsBlock is called from the library, as we added in a commit a while ago.

Contributor

Oh yeah, forgot already, my bad :p 👍


controllerServer.EXPECT().ValidateVolumeCapabilities(gomock.Any(), gomock.Any()).Return(&tc.volCapResp, tc.volCapErr).Times(1)

if csiBlockProvisioner, ok := csiProvisioner.(controller.BlockProvisioner); ok {
Contributor

We can skip this check if we add a line like
var _ controller.BlockProvisioner = &csiProvisioner{}
to controller.go, to catch the problem at compile time.
Keeping both is okay too
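The suggested `var _ controller.BlockProvisioner = &csiProvisioner{}` line uses Go's blank-identifier assignment to assert at compile time that the type satisfies the interface. A minimal illustration of the pattern, using hypothetical local types rather than the actual provisioner code:

```go
package main

import "fmt"

// BlockProvisioner is a hypothetical stand-in for
// controller.BlockProvisioner from the provisioner library.
type BlockProvisioner interface {
	SupportsBlock() bool
}

type csiProvisioner struct{}

func (p *csiProvisioner) SupportsBlock() bool { return true }

// Compile-time check: if *csiProvisioner ever stops implementing
// BlockProvisioner, this assignment fails to build. It costs
// nothing at runtime because the value is discarded.
var _ BlockProvisioner = &csiProvisioner{}

func main() {
	var p BlockProvisioner = &csiProvisioner{}
	fmt.Println(p.SupportsBlock()) // true
}
```

The runtime type assertion (`csiProvisioner.(controller.BlockProvisioner)`) and this compile-time check are complementary, which is why keeping both is reasonable.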

Contributor Author

Thank you for your advice. I will add it and keep both of them.

@mkimuram
Contributor Author

@wongma7

I fixed as suggested. PTAL

Member

@vladimirvivien vladimirvivien left a comment

Thanks for the great effort here. However, there are a couple of issues that need addressing before this can work.

},
AccessType: accessType,
})
if util.CheckPersistentVolumeClaimModeBlock(options.PVC) {
Member

It looks like the if-block has statements redundant with the else-block.

VolumeCapabilities: []*csi.VolumeCapability{
volumeCapabilityBlockReadWriteOnce,
},
}
Member

The call to ValidateVolumeCapabilities requires a volume ID to specify the volume for which the capability query is being made. https://github.com/container-storage-interface/spec/blob/master/spec.md#validatevolumecapabilities

Contributor Author

Thank you for pointing this out.

The intention of SupportsBlock is to check whether the driver supports the block volume feature before Provision is called, so that the provisioner can skip Provision if the driver doesn't support it. (Drivers like Ceph, NFS, and hostpath that don't provide block volumes would need this.) Therefore, if ValidateVolumeCapabilitiesRequest requires a volume ID, it can't be used here, because the existence of a volume ID requires the existence of a volume.

So, there are three alternatives:

  1. Just return true for SupportsBlock in all drivers and make Provision fail for drivers that don't support block volumes.
  2. Change the CSI spec to allow ValidateVolumeCapabilitiesRequest without a volumeID, and call ValidateVolumeCapabilities here.
  3. Add another method to the CSI spec for this use case, and call the new method here.

Any thoughts?

Contributor

I like alternative 1. The CSI spec doesn't let you query abstract capabilities, and alternatives 2 and 3 require spec changes which are unlikely to happen before the promotion to v1.

Contributor Author

@bswartz

Thank you for your comment. I will rebase the implementation for alternative 1 and push here, if @vladimirvivien also agrees.

Contributor

When you rebase, you'll have a conflict with the mount options work I did. I can help fix those conflicts if you like. I'm currently testing this PR in my lab to see if I can get it to work.

@vladimirvivien
Member

@mkimuram

Below three logics are implemented to add Block volume support to CSI provisioner:

Add SupportsBlock that properly returns whether Storage Provider's plugin supports block (this is checked
by using ValidateVolumeCapabilities)

Keep in mind that the ValidateVolumeCapabilities call is for a volume's capabilities, not the driver's capabilities. So that call must include a volume ID.

@mkimuram
Contributor Author

@vladimirvivien

The call to ValidateVolumeCapabilities requires a volume ID to specify the volume for which the capability query is being made.
So, there are three alternatives:

  1. Just return true for SupportsBlock in all drivers and make Provision fail for drivers that don't support block volumes.

I implemented the alternative 1 above in another commit in a separate branch. PTAL

I think that unsupported AccessMode cases are in the same situation as this unsupported AccessType case, so this implementation should still be valid. If the commit makes sense, I will close this PR and open another PR for it.

@mkimuram
Contributor Author

mkimuram commented Nov 2, 2018

@vladimirvivien @bswartz

I rebased alternative 1 and pushed to this PR. PTAL

@bswartz
Contributor

bswartz commented Nov 6, 2018

I downloaded this patch and tested it with my CSI driver, and it worked.

@bswartz
Contributor

bswartz commented Nov 6, 2018

/lgtm

@k8s-ci-robot
Contributor

@bswartz: changing LGTM is restricted to assignees, and only kubernetes-csi/external-provisioner repo collaborators may be assigned issues.

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mkimuram
Contributor Author

mkimuram commented Nov 6, 2018

@bswartz

Thank you for your testing and sharing the results.

@mkimuram
Contributor Author

@bswartz @lpabon @saad-ali

I rebased the commit. PTAL

Now that kubernetes/kubernetes#68635 is merged, we can use the k8s master branch for testing. On the other hand, since the csi external-provisioner code has been updated for the CSI v1.0.0 spec, CSI driver code also needs to be updated to meet the v1.0.0 spec.

I tested with the combination below and it works fine:

  • k8s (master branch)
  • csi-external-provisioner (This PR)
  • csi-attacher and csi-registrar (Versions used in csi e2e tests)
  • csi rbd driver (Only csi rbd driver is updated with a quick hack for testing)

@vladimirvivien
Member

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 26, 2018
In the CSI provisioner, the following three changes need to be implemented
to add Block volume support:

  1. Add SupportsBlock that always returns true,
  2. Pass BlockVolume instead of MountVolume to CreateVolume
     if volumeMode is set to Block on Provision,
  3. Set volumeMode and skip adding FsType to the PV returned by Provision,
     if volumeMode is set to Block on Provision.

Also, the following two test cases for TestProvision are added:
  1. volumeMode=Filesystem PVC case: return Filesystem PV expected
  2. volumeMode=Block PVC case: return Block PV expected

Fixes kubernetes-csi#110
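The commit message's three steps can be sketched as follows. This is a simplified illustration with local stand-in types and a hypothetical `provision` helper, not the merged code; the real PR fills in a Kubernetes PersistentVolume with a CSI volume source.

```go
package main

import "fmt"

// VolumeMode stands in for the Kubernetes PersistentVolumeMode values.
type VolumeMode string

const (
	Filesystem VolumeMode = "Filesystem"
	Block      VolumeMode = "Block"
)

// PV is a simplified stand-in for the provisioned PersistentVolume.
type PV struct {
	VolumeMode VolumeMode
	FsType     string // left empty for raw block volumes
}

type csiProvisioner struct{}

// Step 1: SupportsBlock always returns true; drivers that cannot
// provision block volumes will instead fail later, in CreateVolume.
func (p *csiProvisioner) SupportsBlock() bool { return true }

// Step 3: set volumeMode on the returned PV, and skip FsType
// when the requested mode is Block.
func (p *csiProvisioner) provision(mode VolumeMode, fsType string) PV {
	pv := PV{VolumeMode: mode}
	if mode != Block {
		pv.FsType = fsType
	}
	return pv
}

func main() {
	p := &csiProvisioner{}
	fmt.Println(p.SupportsBlock()) // true
	fmt.Printf("%+v\n", p.provision(Block, "ext4"))
	fmt.Printf("%+v\n", p.provision(Filesystem, "ext4"))
}
```

The always-true SupportsBlock is alternative 1 from the earlier discussion: since ValidateVolumeCapabilities requires a volume ID, per-driver block support cannot be queried before a volume exists, so the failure is deferred to provisioning time.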
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 26, 2018
@mkimuram
Contributor Author

mkimuram commented Nov 26, 2018

@vladimirvivien

Thank you for your review. I saw a merge conflict and just updated the commit.
PTAL again.

Note that the conflict was just a change in the import header, fixed as below:

- "github.com/kubernetes-incubator/external-storage/lib/controller"
- "github.com/kubernetes-incubator/external-storage/lib/util"
+ "github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller"
+ "github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/util"

So, no logic change there.

@vladimirvivien
Member

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 26, 2018
@vladimirvivien
Member

@lpabon @saad-ali @jsafrane please take a look for approval.

@saad-ali
Member

/lgtm
/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mkimuram, saad-ali, vladimirvivien

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 26, 2018
@k8s-ci-robot k8s-ci-robot merged commit 3827f80 into kubernetes-csi:master Nov 26, 2018
@saad-ali
Member

Cherry-pick to the release-1.0 branch so that it can be part of the next 1.0.x release

kbsonlong pushed a commit to kbsonlong/external-provisioner that referenced this pull request Dec 29, 2023
Use leader election library from csi-lib-utils