permission denied error even after using correct username and key with cephfs #2848
Comments
The problem is with PVC creation, not with mounting. Please make sure the caps are as per the requirements: https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md#cephfs
Thanks for the response. We have the correct user permissions; please check below:
ceph:~ # ceph auth get client.csi-cephfs-node
It should be like below:
ceph auth get client.csi-cephfs-node
[client.csi-cephfs-node]
key = AQCko/dhkjfqEBAAxsbULx1aQl/g7RY9HuNsMA==
caps mds = "allow rw"
caps mgr = "allow rw"
caps mon = "allow r"
caps osd = "allow rw tag cephfs *=*"
exported keyring for client.csi-cephfs-node
those * are being removed when pasting the output here
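(For reference, a minimal sketch of creating a user with exactly the caps quoted above; the client name follows this thread, and get-or-create is idempotent, returning the existing key if the user already exists:)

# Sketch: create (or fetch) a CephFS CSI user with the caps quoted above.
ceph auth get-or-create client.csi-cephfs-node \
  mds 'allow rw' \
  mgr 'allow rw' \
  mon 'allow r' \
  osd 'allow rw tag cephfs *=*'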
@kishore438 What is the Ceph version you are using?
@Madhu-1
I will check whether the above user caps have permission to create volumes. Can you try with the below caps?
Is this similar to #904?
Created a client:
ceph:~ # ceph auth get client.kubernetes
Created a secret (required for dynamically provisioned volumes):
userID: kubernetes
adminID: kubernetes
cephfs-test-stage:/tmp/cephrwx/cephfs/ceph-csi-extras # kubectl delete secret csi-cephfs-secret -n ceph-provisioner
secret "csi-cephfs-secret" deleted
cephfs-test-stage:/tmp/cephrwx/cephfs/ceph-csi-extras # kubectl get pvc
I deleted the PVC and StorageClass and recreated them, but still no luck. I am clueless.
@kishore438 I have tested this one with a different Ceph version (it should not matter) with the below caps.
This is the Rook CSI troubleshooting guide; see if it helps: https://github.com/rook/rook/blob/master/Documentation/ceph-csi-troubleshooting.md. What is the cephcsi version you are using?
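(One hedged way to answer the version question from the cluster itself is to list the container images of the pods; the namespace is taken from later in this thread, and the cephcsi version is typically encoded in the quay.io/cephcsi/cephcsi image tag:)

# Sketch: print each pod's name and container images in the ceph-provisioner namespace.
kubectl -n ceph-provisioner get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'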
I see "access_mode":{"mode":7} in your logs. Does it make any difference?? How can It be updated?? |
That is the PVC access type; it's an RWX PVC. It's not a problem.
If possible, could you share your storageclass.yaml, values.yaml, and secret (with the password removed) YAML files?
Is there any way we can troubleshoot from the Ceph side? Is there any tool or command I can execute with a username/password to create a PVC?
Hi again. After troubleshooting, we found that the permission denied error comes only from the pod. If the same command is run from the host where minikube is running, it works.
From the host VM:
cephfs-test-stage:/tmp/cephrwx/cephfs/ceph-csi-extras # ceph -m 172.16.0.1:3300,172.16.0.2:3300,172.16.0.3:3300 --id csi-cephfs-node --key AQCYT/pheeUIJBAAqfczLvmpp0wisx0+Jp+keg== fs ls --format=json
[{"name":"shared_fs_storage","metadata_pool":"shared_fs_storage_metadata","metadata_pool_id":12,"data_pool_ids":[11],"data_pools":["shared_fs_storage_data"]}]
From the pod:
Do you know if we have to do any configuration change on the Ceph side?
Finally found the issue. Ceph is dropping the authentication due to an issue with insecure global_id reclaim:
"attempt to reclaim global_id 155276 without presenting ticket"
To fix this, we either need to update the Ceph client versions (e.g. to 15.2.11) or allow insecure connections by enabling "auth_allow_insecure_global_id_reclaim" in Ceph (which is not recommended).
Thank you @Madhu-1 for your support on this issue.
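(For anyone hitting the same thing: the preferred fix is upgrading the clients. The not-recommended workaround mentioned above can be applied roughly as follows; this is a sketch based on the standard Ceph commands, not something run in this thread:)

# Check whether the monitors are rejecting insecure global_id reclaim;
# affected clusters report AUTH_INSECURE_GLOBAL_ID_RECLAIM* health warnings.
ceph health detail

# Not recommended: allow old clients to reconnect without presenting a ticket.
# Prefer upgrading the clients (e.g. to 15.2.11 or later), then disable this again.
ceph config set mon auth_allow_insecure_global_id_reclaim true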
We are using the CephFS CSI plugin in Kubernetes for PVC provisioning.
I am facing a permission denied error even after providing the correct username/password.
Below are the logs. Any leads would be greatly appreciated.
I0202 11:42:30.982150 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc-rwx", UID:"1006227d-c084-4634-af01-72a992d5d693", APIVersion:"v1", ResourceVersion:"187182", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-pvc-rwx"
I0202 11:42:30.985064 1 connection.go:182] GRPC call: /csi.v1.Controller/CreateVolume
I0202 11:42:30.985273 1 connection.go:183] GRPC request: {"capacity_range":{"required_bytes":20971520},"name":"pvc-1006227d-c084-4634-af01-72a992d5d693","parameters":{"clusterID":"eab5fb46-7eb5-11ec-91ab-d4f5ef46983c","fsName":"shared_fs_storage","volumeNamePrefix":"cs-vol-project-"},"secrets":"stripped","volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["debug"]}},"access_mode":{"mode":5}}]}
I0202 11:42:30.998809 1 connection.go:185] GRPC response: {}
I0202 11:42:30.998902 1 connection.go:186] GRPC error: rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-13, Permission denied
I0202 11:42:30.998924 1 controller.go:645] CreateVolume failed, supports topology = false, node selected false => may reschedule = false => state = Finished: rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-13, Permission denied
I0202 11:42:30.998985 1 controller.go:1084] Final error received, removing PVC 1006227d-c084-4634-af01-72a992d5d693 from claims in progress
W0202 11:42:30.998996 1 controller.go:943] Retrying syncing claim "1006227d-c084-4634-af01-72a992d5d693", failure 0
I0202 11:42:30.999260 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc-rwx", UID:"1006227d-c084-4634-af01-72a992d5d693", APIVersion:"v1", ResourceVersion:"187182", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "csi-cephfs-sc": rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-13, Permission denied
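(For context, a hedged reconstruction of the PVC behind these events, with the name, namespace, 20971520-byte/20Mi size, and RWX access mode read from the CreateVolume request above; apply with kubectl:)

# Sketch: the PVC whose provisioning fails in the log above.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-rwx
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
  storageClassName: csi-cephfs-sc
EOF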
Authorization for user csi-cephfs-node:
ceph:~ # ceph auth get client.csi-cephfs-node
exported keyring for client.csi-cephfs-node
[client.csi-cephfs-node]
key = AQCYT/pheeUIJBAAqfczLvmpp0wisx0+Jp+keg==
caps mds = "allow rw"
caps mgr = "allow rw"
caps mon = "allow r"
caps osd = "allow rw tag cephfs ="
ceph:~ #
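(The osd cap above is rendered with its asterisks restored; a comment in this thread notes they were stripped when pasting. If the cap really had lost them on the cluster, it could be corrected in place. Note that ceph auth caps replaces all caps, so every cap must be restated. A sketch:)

# Sketch: restate all caps for the user, fixing the osd cap.
ceph auth caps client.csi-cephfs-node \
  mds 'allow rw' \
  mgr 'allow rw' \
  mon 'allow r' \
  osd 'allow rw tag cephfs *=*'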
Mount is successful with the same user:
cephfs-test-stage:/tmp/cephrwx/ceph-csi/charts/ceph-csi-cephfs # mount -t ceph 172.16.0.1:6789:/ /mnt/ceph -o name=csi-cephfs-node,secretfile=test.key --verbose
parsing options: rw,name=csi-cephfs-node,secretfile=test.key
cephfs-test-stage:/tmp/cephrwx/ceph-csi/charts/ceph-csi-cephfs # ls /mnt/ceph/
flexilab_clients
cephfs-test-stage:/tmp/cephrwx/ceph-csi/charts/ceph-csi-cephfs #
Storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: eab5fb46-7eb5-11ec-91ab-d4f5ef46983c
  fsName: shared_fs_storage
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-provisioner
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-provisioner
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-provisioner
  volumeNamePrefix: "cs-vol-project-"
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
Secret file:
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-provisioner
stringData:
  userID: csi-cephfs-node
  userKey:
  adminID: csi-cephfs-node
  adminKey:
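(Equivalently, the same secret can be created imperatively; the <key> placeholders below stand for the keys redacted above, not real values:)

# Sketch: create the CSI secret without a YAML file.
kubectl create secret generic csi-cephfs-secret -n ceph-provisioner \
  --from-literal=userID=csi-cephfs-node \
  --from-literal=userKey='<key>' \
  --from-literal=adminID=csi-cephfs-node \
  --from-literal=adminKey='<key>'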
Pods running in the ceph-provisioner namespace:
cephfs-test-stage:/tmp/cephrwx/cephfs/ceph-csi-extras # kubectl get pods -n ceph-provisioner
NAME READY STATUS RESTARTS AGE
cephfs-provisioner-ceph-csi-cephfs-nodeplugin-cp7s9 3/3 Running 0 72m
cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr 6/6 Running 0 72m
Mount command runs successfully inside the pod:
cephfs-test-stage:/tmp/cephrwx/cephfs/ceph-csi-extras # kubectl exec -it cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr -c csi-cephfsplugin -n ceph-provisioner -- bash
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]# ls
bin csi dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]#
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]#
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]# echo "" > test.key
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]# mkdir /mnt/ceph
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]# mount -t ceph 172.16.0.1:6789:/ /mnt/ceph -o name=csi-cephfs-node,secretfile=test.key --verbose
parsing options: rw,name=csi-cephfs-node,secretfile=test.key
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]# ls /mnt/ceph/
flexilab_clients
[root@cephfs-provisioner-ceph-csi-cephfs-provisioner-865f97f7d9-vnrtr /]#
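(Note that a kernel cephfs mount does not exercise the librados path that fails above with "rados: ret=-13". A closer in-pod check, reusing the same ceph CLI invocation run from the host VM elsewhere in this thread, with the key taken from the keyring shown above:)

# Sketch: test the librados/mon path the provisioner actually uses,
# rather than the kernel mount path.
ceph -m 172.16.0.1:3300,172.16.0.2:3300,172.16.0.3:3300 \
  --id csi-cephfs-node \
  --key AQCYT/pheeUIJBAAqfczLvmpp0wisx0+Jp+keg== \
  fs ls --format=json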
Please let me know if you need more information.