NodeStageVolume fails due to value mismatch in v3.0.1 #591
Comments
I also tried:
these errors are expected when there are node/kubelet restarts
Looks like these are virtual disks. DirectPV recognizes disks by their hardware properties like WWID, serial number, model, vendor, etc. So, generally, it is not recommended for virtual disks, where these persistent properties are not reliable. While staging a volume, DirectPV checks whether the drive on which the volume is scheduled is the right one by checking its hardware properties and looking for a mismatch. Here, in this case, it looks like the vendor property isn't matching, so DirectPV refuses to mount the volume on that drive. Earlier, the vendorID used to be [...]. Some of the details on virtual disk support are explained in #580.
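(Not from the original thread.) A quick way to see which persistent properties a drive actually exposes, and whether they survive a VM reboot, is to query them directly; `/dev/sdb` below is just a placeholder for the affected drive:

```sh
# Persistent identifiers that a drive-matching check like DirectPV's relies on
lsblk -o NAME,WWN,SERIAL,VENDOR,MODEL /dev/sdb

# The udev view of the same properties; on virtual disks several of these
# fields are often empty or can change across reboots
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_(WWN|SERIAL|VENDOR|MODEL)'
```

Running this before and after a reboot of the VM would show whether the vendor/WWID values are actually stable on that virtual disk.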
You can refer to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-attaching_and_updating_a_device_with_virsh to configure the disks with a vendorID. Still, I'm not sure whether these properties will be wiped out on restarts. You can also check the [...]
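Building on that link, a minimal sketch of a libvirt disk definition that carries stable identifiers might look like the following; the domain name, image path, and all ID values are placeholders, and this assumes a virtio-scsi controller is present (the `<vendor>`/`<product>` elements are only honoured for SCSI disks):

```sh
# Illustrative only: define a disk with explicit serial/WWN/vendor/product
# and attach it to the guest's persistent configuration.
cat > disk-with-ids.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/worker-1-data.qcow2'/>
  <target dev='sdb' bus='scsi'/>
  <serial>directpv-data-1</serial>
  <wwn>5000c50015ea71ad</wwn>
  <vendor>EXAMPLE</vendor>
  <product>DATADISK</product>
</disk>
EOF

# Domain name is a placeholder; for a disk that is already attached,
# editing its existing <disk> block with `virsh edit` is the equivalent change.
virsh attach-device worker-1-vm disk-with-ids.xml --config
```

Whether DirectPV then sees these values consistently across restarts would still need to be verified on the affected node.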
Hi @Praveenrajmani, these days I am unable to check whether your suggestion fixes the bug (because in the meantime I manually migrated the PVCs to the other nodes); however, it is most probably what you said: the other nodes are bare metal with a physical partition, and the only one with the problem is the only one running in a VM with a virtual disk.
/kind bug
What happened
Some days ago I updated directpv to 3.0.0, which caused some high load problems.
After noticing it I shut down the whole cluster in order to try to fix it the next day (NB: it is a self-hosted home cluster, not a production environment at work).
So yesterday I updated to 3.0.1 and I also updated all the nodes hosting the infrastructure (in particular I updated Kubernetes from 1.22 to 1.23).
After all these updates and reboots, the load seemed to have returned to normal, but then I noticed some pods weren't starting: apparently all volumes on node `kaelk-t495-vm-k8s-worker-1` can't be mounted. However, volumes on the other nodes are mounted correctly.
Ex:
Sometimes the error 'driver name direct-csi-min-io not found in the list of registered CSI drivers' appeared in the events.
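(A diagnostic note, not part of the original report.) The 'not found in the list of registered CSI drivers' error can be cross-checked by looking at the node's CSINode object, which lists the drivers the kubelet has registered:

```sh
# direct-csi-min-io should appear under spec.drivers once registration succeeds
kubectl get csinode kaelk-t495-vm-k8s-worker-1 -o yaml
```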
These are the logs of the pod generated by the DaemonSet on the `kaelk-t495-vm-k8s-worker-1` node:
[...]
Other (maybe) useful info
On the machine there is no XFS drive mounted.