Questions about snapshots / backups on file thin #553
Comments
Looks like LINSTOR does not implement the necessary logic for reading the snapshot data and sending it to S3. You could open an issue on the linstor-server project to see if there is any chance of implementing that.
Alternatively, you could use "regular" snapshots, i.e. not sending them to remote locations. That should work with the FILE_THIN pool (assuming you are using XFS as the backing FS). You could then have a DaemonSet watching the directory where LINSTOR stores the volume data and uploading the snapshot files automatically to S3. Restores would involve a lot of manual work in that case, but should be doable.
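The DaemonSet idea above could look roughly like the sketch below. This is not an official Piraeus component — the image, namespace, bucket name, and especially the host path of the FILE_THIN pool directory are assumptions you would need to adapt to your own setup:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: snapshot-uploader
  namespace: piraeus-datastore
spec:
  selector:
    matchLabels:
      app: snapshot-uploader
  template:
    metadata:
      labels:
        app: snapshot-uploader
    spec:
      containers:
        - name: sync
          # assumption: any image that ships the aws CLI will do
          image: amazon/aws-cli:2
          command:
            - /bin/sh
            - -c
            # naive loop: push new/changed files from the pool directory
            # to a per-node prefix in S3 every 5 minutes
            - |
              while true; do
                aws s3 sync /pool-data "s3://my-backup-bucket/$NODE_NAME/"
                sleep 300
              done
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: pool-data
              mountPath: /pool-data
              readOnly: true
      volumes:
        - name: pool-data
          hostPath:
            # assumption: wherever your FILE_THIN pool was configured to live
            path: /var/lib/linstor-pools/pool1
```

As noted above, restoring from files uploaded this way is entirely manual: you would have to copy the files back into the pool directory and re-register them with LINSTOR yourself.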
I'm not sure this is related to shipping; despite what the docs say, I am getting:

# k get vs
NAME                     READYTOUSE   SOURCEPVC     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS       SNAPSHOTCONTENT                                    CREATIONTIME   AGE
data-volume-snapshot-1   false        data-volume                                         piraeus-snapshots   snapcontent-49b85501-0ff9-4fdf-8f8a-fa6be78a60f9                  2m58s

# k describe vs
Name: data-volume-snapshot-1
Namespace: default
Labels: <none>
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshot
Metadata:
Creation Timestamp: 2024-11-04T12:55:18Z
Finalizers:
snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
Generation: 1
Resource Version: 115721
UID: 49b85501-0ff9-4fdf-8f8a-fa6be78a60f9
Spec:
Source:
Persistent Volume Claim Name: data-volume
Volume Snapshot Class Name: piraeus-snapshots
Status:
Bound Volume Snapshot Content Name: snapcontent-49b85501-0ff9-4fdf-8f8a-fa6be78a60f9
Error:
Message: Failed to check and update snapshot content: failed to take snapshot of the volume pvc-4aa1fb9b-5735-448d-8f84-6941c1593b21: "rpc error: code = Internal desc = failed to create snapshot: failed to create snapshot: Message: 'Storage driver 'FILE_THIN' does not support snapshots.'; Details: 'Used for storage pool 'pool1' on 'worker-1'.\nResource: pvc-4aa1fb9b-5735-448d-8f84-6941c1593b21, Snapshot: snapshot-49b85501-0ff9-4fdf-8f8a-fa6be78a60f9'; Reports: '[6728C323-00000-000007]'"
Time: 2024-11-04T12:57:27Z
Ready To Use: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreatingSnapshot <invalid> snapshot-controller Waiting for a snapshot default/data-volume-snapshot-1 to be created by the CSI driver.
The docs could be a bit clearer, but snapshots in FILE_THIN storage pools only work when backed by XFS, not ext4. I'm guessing you are running ext4 as the backing filesystem. You should probably consider switching to a "real" storage pool like ZFS or LVM (thin).
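For reference, an LVM thin pool as suggested above can be declared through the Piraeus Operator v2 API roughly like this (the pool, volume group, and thin-pool names here are illustrative, and the reporter's Talos-breaks-LVM caveat still applies):

```yaml
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-pools
spec:
  storagePools:
    # assumption: a volume group "vg1" with thin pool "thin1"
    # already exists (or is prepared) on the satellite nodes
    - name: pool1
      lvmThinPool:
        volumeGroup: vg1
        thinPool: thin1
```

Unlike FILE_THIN, an LVM thin (or ZFS) pool supports LINSTOR snapshots natively, which is what the CSI snapshotter needs here.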
Heh... thank you!
Hi,
I'm using Talos on Intel NUCs, which unfortunately does not currently have a way of using LVM that does not break updates, leaving file thin as the only option.
The Piraeus snapshot doc says file thin is supported, but it looks like snapshot shipping is not supported for file thin: I get this error when trying to simply send snapshots to an S3-compatible bucket.
What exactly does that imply, and where can those snapshots be stored? Can they be exported in some other way from wherever they get created?
Is there any other way to remotely back up those volumes (snapshot or not), short of injecting sidecars into every pod with a PVC to manually rsync files over at night?
Thanks!
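For completeness, the rsync fallback mentioned above does not strictly require sidecars: a per-PVC CronJob can mount the claim and push its contents out on a schedule. A rough sketch, where the image, schedule, and backup destination are all assumptions (and note that mounting an RWO PVC from a second pod only works if it lands on the same node as the workload):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-data-volume-backup
spec:
  schedule: "0 2 * * *"   # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              # assumption: any image that ships rsync + ssh
              image: instrumentisto/rsync-ssh
              command:
                - rsync
                - -az
                - /data/
                - backup-host:/backups/data-volume/
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: data-volume
```

This backs up live files rather than a crash-consistent snapshot, so it is only suitable for data that tolerates file-level copies.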