/kind bug
I was trying to diagnose an issue with mountpoint-s3-csi-driver in one of our clusters and got the following logs from the driver components.
Event for the pod that uses the S3 PV:
Unable to attach or mount volumes: unmounted volumes=[some-volume], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
s3-csi-node pod logs on the relevant node:
E0307 10:10:12.090636 1 driver.go:130] GRPC error: rpc error: code = Internal desc = Could not mount "preprod-mountpoint-s3-csi-bucket-some-app" at "/var/lib/kubelet/pods/c822f696-e0f1-4927-8c44-68d2c7b3e4a2/volumes/kubernetes.io~csi/some-volume/mount": Mount failed: Failed to start systemd unit, context cancelled output: Error: Timeout after 30 seconds while waiting for mount process to be ready
Logs from the relevant node:
Mar 07 10:12:44 some-node mount-s3[3581292]: [ERROR] mountpoint_s3::cli: timeout after 30 seconds waiting for message from child process
Only when I launched mount-s3 manually with the -f argument was I able to understand what the issue was:
2025-03-07T10:16:22.506270Z ERROR awscrt::channel-bootstrap: id=0x55ad48244550: Connection failed with error_code 1048.
2025-03-07T10:16:22.506280Z ERROR awscrt::http-connection: static: Client connection failed with error 1048 (AWS_IO_SOCKET_TIMEOUT).
2025-03-07T10:16:22.506290Z WARN awscrt::connection-manager: id=0x55ad48496080: Failed to obtain new connection from http layer, error 1048(socket operation timed out.)
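For reference, the foreground run that surfaced these errors was roughly the command below (the mount target is a placeholder, and any extra options the CSI driver normally passes are omitted):
mount-s3 -f preprod-mountpoint-s3-csi-bucket-some-app /mnt/test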
Network connectivity issues like this should probably be visible in the CSI driver pod logs, in the systemd unit status, or in the system logs.
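(In case it helps others debugging the same symptom: the s3-csi-node logs quoted above can be pulled with something along these lines; the kube-system namespace and s3-plugin container name are assumptions based on a default install and may differ.)
kubectl logs -n kube-system <s3-csi-node-pod> -c s3-plugin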
Thanks for the report @spynode! As part of #279, we plan to run the Mountpoint process in a Pod, which will redirect its logs to Kubernetes. Then you would be able to run kubectl logs -n mounts3 mp-... to get logs from Mountpoint.