Error encountered when using CAPD on btrfs: Error response from daemon: Duplicate mount point: /dev/mapper
#8317
Comments
/triage accepted
I have recently aligned the flags used to create containers between kind and CAPD (#8157), and this also included a change for btrfs.
Hi @fabriziopandini, the same issue occurs on
I will test a build that does this and report back.
Support for btrfs/zfs was upstreamed to kind in kubernetes-sigs/kind#1464, removing the need for us to hack support in ourselves. Helps kubernetes-sigs#8317
Simply removing CAPD's explicit bind/mount worked just fine. Please see the linked PR and tell me where I'm going wrong in my thinking here. I don't think we need to bind/mount anymore now that kind does it for us.
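For illustration, here is a minimal Go sketch of the conflict described above. The `Mount` type and `dedupeMounts` helper are hypothetical (not the actual CAPD or kind code): the docker daemon rejects a container create request that binds the same target path twice, which is what happened once both kind and CAPD requested /dev/mapper. Keeping only one request per target path, or dropping CAPD's now-redundant bind entirely, avoids the error:

```go
package main

import "fmt"

// Mount describes a requested bind mount for a container.
// (Hypothetical type for illustration only.)
type Mount struct {
	Source string
	Target string
}

// dedupeMounts keeps only the first mount requested for each target path.
// A second mount with the same target is what makes dockerd respond with
// "Duplicate mount point: /dev/mapper".
func dedupeMounts(mounts []Mount) []Mount {
	seen := map[string]bool{}
	out := make([]Mount, 0, len(mounts))
	for _, m := range mounts {
		if seen[m.Target] {
			continue // dockerd would reject this duplicate target
		}
		seen[m.Target] = true
		out = append(out, m)
	}
	return out
}

func main() {
	mounts := []Mount{
		{Source: "/dev/mapper", Target: "/dev/mapper"}, // added by kind since kind#1464
		{Source: "/dev/mapper", Target: "/dev/mapper"}, // CAPD's now-redundant bind
	}
	fmt.Println(len(dedupeMounts(mounts))) // prints 1
}
```

This is only a sketch of the failure mode; the actual fix in the PR is simpler still, deleting CAPD's explicit /dev/mapper bind so that only kind's remains.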
commented on the PR
Hit the same issue when deploying a product using CAPD on an OS with the btrfs filesystem. From the information here and in the PR, I figure that just removing the second bind/mount in the code is enough. Is that the case?
As far as I found in my local testing, that fixed the issue. If @cannonpalms is happy with it, it would be good if you could pick up the PR.
Alright, I will take a stab at the PR when I have some free time in the upcoming days. |
chore: PR feedback. Removes the now-unused function mountDevMapper(...)
chore: fix ci lint
fix: restore missing storage consts
chore: fix bad rebase. mountDevMapper() is unused
What steps did you take and what happened?
I'm having difficulties running CAPD/the Tilt local development environment. When I deploy a workload cluster, the DockerCluster controller immediately errors with the following message:

Error response from daemon: Duplicate mount point: /dev/mapper

I first reported this issue in a Slack thread, where @killianmuldoon kindly helped identify that the problem is reproducible with the btrfs docker storage driver but not with the overlay2 docker storage driver. This suggests to me that the issue is unlikely to be within CAPD itself, and more likely lies in a lower layer such as kind or docker. However, this can serve as a helpful tracking issue for the problem that bubbles to the surface when attempting to use CAPD.
What did you expect to happen?
Expected a workload cluster named capi-quickstart-cluster to be created.

Cluster API version
v1.3.5
Kubernetes version
1.26
Anything else you would like to add?
No response
Label(s) to be applied
/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.