IPAM error: failed to open database /run/user/1000/containers/networks/ipam.db #14606
Comments
Does it work when you create the directory?
@Luap99 The directory exists. Even if I create a dummy empty file, it still fails.
And that definitely also happens as root?
Apologies, upon further testing (with a minimal repro), I can confirm it does work as root.
Tracing
Note the error value:
Full output in case it's useful:
If I delete the … I also tried running …
If it is rootless only, it is likely related to the mount propagation; check how it looks in …
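The propagation check suggested here can be sketched as follows; this is a hedged example assuming util-linux's findmnt is installed (its PROPAGATION column shows the shared/private/slave flags):

```shell
# Show mount-propagation flags for / and /run on the host.
findmnt -o TARGET,PROPAGATION /
findmnt -o TARGET,PROPAGATION /run

# Compare with the view inside the rootless mount namespace
# that podman sets up (requires podman to be installed):
podman unshare findmnt -o TARGET,PROPAGATION
```

On a typical systemd host / is mounted shared; if it shows up as private here, rootless podman's bind mounts of the runtime directory can behave unexpectedly.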
Inside the unshare, the only entries are these:
Not really sure how I can debug this or what I should be checking; any pointers? Running …
I compared outputs with a different machine I have access to. I do not use systemd, which should explain why the directory is empty, but I am also missing all of …
I reset the podman environment and noticed this when running …
According to a comment in containers/buildah#3726 (comment), this might need to be shared. However, running …
Yes, you could set it to shared in an init script early in boot, and that should fix your problem.
@rhatdan Is there anything preventing it from working when making it shared at runtime with …?
No, as long as it is executed before the first podman run, it should work as I understand it. You might need to do a --make-rshared …
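As an illustration, such an early-boot snippet might look like the following; a minimal sketch, assuming a non-systemd host where / is not already shared. It requires root, must run before the first podman invocation, and the exact placement in your init system is up to you:

```shell
#!/bin/sh
# Hypothetical rc.local-style snippet: make the root mount (and
# everything below it) recursively shared so that rootless podman's
# mount-propagation expectations hold.
mount --make-rshared /
```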
Likely the problem is the use of /var/run... instead of just /run, and not the mount propagation.
@Luap99 In my case, …
Yes, that is what I thought. The problem is that we have to create a new mount namespace and make /run and /var/lib/cni writable because CNI fails otherwise. With netavark we could technically skip this, but the issue would still exist for CNI users. The setup is very complicated and ugly. Basically the problem is that …
@Luap99 I see. However, it does work on the other machine I have access to, which to me sounds like some weird edge case that's present on my system (and persists across podman system reset). I can't find anything that sticks out in an obvious manner; the setup is nearly identical (including fs structure, with /var/run being a symlink as well).
Check your …
You are correct :) Edit: the following dirty hack confirms it:
Shows the correct and expected structure.
This is not likely something podman can fix, so I am going to close the issue.
We can fix this; a potential easy fix would be to call EvalSymlinks on the XDG_RUNTIME_DIR before using it.
SGTM
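A shell analogue of that idea — resolving any symlinks in XDG_RUNTIME_DIR before the path is used — can be sketched like this. The temp directories are stand-ins for /run/user/1000 and a symlinked /var/run; podman's real fix would live in its Go code:

```shell
# Stand-in for the real runtime dir, e.g. /run/user/1000:
real_dir="$(mktemp -d)"

# Stand-in for a /var/run-style symlink pointing at it:
link_dir="$(mktemp -d)"
ln -s "$real_dir" "$link_dir/run"

# A symlinked XDG_RUNTIME_DIR, as on the reporter's machine:
XDG_RUNTIME_DIR="$link_dir/run"

# Follow the symlink before using the path, mirroring the
# proposed EvalSymlinks call:
XDG_RUNTIME_DIR="$(readlink -f "$XDG_RUNTIME_DIR")"
echo "$XDG_RUNTIME_DIR"   # now the real directory, not the symlink
```

With the resolved path, the bind mount of the runtime dir into the fake /run no longer depends on a symlink that does not exist in the new path hierarchy.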
When we bind mount the old XDG_RUNTIME_DIR to the new fake /run it will cause issues when the XDG_RUNTIME_DIR is a symlink, since it does not exist in the new path hierarchy. To fix this we can just follow the symlink before we try to use the path. This fix is kinda ugly; our XDG_RUNTIME_DIR code is all over the place. We should work on consolidating it sooner rather than later. Fixes containers#14606 Signed-off-by: Paul Holzinger <pholzing@redhat.com>
A friendly reminder that this issue had no activity for 30 days.
@Luap99 What is the state of this?
Partial Fix for containers#14606 [NO NEW TESTS NEEDED] Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fixed in #15918
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Starting a rootless container returns the following error:
The file indeed does not exist:
The issue persists across completely nuking podman with podman system reset --force.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info --debug:
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Gentoo Linux running kernel 5.16.9.