
Podman run fails to run when using named volume #3952

Closed · weeping-somnambulist opened this issue Sep 5, 2019 · 17 comments · Fixed by #3954


Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
Podman fails with a runtime error when attempting to run a container with a named volume. It does not seem to matter whether the volume is created implicitly by podman run or explicitly with podman volume create, nor whether the container is a custom build or a remote image.

Steps to reproduce the issue:

  1. Run any image, specifying a named volume, e.g. podman run -it --rm -v test:/test hello-world

  2. Get error output

  3. Be sad.

Describe the results you received:

root@rocinante:~ # podman run -it --rm -v test:/test hello-world
Trying to pull docker.io/library/hello-world...
Getting image source signatures
Copying blob 1b930d010525 done
Copying config fce289e99e done
Writing manifest to image destination
Storing signatures
Error: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/containers/storage/btrfs-containers/a261adb857d24bc7326cec1fa0d67c74465e86b1b0ecb1d59a6d2a0e2c74f9f4/userdata/volumes/test/_data\\\" to rootfs \\\"/var/lib/containers/storage/btrfs/subvolumes/36a8a9d4e1c523dba11a9a89f599d06b8689ac1ab0508f48edd8e0f205251b3a\\\" at \\\"/test\\\" caused \\\"stat /var/lib/containers/storage/btrfs-containers/a261adb857d24bc7326cec1fa0d67c74465e86b1b0ecb1d59a6d2a0e2c74f9f4/userdata/volumes/test/_data: no such file or directory\\\"\"": OCI runtime error

Describe the results you expected:
The container runs with the persistent named volume attached at the specified mount point; world domination ensues shortly thereafter...

Additional information you deem important (e.g. issue happens only occasionally):
Consistent behavior every time on this machine.

Output of podman version:

podman version 1.5.0

Output of podman info --debug:

root@rocinante:~ # podman info --debug 
debug:
  compiler: gc
  git commit: ""
  go version: go1.12.9
  podman version: 1.5.0
host:
  BuildahVersion: 1.10.1
  Conmon:
    package: conmon-2.0.0-2.1.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.0, commit: unknown'
  Distribution:
    distribution: '"opensuse-tumbleweed"'
    version: "20190829"
  MemFree: 3336896512
  MemTotal: 8254947328
  OCIRuntime:
    package: runc-1.0.0~rc8-1.4.x86_64
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0-rc8
      spec: 1.0.1-dev
  SwapFree: 2147483648
  SwapTotal: 2147483648
  arch: amd64
  cpus: 4
  eventlogger: file
  hostname: rocinante
  kernel: 5.2.10-1-default
  os: linux
  rootless: false
  uptime: 1h 10m 11.21s (Approximately 0.04 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: btrfs
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Build Version: 'Btrfs v5.2.1 '
    Library Version: "102"
  ImageStore:
    number: 21
  RunRoot: /var/run/containers/storage
  VolumePath: volumes

Package info (e.g. output of rpm -q podman or apt list podman):

root@rocinante:~ # rpm -q podman
podman-1.5.0-1.1.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
Lenovo Thinkpad Flex 15, bare metal install

Thanks in advance!

@openshift-ci-robot added the kind/bug label Sep 5, 2019
@mheon (Member) commented Sep 5, 2019

Can you add --log-level=debug to your Podman command and give the output of that?

@weeping-somnambulist (Author)

Thanks for the quick response.

root@rocinante:~ # podman volume ls 
root@rocinante:~ # podman run --log-level=debug -it --rm -v test:/test hello-world
DEBU[0000] using conmon from $PATH: "/usr/bin/conmon"   
DEBU[0000] Initializing boltdb state at libpod/bolt_state.db 
DEBU[0000] Using graph driver                           
DEBU[0000] Using graph root                             
DEBU[0000] Using run root                               
DEBU[0000] Using static dir libpod                      
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path volumes                    
DEBU[0000] Set libpod namespace to ""                   
INFO[0000] [graphdriver] using prior storage driver: btrfs 
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "runc" from $PATH: "/usr/sbin/runc" 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/hello-world:latest" 
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] exporting opaque data as blob "sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] exporting opaque data as blob "sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] User mount test:/test options []             
DEBU[0000] Got mounts: []                               
DEBU[0000] Got volumes: [0xc0005307c0]                  
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Using bridge netmode                         
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 1 for container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] exporting opaque data as blob "sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e" 
DEBU[0000] created container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" 
DEBU[0000] container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" has work directory "/var/lib/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata" 
DEBU[0000] container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" has run directory "/var/run/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata" 
DEBU[0000] Creating new volume test for container       
DEBU[0000] mounted container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" at "/var/lib/containers/storage/btrfs/subvolumes/ad468095d973e655aa182e9daa45fd252102cae935c614a9547996aa7bca1b22" 
DEBU[0000] New container created "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" 
DEBU[0000] container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" has CgroupParent "machine.slice/libpod-ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503.scope" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] Made network namespace at /var/run/netns/cni-0fd94389-6dc5-1f5a-8a80-c9730712d688 for container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 
INFO[0000] Got pod network &{Name:angry_mestorf Namespace:angry_mestorf ID:ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 NetNS:/var/run/netns/cni-0fd94389-6dc5-1f5a-8a80-c9730712d688 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network cni-loopback (type=loopback) 
DEBU[0000] mounted container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" at "/var/lib/containers/storage/btrfs/subvolumes/ad468095d973e655aa182e9daa45fd252102cae935c614a9547996aa7bca1b22" 
DEBU[0000] Created root filesystem for container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 at /var/lib/containers/storage/btrfs/subvolumes/ad468095d973e655aa182e9daa45fd252102cae935c614a9547996aa7bca1b22 
INFO[0000] Got pod network &{Name:angry_mestorf Namespace:angry_mestorf ID:ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 NetNS:/var/run/netns/cni-0fd94389-6dc5-1f5a-8a80-c9730712d688 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network podman (type=bridge) 
DEBU[0000] [0] CNI result: Interfaces:[{Name:cni0 Mac:f2:2c:17:1a:0f:8e Sandbox:} {Name:vethce2350d5 Mac:62:93:ed:c7:75:ca Sandbox:} {Name:eth0 Mac:26:cc:f7:b4:26:1d Sandbox:/var/run/netns/cni-0fd94389-6dc5-1f5a-8a80-c9730712d688}], IP:[{Version:4 Interface:0xc000526ca0 Address:{IP:10.88.0.100 Mask:ffff0000} Gateway:10.88.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]} 
INFO[0000] AppAmor profile "libpod-default-1.5.0" is already loaded 
DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# This configuration file specifies the default mounts for each container of the" 
DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# tools adhering to this file (e.g., CRI-O, Podman, Buildah).  The format of the" 
DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# config is /SRC:/DST, one mount per line." 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroups for container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 to machine.slice:libpod:ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 at /var/lib/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 -u ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 -r /usr/sbin/runc -b /var/lib/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata -p /var/run/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata/pidfile -l k8s-file:/var/lib/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg  --exit-command-arg --runroot --exit-command-arg  --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503.scope 
DEBU[0000] Received: -1                                 
DEBU[0000] Cleaning up container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 
DEBU[0000] Tearing down network namespace at /var/run/netns/cni-0fd94389-6dc5-1f5a-8a80-c9730712d688 for container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 
INFO[0000] Got pod network &{Name:angry_mestorf Namespace:angry_mestorf ID:ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 NetNS:/var/run/netns/cni-0fd94389-6dc5-1f5a-8a80-c9730712d688 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to del CNI network podman (type=bridge) 
DEBU[0000] unmounted container "ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503" 
DEBU[0000] Cleaning up container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] Container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 storage is already unmounted, skipping... 
DEBU[0000] Container ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503 storage is already unmounted, skipping... 
ERRO[0000] container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata/volumes/test/_data\\\" to rootfs \\\"/var/lib/containers/storage/btrfs/subvolumes/ad468095d973e655aa182e9daa45fd252102cae935c614a9547996aa7bca1b22\\\" at \\\"/test\\\" caused \\\"stat /var/lib/containers/storage/btrfs-containers/ff45ebc2cf8d1e8012710781e702dcc630fd3950fe03c745d60a058d259d8503/userdata/volumes/test/_data: no such file or directory\\\"\"": OCI runtime error 

@weeping-somnambulist (Author)

This might also be helpful:

root@rocinante:~ # podman volume inspect test
[
    {
        "name": "test",
        "labels": {},
        "mountPoint": "volumes/test/_data",
        "driver": "local",
        "options": {},
        "scope": "local"
    }
]

@weeping-somnambulist (Author)

This also, sorry...

root@rocinante:~ # ls /var/lib/containers/storage/btrfs-containers/
3867960a6be22dd632ef4c2479f75d1724b49e5a516ee454554a18da9b80a589  containers.json  containers.lock
root@rocinante:~ # cat /var/lib/containers/storage/btrfs-containers/containers.json 
[{"id":"3867960a6be22dd632ef4c2479f75d1724b49e5a516ee454554a18da9b80a589","names":["brave-browser"],"image":"19f4cbb5838c41f3cd3e8ba7582cd06bd9b71d4117d18716b018f40bbba7210c","layer":"bb25fbb02de0d1e47f29221e2e0cd5ecaca024d37d1a47ceb32b6eff07829b40","metadata":"{\"image-name\":\"localhost/brave-base-image:latest\",\"image-id\":\"19f4cbb5838c41f3cd3e8ba7582cd06bd9b71d4117d18716b018f40bbba7210c\",\"name\":\"brave-browser\",\"created-at\":1567708982}","created":"2019-09-05T18:43:02.744263842Z","flags":{"MountLabel":"","ProcessLabel":""}}]
root@rocinante:~ # cat /var/lib/containers/storage/btrfs-containers/containers.lock
64a15e2470ee1adeb5be447fec79215f8575b9e10949ed9a8c654dbd000533ad

@mheon (Member) commented Sep 5, 2019

DEBU[0000] Creating new volume test for container

Alright, we are definitely calling the volume-creation code. The container is being created and mounted successfully, but runc is failing, claiming that the directory for the volume in question doesn't exist...

Wait, why are we looking in the specific container's userdata for the volume? The root for those should be /var/lib/containers/storage/volumes, not within the container directories.
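
Side by side, the mismatch looks like this (the expected path follows from the default volume root above; the actual path is from the error message, with the container ID abbreviated). The relative VolumePath is evidently being resolved against the container's bundle directory, the same userdata directory conmon passes to runc via -b:

expected: /var/lib/containers/storage/volumes/test/_data
actual:   /var/lib/containers/storage/btrfs-containers/<container-id>/userdata/volumes/test/_data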

@mheon (Member) commented Sep 5, 2019

VolumePath: volumes

That line from your podman info seems very suspect. Do you have a /etc/containers/libpod.conf or /usr/share/libpod.conf? Can you pastebin them if so?

@mheon (Member) commented Sep 5, 2019

DEBU[0000] Using volume path volumes

We are definitely using a bad volume path. Two issues:

  • How did we get the bad path? It's generated by a filepath.Join() on GraphRoot, which must be empty at that point; yet it clearly is not empty by the time we actually start containers, because those paths all make sense (see the sketch below).
  • We didn't detect and handle the obviously incorrect volume path when we initially created it. This one is easy to fix.
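
For illustration, a minimal Go sketch of that failure mode (not the actual libpod code; the "volumes" suffix is inferred from the debug log, and the empty GraphRoot from the suspected storage.conf):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// c/storage hands back "" for GraphRoot when storage.conf
	// explicitly sets it to the empty string.
	graphRoot := ""

	// libpod derives its volume path from the graph root; with an
	// empty root, the Join yields a bare relative path.
	volumePath := filepath.Join(graphRoot, "volumes")

	// Prints "volumes", matching the "Using volume path volumes"
	// line in the debug log above.
	fmt.Println(volumePath)
}
```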

@mheon (Member) commented Sep 5, 2019

Eeek. Our static dir is also being set to a relative path. I'm honestly amazed things aren't exploding much earlier.

@weeping-somnambulist (Author) commented Sep 5, 2019

Here's /etc/containers/libpod.conf. There's no /usr/share/libpod.conf on my system. Thanks again, and let me know what else you need.

@mheon (Member) commented Sep 5, 2019

My suspicion here is that we have a storage.conf explicitly setting empty-string paths for RunRoot and GraphRoot. The c/storage library is probably using compiled-in defaults at that point, I imagine? Meanwhile, our paths intended to be relative to c/storage are instead ending up in wildly inconsistent places because we don't know where c/storage lives.

@rhatdan @giuseppe How do you think we should handle this case? storage.DefaultStoreOptions() is giving us a bunch of unusable, empty-string paths, so I don't think we can ask c/storage where it put things. Should we have separate defaults that we use when we can't grab paths from storage?
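
For reference, a storage.conf of the kind suspected here might look like this (a hypothetical minimal example, not the reporter's actual file):

```toml
# /etc/containers/storage.conf (hypothetical reproduction)
[storage]
driver = "btrfs"

# Explicit empty strings: c/storage silently falls back to its
# compiled-in defaults, but reports "" back to callers like libpod.
runroot = ""
graphroot = ""
```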

@rhatdan (Member) commented Sep 5, 2019

We should hard code the defaults if these are set to "".

@mheon (Member) commented Sep 5, 2019

Ack. Easy enough.

mheon added a commit to mheon/libpod that referenced this issue Sep 5, 2019
If c/storage paths are explicitly set to "" (the empty string) it
will use compiled-in defaults. However, it won't tell us this via
`storage.GetDefaultStoreOptions()` - we just get the empty string
(which can put our defaults, some of which are relative to
c/storage, in a bad spot).

Hardcode a sane default for cases like this. Furthermore, add
some sanity checks to paths, to ensure we don't use relative
paths for core parts of libpod.

Fixes containers#3952

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
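
A rough sketch of what the commit message describes, i.e. hardcoded fallbacks plus a relative-path sanity check (illustrative only; the actual fix is in #3954, and the names here are hypothetical):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Hypothetical compiled-in fallbacks, used when c/storage reports "".
const (
	defaultGraphRoot = "/var/lib/containers/storage"
	defaultRunRoot   = "/var/run/containers/storage"
)

// resolveStoragePaths fills in defaults for empty paths, then
// rejects anything that is not absolute.
func resolveStoragePaths(graphRoot, runRoot string) (string, string, error) {
	if graphRoot == "" {
		graphRoot = defaultGraphRoot
	}
	if runRoot == "" {
		runRoot = defaultRunRoot
	}
	for _, p := range []string{graphRoot, runRoot} {
		if !filepath.IsAbs(p) {
			return "", "", fmt.Errorf("storage path %q is not absolute", p)
		}
	}
	return graphRoot, runRoot, nil
}

func main() {
	graph, run, err := resolveStoragePaths("", "")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// Prints the compiled-in defaults.
	fmt.Println(graph, run)
}
```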
@weeping-somnambulist (Author)

Issue also applies to 1.5.1 FYI.

@mheon (Member) commented Sep 5, 2019

Fix pending in #3954; however, it will only apply to new installs. It changes the default locations of several core paths and could break existing installations if we applied it unconditionally.

You'd have to delete /var/lib/containers/storage/ to pick up the fix, but this will lose all your existing containers.
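
In console terms, that recovery step would amount to something like the following (destructive: every existing container, image, and volume under the graph root is lost):

root@rocinante:~ # rm -rf /var/lib/containers/storage/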

@weeping-somnambulist (Author)

You were right about storage.conf; here's the pastebin. I had forgotten about it until you mentioned it, @mheon. That change was made while troubleshooting a different issue for Buildah.

@weeping-somnambulist (Author) commented Sep 5, 2019

It appears that the volume path was indeed the issue. I reinstalled from the distro packages with all configuration files removed; after re-creating policy.json and registries.conf from the defaults, everything is working.

I'm closing this issue as it appears resolved to me. If you're troubleshooting and find this issue, check that the storage paths in /etc/containers/storage.conf are valid absolute paths; empty-string values there are what storage.DefaultStoreOptions() fed back to Podman in my case.
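
A quick way to check for this misconfiguration, using the same command shown earlier (all three paths printed should be absolute):

root@rocinante:~ # podman info --debug | grep -E 'GraphRoot|RunRoot|VolumePath'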

@TomSweeneyRedHat (Member)

Great and useful summation @SwitchedToGitlab, thanks for adding it.

@github-actions bot added the locked - please file new issue/PR label Sep 23, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023