docker vs podman emptyDir mount options (exec vs noexec) #1581

Closed
joelddiaz opened this issue May 12, 2020 · 7 comments · Fixed by #1589
Labels: kind/bug (Categorizes issue or PR as related to a bug.), kind/external (upstream bugs)

Comments

@joelddiaz

What happened:
Running a pod with an emptyDir volume results in the volume being mounted with 'noexec' when kind is run under Podman, while under Docker the 'noexec' option is absent.

What you expected to happen:
Same behavior.

How to reproduce it (as minimally and precisely as possible):
Start up kind under both Docker and Podman, create the namespace 'testexec', and run this pod in both environments with kubectl create -f ./<file with pod contents>:

apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: testexec
spec:
  containers:
  - args:
    - --output
    - SOURCE,TARGET,FSTYPE,OPTIONS
    command:
    - /usr/bin/findmnt
    image: fedora:latest
    imagePullPolicy: Always
    name: test
    volumeMounts:
    - mountPath: /output
      name: output
  restartPolicy: Never
  volumes:
  - emptyDir: {}
    name: output

When the Pod has finished running, compare the log output showing the /output mountpoint via kubectl logs -n testexec testpod.
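For reference, the whole sequence is roughly as follows (a minimal sketch: it assumes the pod spec above is saved as pod.yaml and that the podman-backed cluster is created via the experimental provider variable):

# docker-backed cluster
kind create cluster
kubectl create namespace testexec
kubectl create -f ./pod.yaml
kubectl logs -n testexec testpod

# podman-backed cluster
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster
kubectl create namespace testexec
kubectl create -f ./pod.yaml
kubectl logs -n testexec testpod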

Under Podman:

/dev/mapper/ssd-root[/var/lib/containers/storage/volumes/9fbd845d68a713e0cf8d22d236a22fc3a8062b21dfe471680992148f62beac2b/_data/lib/kubelet/pods/cc549f1d-935b-4f7c-8525-0d022f43302c/volumes/kubernetes.io~empty-dir/output]                                    |-/output                                   ext4    rw,nosuid,nodev,noexec,relatime,seclabel

Under Docker:

/dev/mapper/ssd-root[/var/lib/docker/volumes/8ca5f64dd3e81bfe12742a866a6fa9fc8355fc945a235d28b2af018be89c8d31/_data/lib/kubelet/pods/ea1d98af-68c6-4c60-9677-52e332a331ca/volumes/kubernetes.io~empty-dir/output]                                    |-/output                                   ext4    rw,relatime,seclabel
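The extra noexec is what actually breaks workloads: nothing placed in the emptyDir can be executed on the podman-backed cluster. A quick way to see the effect (a sketch run inside a hypothetical container whose emptyDir is mounted at /output with noexec):

# copy any binary into the volume and try to run it
cp /usr/bin/true /output/true
/output/true
# bash: /output/true: Permission denied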

Anything else we need to know?:

Environment:

  • kind version: (use kind version): kind v0.8.1 go1.14.2 linux/amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v0.0.0-master+$Format:%h$", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 7
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.6.8-200.fc31.x86_64
 Operating System: Fedora 31 (Workstation Edition)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.51GiB
 Name: minigoomba
 ID: NPKS:IWZ6:BLYQ:H6OT:J3VM:OPSW:FP7S:IT67:VRCM:HMZX:ZJSN:CX25
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  172.17.0.1:5000
  127.0.0.0/8
 Live Restore Enabled: false
  • OS (e.g. from /etc/os-release):
NAME=Fedora
VERSION="31 (Workstation Edition)"

Podman info:

host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.15-1.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.15, commit: 4152e6044da92e0c5f246e5adf14c85f41443759'
  cpus: 8
  distribution:
    distribution: fedora
    version: "31"
  eventLogger: journald
  hostname: minigoomba
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.6.8-200.fc31.x86_64
  memFree: 287059968
  memTotal: 16655683584
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8368943104
  swapTotal: 8401186816
  uptime: 30h 30m 55.08s (Approximately 1.25 days)
registries:
  127.0.0.1:
    Blocked: false
    Insecure: true
    Location: 127.0.0.1
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: 127.0.0.1
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 2
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
@joelddiaz added the kind/bug label on May 12, 2020
@joelddiaz (Author)

FWIW, this originally cropped up in openshift/hive#982

@BenTheElder (Member) commented May 13, 2020

That's annoying, I'd certainly have expected these to behave the same ...
EDIT: as a user, that is. As someone having worked on this, podman behaving differently is .. not surprising at all :/

cc @amwat @aojea

@BenTheElder (Member)

what's your podman version?
containers/podman#4318

@BenTheElder added the kind/external label on May 13, 2020
@BenTheElder (Member)

reading the linked issue, it seems podman doesn't match docker's defaults for volume mount options.

we can explicitly add exec in KIND to work around this, similar to container-job-runner/cjr#49
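For context, podman lets exec/noexec be passed as part of the -v option string on a named volume, so the kind of change being suggested would look roughly like this (a sketch using a hypothetical volume name and a plain fedora container, not the actual kind change):

# create a named volume, then mount it with exec stated explicitly
podman volume create demo-var
podman run --rm -v demo-var:/var:exec fedora:latest findmnt -o TARGET,OPTIONS /var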

@joelddiaz (Author)

> what's your podman version?
> containers/libpod#4318

Version: 1.9.0
RemoteAPI Version: 1
Go Version: go1.13.9
OS/Arch: linux/amd64

@BenTheElder (Member)

unfortunately it's not possible to add options like exec to anonymous volumes, so it will be somewhat involved to work around this.
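Concretely, an anonymous volume is requested with nothing but a destination path, so the -v syntax has nowhere to carry options like exec; they can only be appended once the volume is named, as in the sketch above (hypothetical example):

# anonymous volume: only a destination, so no :exec can be appended
podman run --rm -v /output fedora:latest findmnt -o TARGET,OPTIONS /output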

ideally podman should not deviate from docker like this; the marketing of "drop-in replacement" has been a disappointing comparison to reality so far. this driver is likely staying experimental for the immediate future.

@BenTheElder (Member)

@amwat is looking into this further
/assign @amwat
