
healthcheck never executes when Dockerfile lacks --interval #13912

Closed
nvllsvm opened this issue Apr 18, 2022 · 4 comments · Fixed by #13928
Labels
  • kind/bug — Categorizes issue or PR as related to a bug.
  • locked - please file new issue/PR — Assist humans wanting to comment on an old issue or PR with locked comments.
  • remote — Problem is in podman-remote

Comments

@nvllsvm

nvllsvm commented Apr 18, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:
The image below has a healthcheck that touches a file in /healthchecks on each execution. The filename is the current epoch time, so files should accumulate over time.

  1. Create a new directory with a single file named Dockerfile containing:

     FROM alpine
     RUN mkdir /healthchecks
     HEALTHCHECK CMD touch /healthchecks/"$(date +%s)"
     CMD ["sleep", "999999999999"]

  2. cd to the new directory and run podman build . -t dev --format docker
  3. Run the container with podman run -d --name test --rm dev
  4. Wait 60 seconds.
  5. Run podman ps and notice that the STATUS is still "starting".
  6. Run podman exec test ls /healthchecks and notice there are no files.
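As a workaround until a fix lands, passing the interval explicitly appears to sidestep the missing default. A sketch of the same reproduction image with the flag spelled out (same commands as above, only the HEALTHCHECK line changes):

```
FROM alpine
RUN mkdir /healthchecks
# Explicitly request the 30s interval that Docker documents as the default
HEALTHCHECK --interval=30s CMD touch /healthchecks/"$(date +%s)"
CMD ["sleep", "999999999999"]
```

Alternatively, the interval can be overridden at run time with podman run --health-interval=30s, without rebuilding the image.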

Describe the results you expected:
Since the Dockerfile lacks an explicit --interval for the healthcheck, I expect the healthcheck to execute every 30 seconds per the default mentioned at https://docs.docker.com/engine/reference/builder/#healthcheck.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 4.0.3

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 8
  distribution:
    distribution: fedora
    variant: coreos
    version: "35"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 2252
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 2252
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 5.15.18-200.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 841416704
  memTotal: 8324083712
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.2
      commit: f6fbc8f840df1a414f31a60953ae514fa497c748
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/2252/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 45h 36m 37.56s (Approximately 1.88 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 50
  runRoot: /run/user/2252/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1646319416
  BuiltTime: Thu Mar  3 09:56:56 2022
  GitCommit: ""
  GoVersion: go1.16.14
  OsArch: linux/amd64
  Version: 4.0.2

Package info (e.g. output of rpm -q podman or apt list podman):

$ brew info podman
podman: stable 4.0.3 (bottled), HEAD
Tool for managing OCI containers and pods
https://podman.io/
/usr/local/Cellar/podman/4.0.3 (172 files, 47.6MB) *
  Poured from bottle on 2022-04-16 at 17:38:38
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/podman.rb
License: Apache-2.0
==> Dependencies
Build: go ✔, go-md2man ✘
Required: qemu ✔
==> Options
--HEAD
        Install HEAD version
==> Caveats
zsh completions have been installed to:
  /usr/local/share/zsh/site-functions

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes - I have also replicated this on Arch Linux running a build of master commit d6f47e6

Additional environment details (AWS, VirtualBox, physical, etc.):

  • macOS
  • Arch Linux
@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Apr 18, 2022
@github-actions github-actions bot added the remote Problem is in podman-remote label Apr 18, 2022
flouthoc added a commit to flouthoc/imagebuilder that referenced this issue Apr 19, 2022
Set appropriate defaults from interval, timeout and retries when
processing a Containerfile with build format as docker.

See: https://docs.docker.com/engine/reference/builder/#healthcheck
Closes: containers/podman#13912

Signed-off-by: Aditya R <arajan@redhat.com>
@flouthoc
Collaborator

Hi @nvllsvm, thanks for creating the issue. I think this must be fixed in imagebuilder, so it will be addressed by openshift/imagebuilder#225.

@nvllsvm
Author

nvllsvm commented Apr 19, 2022

@flouthoc Is that PR specific to podman build? This issue is also present when pulling an image built with docker build.

@flouthoc
Collaborator

@nvllsvm I have verified your use case where the image is built with podman build, so the above PR fixes that. But we can also set these defaults at the podman level so it works for arbitrary builds. Thanks for mentioning this point.

@flouthoc
Collaborator

@nvllsvm The PR above should close it. I don't know why https://docs.docker.com/engine/reference/builder/#healthcheck documents this as a Dockerfile default when it is actually configured at the manager level.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023