
Restoring a checkpoint panics (in pod with infra container) #8026

Closed
lukts30 opened this issue Oct 14, 2020 · 0 comments · Fixed by #8030
lukts30 commented Oct 14, 2020

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Description

Restoring a checkpoint panics when the container's pod was created with the infra container enabled (the default, --infra=true).

Steps to reproduce the issue:

  1. Run the CRIU Podman example (a simple counter loop) in a pod:
root@debian10pod:~# podman pod create --name test
root@debian10pod:~# podman run -d --pod test busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
root@debian10pod:~# podman run -d --pod test busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
root@debian10pod:~# podman ps
CONTAINER ID  IMAGE                             COMMAND               CREATED         STATUS             PORTS   NAMES
e4764306e721  docker.io/library/busybox:latest  /bin/sh -c i=0; w...  10 seconds ago  Up 10 seconds ago          fervent_poincare
c9b1ef07799c  docker.io/library/busybox:latest  /bin/sh -c i=0; w...  12 seconds ago  Up 11 seconds ago          affectionate_nightingale
3b283845edd4  k8s.gcr.io/pause:3.2                                    22 seconds ago  Up 12 seconds ago          45b7863f398a-infra
root@debian10pod:~# podman container checkpoint e4764306e721
e4764306e72125f7fd84a3c886904f07473009b072c3e553b9492cd7180f6c28
root@debian10pod:~# podman container restore e4764306e721
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
github.com/containers/podman/libpod.(*Container).restore(0xc000230000, 0x1e4b160, 0xc000130020, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /usr/src/packages/BUILD/src/github.com/containers/podman/libpod/container_internal_linux.go:990 +0x244a
github.com/containers/podman/libpod.(*Container).Restore(0xc000230000, 0x1e4b160, 0xc000130020, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /usr/src/packages/BUILD/src/github.com/containers/podman/libpod/container_api.go:715 +0x174
github.com/containers/podman/pkg/domain/infra/abi.(*ContainerEngine).ContainerRestore(0xc000010c60, 0x1e4b160, 0xc000130020, 0xc00051e4d0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
        /usr/src/packages/BUILD/src/github.com/containers/podman/pkg/domain/infra/abi/containers.go:541 +0x24e
github.com/containers/podman/cmd/podman/containers.restore(0x2be0000, 0xc00051e4d0, 0x1, 0x1, 0x0, 0x0)
        /usr/src/packages/BUILD/src/github.com/containers/podman/cmd/podman/containers/restore.go:88 +0x1ad
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).execute(0x2be0000, 0xc000136030, 0x1, 0x1, 0x2be0000, 0xc000136030)
        /usr/src/packages/BUILD/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:838 +0x453
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x2bf4040, 0xc000130020, 0x18a4d60, 0x2c9b990)
        /usr/src/packages/BUILD/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:943 +0x317
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /usr/src/packages/BUILD/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:883
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).ExecuteContext(...)
        /usr/src/packages/BUILD/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:876
main.Execute()
        /usr/src/packages/BUILD/src/github.com/containers/podman/cmd/podman/root.go:86 +0xec
main.main()
        /usr/src/packages/BUILD/src/github.com/containers/podman/cmd/podman/main.go:77 +0x18c
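The panic is Go's runtime bounds check firing on an empty slice: because the container shares its pod's infra-container network, its own network status slice is empty, and indexing element 0 fails. A minimal standalone sketch of the failure mode (the type here is a hypothetical stand-in, not Podman's actual network status type):

```go
package main

import "fmt"

// netStatus stands in for a per-container CNI network status entry; the
// real type lives in Podman's vendored CNI packages (hypothetical here).
type netStatus struct{ IP string }

func main() {
	defer func() {
		// Recover so the sketch exits cleanly while showing the same
		// runtime error text seen in the trace above.
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	var status []netStatus    // empty: networking belongs to the infra container
	fmt.Println(status[0].IP) // panics: index out of range [0] with length 0
}
```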

Describe the results you received:
Panics when restoring a container.

Describe the results you expected:
The restore should not panic. When the pod is created with podman pod create --name test --infra=false, checkpoint and restore work fine.

Output of podman version:

Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.14
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: '
  cpus: 4
  distribution:
    distribution: debian
    version: "10"
  eventLogger: journald
  hostname: debian10pod
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.8.0-0.bpo.2-amd64
  linkmode: dynamic
  memFree: 3895685120
  memTotal: 4128940032
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0~rc6+dfsg1
      commit: 1.0.0~rc6+dfsg1-3
      spec: 1.0.1
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 1m 29.7s
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 5
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 2.0.0
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.14
  OsArch: linux/amd64
  Version: 2.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

root@debian10pod:~# apt list podman
Listing... Done
podman/unknown,now 2.1.1~2 amd64 [installed]
podman/unknown 2.1.1~2 arm64
podman/unknown 2.1.1~2 armhf
podman/unknown 2.1.1~2 ppc64el

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes

openshift-ci-robot added the kind/bug label on Oct 14, 2020
Luap99 self-assigned this on Oct 15, 2020
Luap99 added the In Progress label on Oct 15, 2020
Luap99 added a commit to Luap99/libpod that referenced this issue Oct 15, 2020
We need to do a length check before we can access the
networkStatus slice by index to prevent a runtime panic.

Fixes containers#8026

Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
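The fix described in the commit message guards the slice access with a length check before indexing. A minimal sketch of that pattern (the helper name and type are hypothetical for illustration, not the actual Podman code):

```go
package main

import "fmt"

// netStatus is a hypothetical stand-in for a CNI network status entry.
type netStatus struct{ IP string }

// firstIP returns the first configured IP, or "" when the container has no
// network status of its own (e.g. it shares its pod's infra-container
// network). The length check is what prevents the runtime panic.
func firstIP(status []netStatus) string {
	if len(status) == 0 {
		return ""
	}
	return status[0].IP
}

func main() {
	fmt.Printf("%q\n", firstIP(nil))                            // ""
	fmt.Printf("%q\n", firstIP([]netStatus{{IP: "10.88.0.2"}})) // "10.88.0.2"
}
```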
Luap99 added a commit to Luap99/libpod that referenced this issue Oct 15, 2020
Luap99 added a commit to Luap99/libpod that referenced this issue Oct 15, 2020
edsantiago pushed a commit to edsantiago/libpod that referenced this issue Nov 4, 2020
github-actions bot added the locked label, locked this issue as resolved, and limited conversation to collaborators on Sep 22, 2023