
Hitting "Error committing the finished image" "no space left on device" due to nonoptimized use of disk #3846

Closed
oblitum opened this issue Aug 18, 2019 · 9 comments
Labels
kind/bug, locked - please file new issue/PR

Comments

oblitum commented Aug 18, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

  1. Have 50GB left on disk

  2. Have a Dockerfile whose second step produces around 7.5GB of data, e.g. by installing many packages

  3. Have more than 5 additional steps that require essentially no space at all

Describe the results you received:

I failed many times to build an image that I had no issues building with docker. I noticed that on each attempt podman took too long to pick up each cached step after the second, while docker did it instantly. I realized that every time podman was spending time picking up a step's cache, it was producing a copy of around 7.5GB under ~/.local/share/containers/storage/vfs/dir, for every step following the second, even though those steps didn't produce any data; only the second did. It looks like podman is simply copying the second step's data for every following step. There's not enough space left on disk to finish this, so it hits "no space left on device".

Describe the results you expected:

I was hoping for podman to work like docker and not require 10x the final image's size in disk space to produce it, nor was I expecting trivial steps to be slow to finish because of the above.
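
In case it helps to diagnose, a rough way to confirm the driver in use and see where the space goes (paths are from my rootless setup and may differ on other machines):

podman info | grep -i graphdrivername
# GraphDriverName: vfs   (vfs stores every layer as a full copy)
du -sh ~/.local/share/containers/storage/vfs/dir/*
# roughly one ~7.5GB entry per step after the second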

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.5.1

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.8
  podman version: 1.5.1
host:
  BuildahVersion: 1.10.1
  Conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.0, commit: e217fdff82e0b1a6184a28c43043a4065083407f'
  Distribution:
    distribution: arch
    version: unknown
  MemFree: 6003105792
  MemTotal: 16747802624
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8
      commit: 425e105d5a03fabd737a126ad93d62a9eeede87f
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: leibniz
  kernel: 5.2.9-arch1-1-ARCH
  os: linux
  rootless: true
  uptime: 42m 51.56s
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/francisco/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/francisco/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 9
  RunRoot: /run/user/1000
  VolumePath: /home/francisco/.local/share/containers/storage/volumes
openshift-ci-robot added the kind/bug label Aug 18, 2019

mheon commented Aug 18, 2019 via email


oblitum commented Aug 18, 2019

@mheon I'm a newbie to podman, where should I configure that?


oblitum commented Aug 18, 2019

Is it enough if I just install fuse-overlayfs?


baude commented Aug 18, 2019

Does your ~/.config/containers/storage.conf look like this?

[storage]
  driver = "overlay"
  runroot = "/run/user/1000"
  graphroot = "/home/bbaude/.local/share/containers/storage"
  [storage.options]
    mount_program = "/usr/bin/fuse-overlayfs"

Btw, if you make an adjustment, you may need to remove your current storage (~/.local/share/containers/storage) for the change to take effect.
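
Something along these lines should do it for a rootless setup like the one above (note this discards all existing containers and images, so back up anything you still need first):

podman rm -a     # remove all containers
podman rmi -a    # remove all images
rm -rf ~/.local/share/containers/storage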


oblitum commented Aug 18, 2019

@baude thanks for the tip. After installing fuse-overlayfs it was still using vfs as the driver, so I started anew by removing ~/.config/containers/ and ~/.local/share/containers/. storage.conf is now using "overlay".
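
For anyone following along, the switch can be confirmed with something like:

podman info | grep -i graphdrivername
# expect: GraphDriverName: overlay

fuse-overlayfs should also show up under GraphOptions as the mount program (exact formatting varies between podman versions).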

@ConorSheehan1

@mheon @baude Is there a similar solution for Mac?
The Homebrew version of fuse-overlayfs seems to be Linux-only:

brew install fuse-overlayfs
# fuse-overlayfs: Linux is required for this software.
# libfuse: Linux is required for this software.
# Error: fuse-overlayfs: Unsatisfied requirements failed this build.

Can I use macfuse?
And if so, what would the config look like?


mheon commented Jan 10, 2022

This needs to be configured in the VM, not on your OS X system. It's almost certainly already enabled by default, and the issue is instead that the VM is too small. @baude @ashley-cui do we have a way to resize podman-machine VMs?

@ashley-cui

No way to resize yet; your best bet is to destroy the machine and create a new one with a larger disk.

@ConorSheehan1

Yep, removing the VM and creating a new one with more space worked, thanks!

podman machine rm
podman machine init --disk-size=$way_more_disk
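
For reference, --disk-size takes the size in GB, so a concrete version of the above would look something like this (100 is just an example value, pick whatever fits your build):

podman machine rm
podman machine init --disk-size 100
podman machine start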

github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023