
Squash docker images to have less layers and size #4170

Merged
merged 1 commit into main from ci/squash on Oct 20, 2024
Conversation

nvuillam
Member

No description provided.

@echoix
Collaborator

echoix commented Oct 20, 2024

Is it possible to not squash all layers at once? Pulling performance will be worse, since the client has to wait for the entire image to download before it can start verifying and then extracting, and any change will invalidate all of the content.

It's possible to change some Dockerfiles so that multiple layer operations are combined, by using multistage builds and copying the result of several layers as one layer. I don't know if there are other ways to avoid ending up with that one big squash.
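A hypothetical sketch of that multistage approach (image names, paths and the downloaded tool are illustrative, not MegaLinter's actual Dockerfile):

```dockerfile
# Build stage: each RUN creates its own layer here, but none of
# these layers end up in the final image.
FROM alpine:3.20 AS build
RUN apk add --no-cache curl \
 && mkdir -p /opt/tools
RUN curl -fsSL -o /opt/tools/some-linter https://example.com/some-linter \
 && chmod +x /opt/tools/some-linter

# Final stage: one COPY --from collapses the build stage's
# layers into a single layer in the result.
FROM alpine:3.20
COPY --from=build /opt/tools /opt/tools
ENV PATH="/opt/tools:${PATH}"
```

This keeps the final image to a handful of layers without squashing everything into one, so pulls can still download and extract layers in parallel.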

@echoix
Collaborator

echoix commented Oct 20, 2024

And size-wise, squashing only helps when space is wasted by deleting files in a later layer, or by changing the same file again later. Last time I checked, we were pretty good at that and weren't wasting much: pretty much only some tiny files touched by tools that we couldn't remove.
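The kind of waste being referred to, as a minimal sketch (URL and file names are illustrative):

```dockerfile
# Wasteful: big.tar.gz is stored in the first layer forever;
# the rm in the second layer only hides it in the final image.
RUN curl -fsSL -o /tmp/big.tar.gz https://example.com/big.tar.gz
RUN tar -xzf /tmp/big.tar.gz -C /opt && rm /tmp/big.tar.gz

# Better: download, extract and delete inside a single layer,
# so the archive never lands in any layer at all.
RUN curl -fsSL -o /tmp/big.tar.gz https://example.com/big.tar.gz \
 && tar -xzf /tmp/big.tar.gz -C /opt \
 && rm /tmp/big.tar.gz
```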

@nvuillam
Member Author

@echoix I don't have other ideas when I see simple docker pulls fail, like this one :/

https://github.com/oxsecurity/megalinter/actions/runs/11427413336/job/31791515470

Contributor

github-actions bot commented Oct 20, 2024

🦙 MegaLinter status: ⚠️ WARNING

| Descriptor | Linter | Files | Fixed | Errors | Elapsed time |
|------------|--------|-------|-------|--------|--------------|
| ✅ API | spectral | 1 | | 0 | 1.15s |
| ⚠️ BASH | bash-exec | 6 | | 1 | 0.02s |
| ✅ BASH | shellcheck | 6 | | 0 | 0.21s |
| ✅ BASH | shfmt | 6 | 0 | 0 | 0.8s |
| ✅ COPYPASTE | jscpd | yes | | no | 3.77s |
| ✅ DOCKERFILE | hadolint | 128 | | 0 | 14.39s |
| ✅ JSON | jsonlint | 20 | | 0 | 0.19s |
| ✅ JSON | v8r | 22 | | 0 | 29.89s |
| ⚠️ MARKDOWN | markdownlint | 266 | 0 | 297 | 34.29s |
| ✅ MARKDOWN | markdown-table-formatter | 266 | 0 | 0 | 154.22s |
| ⚠️ PYTHON | bandit | 212 | | 66 | 3.93s |
| ✅ PYTHON | black | 212 | 0 | 0 | 7.23s |
| ✅ PYTHON | flake8 | 212 | | 0 | 2.96s |
| ✅ PYTHON | isort | 212 | 0 | 0 | 1.61s |
| ✅ PYTHON | mypy | 212 | | 0 | 23.22s |
| ✅ PYTHON | pylint | 212 | | 0 | 33.37s |
| ✅ PYTHON | ruff | 212 | 0 | 0 | 0.86s |
| ✅ REPOSITORY | checkov | yes | | no | 49.14s |
| ✅ REPOSITORY | git_diff | yes | | no | 0.74s |
| ⚠️ REPOSITORY | grype | yes | | 24 | 14.65s |
| ✅ REPOSITORY | secretlint | yes | | no | 16.14s |
| ✅ REPOSITORY | trivy | yes | | no | 24.18s |
| ✅ REPOSITORY | trivy-sbom | yes | | no | 0.56s |
| ⚠️ REPOSITORY | trufflehog | yes | | 1 | 13.5s |
| ✅ SPELL | cspell | 713 | | 0 | 12.34s |
| ⚠️ SPELL | lychee | 348 | | 11 | 58.43s |
| ✅ XML | xmllint | 3 | 0 | 0 | 0.85s |
| ✅ YAML | prettier | 160 | 0 | 0 | 6.05s |
| ✅ YAML | v8r | 102 | | 0 | 195.5s |
| ✅ YAML | yamllint | 161 | | 0 | 2.13s |

See detailed report in MegaLinter reports

MegaLinter is graciously provided by OX Security

@echoix
Collaborator

echoix commented Oct 20, 2024

> @echoix I don't have other ideas when I see simple docker pulls fail, like this one :/
>
> https://github.com/oxsecurity/megalinter/actions/runs/11427413336/job/31791515470

I tried it locally (on the Gitpod instance I was using, not for MegaLinter) and got this too. So there's something else going on.

(screenshot attached)


@nvuillam
Member Author

On a new action run I don't see anything I could call to avoid such an error, that's why I'm trying the squash :/

@nvuillam nvuillam merged commit e13ac87 into main Oct 20, 2024
126 of 128 checks passed
@nvuillam nvuillam deleted the ci/squash branch October 20, 2024 15:31
@echoix
Collaborator

echoix commented Oct 20, 2024

Ah, I see that the overlay2 storage driver has a limit of 125 layers: docker/for-linux#414 (comment)

But how we end up at 125 is probably the real problem. Something recursive probably happened somewhere in the chain, adding layers upon layers.
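A quick way to see how close an image is to that limit is `docker inspect --format '{{len .RootFS.Layers}}' <image>`. The same check can be sketched against an image manifest directly; the manifest dict below is a hypothetical example, not a real MegaLinter image:

```python
# Count filesystem layers in an OCI/Docker image manifest and flag
# images approaching the overlay2 limit discussed above.
OVERLAY2_MAX_LAYERS = 125

def layer_count(manifest: dict) -> int:
    """Return the number of filesystem layers listed in the manifest."""
    return len(manifest.get("layers", []))

def near_limit(manifest: dict, margin: int = 10) -> bool:
    """True when the image is within `margin` layers of the limit."""
    return layer_count(manifest) >= OVERLAY2_MAX_LAYERS - margin

# Hypothetical manifest with 120 layers, i.e. dangerously close.
manifest = {
    "schemaVersion": 2,
    "layers": [{"digest": f"sha256:{i:064x}"} for i in range(120)],
}
print(layer_count(manifest))  # 120
print(near_limit(manifest))   # True
```

A check like this could run in CI before pushing, so a runaway layer count fails fast instead of breaking pulls later.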

@echoix
Collaborator

echoix commented Oct 20, 2024

It seems that the containerd image store can handle these. So using it is a possibility for debugging what is causing so many layers for a worker image.
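For reference, Docker's containerd image store is an opt-in feature toggled in the daemon configuration; a sketch of `/etc/docker/daemon.json` (requires a daemon restart, and existing images are not visible until re-pulled):

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```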
