change in v1.20.0 kaniko executor cannot push to ttl.sh because it is not chunking the image #2978

Open
eltorio opened this issue Jan 29, 2024 · 9 comments
Labels
issue/push-fails, issue/413-request-entity-too-large, priority/p0, regression/v1.19.2, regression

Comments

@eltorio

eltorio commented Jan 29, 2024

Actual behavior
Starting with v1.20.0 (v1.19.2 was OK), kaniko cannot push to ttl.sh if the image is large.
ttl.sh (proxied via Cloudflare) answers:

WARN[0091] Error uploading layer to cache: failed to push to destination ttl.sh/sanbox-gitea-dev-1833719228-cache:ccb241236cf364ab6706d3af30c1b59687e62ad7735342adf43f0999d152c41d: PATCH https://ttl.sh/v2/sanbox-gitea-dev-1833719228-cache/blobs/uploads/e9b70497-e8de-48ad-931e-254608b24ed9?_state=REDACTED: unexpected status code 413 Request Entity Too Large: <html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>cloudflare</center>
</body>
</html>

Expected behavior
As in v1.19.2, the image should be uploaded in chunks, as the docker push driver does.
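For context, this is roughly what the two upload styles look like at the registry API level. The sketch below is illustrative only, not kaniko's actual code path: the repository name sanbox-demo, the chunk size, the file names layer.part1 and layer.tar.gz, and the <upload-url>/<layer-digest> placeholders are all hypothetical. The OCI distribution protocol lets a client split a blob across several PATCH requests with Content-Range headers, so each request body stays under a proxy's size limit, whereas a single monolithic PATCH carries the whole layer in one request and is what a body-size-limited proxy rejects with 413.

# Start an upload session (hypothetical repository name)
curl -si -X POST "https://ttl.sh/v2/sanbox-demo/blobs/uploads/"
# -> 202 Accepted with a Location header; use it as <upload-url> below

# Chunked upload: several small PATCH requests, each body well under the proxy limit
curl -X PATCH "<upload-url>" \
  -H "Content-Type: application/octet-stream" \
  -H "Content-Range: 0-10485759" \
  --data-binary @layer.part1
# ...repeat PATCH for the remaining chunks, then finalize the upload:
curl -X PUT "<upload-url>&digest=sha256:<layer-digest>"   # use ? instead of & if the URL has no query string

# Monolithic upload: one PATCH carrying the entire layer, which a proxy with a
# request-body limit (here Cloudflare) answers with 413 once the layer is too big
curl -X PATCH "<upload-url>" -H "Content-Type: application/octet-stream" --data-binary @layer.tar.gz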

To Reproduce
Steps to reproduce the behavior:

  1. With this Dockerfile
  2. Build with:
export NAMESPACE=sandbox
export EXPECTED_REF=ttl.sh/sanbox-gitea-dev-1348421612/gitea_bitnami_custom_tilted:tilt-build-1706522532
export EXPECTED_IMAGE=sanbox-gitea-dev-1348421612/gitea_bitnami_custom_tilted
export EXPECTED_TAG=tilt-build-1706522532
export REGISTRY_HOST=ttl.sh/sanbox-gitea-dev-1348421612
export EXPECTED_REGISTRY=ttl.sh/sanbox-gitea-dev-1348421612
export DOCKER_CACHE_REGISTRY=ttl.sh/sanbox-gitea-dev-ksnke-cache
kubectl -n $NAMESPACE delete pod/kaniko ; \
tar -cvz --exclude "node_modules" --exclude "dkim.rsa" --exclude "private" --exclude "k8s" \
  --exclude ".git" --exclude ".github" --exclude-vcs --exclude ".docker" --exclude "_sensitive_datas" \
  -f - ./Dockerfile libgitea.sh gitea-env.sh ./busybox autobackup.sh | \
kubectl -n $NAMESPACE run kaniko --image=gcr.io/kaniko-project/executor:latest --stdin=true --command -- \
  /kaniko/executor -v info --dockerfile=Dockerfile --context=tar://stdin --destination=$EXPECTED_REF \
  --cache=true --cache-ttl=4h --cache-repo=$DOCKER_CACHE_REGISTRY
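As a workaround until this is fixed, pinning the executor back to v1.19.2 in the same command avoids the 413; this is just the rollback several commenters below report working, with only the image tag changed:

kubectl -n $NAMESPACE run kaniko \
  --image=gcr.io/kaniko-project/executor:v1.19.2 --stdin=true --command -- \
  /kaniko/executor -v info --dockerfile=Dockerfile --context=tar://stdin --destination=$EXPECTED_REF \
  --cache=true --cache-ttl=4h --cache-repo=$DOCKER_CACHE_REGISTRY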

Additional Information

  • Dockerfile
  • Build Context
    kaniko ran in a Kubernetes 1.29.1 cluster
  • Kaniko Image (fully qualified with digest)
    Image: gcr.io/kaniko-project/executor:latest
    Image ID: gcr.io/kaniko-project/executor@sha256:a3281846b3549af9987c7469eb340126ca6d0d47e060f8f16382f468c5610076
  • Working kaniko:
    Image: gcr.io/kaniko-project/executor:v1.19.2
    Image ID: gcr.io/kaniko-project/executor@sha256:f913ab076f92f1bdca336ab8514fea6e76f0311e52459cce5ec090c120885c8b

I use Kaniko in my development process with Tilt.dev.
My current repo is https://github.com/highcanfly-club/gitea-bitnami-custom .
You can clone it, install Tilt.dev, replace v1.19.2 with the failing image, and launch tilt up.
Triage Notes for the Maintainers

  • Please check if this is a new feature you are proposing: no
  • Please check if the build works in docker but not in kaniko: yes
  • Please check if this error is seen when you use the --cache flag: yes
  • Please check if your dockerfile is a multistage dockerfile: yes
@Silentstrike46

Had the same issue with DigitalOcean: my usual docker image builds correctly locally and pushes correctly with the v1.19.2 version of Kaniko, but with the latest version (v1.20.0) I get the exact same "413 Request Entity Too Large" error when trying to push to DigitalOcean's container registry.

@briacl

briacl commented Jan 30, 2024

Same here with v1.20.0, @Silentstrike46; for me as well, v1.19.2 still works correctly.

@herokukms

@Silentstrike46 same here.
@eltorio thanks a lot. I was thinking my Dockerfile was wrong, but with your trick I tried docker push and it works, so I rolled back to Kaniko v1.19.2 and it works.

@aaron-prindle
Collaborator

aaron-prindle commented Jan 30, 2024

Thank you for flagging this. Below is a list of all of the PRs that went in from v1.19.2 to v1.20.0:

Fixes:
fix: prevent extra snapshot with --use-new-run https://github.com/GoogleContainerTools/kaniko/pull/2943

Doc Updates:
docs: fixed wrong example in README.md https://github.com/GoogleContainerTools/kaniko/pull/2931


Dependency Updates:

chore(deps): bump golang.org/x/oauth2 from 0.15.0 to 0.16.0 https://github.com/GoogleContainerTools/kaniko/pull/2948

chore(deps): bump google-github-actions/auth from 2.0.0 to 2.0.1 https://github.com/GoogleContainerTools/kaniko/pull/2947

chore(deps): bump golang.org/x/sync from 0.5.0 to 0.6.0 https://github.com/GoogleContainerTools/kaniko/pull/2950

chore(deps): bump github.com/containerd/containerd from 1.7.11 to 1.7.12 https://github.com/GoogleContainerTools/kaniko/pull/2951

chore(deps): replace github.com/Azure/azure-storage-blob-go => github.com/Azure/azure-sdk-for-go/sdk/storage/azblob https://github.com/GoogleContainerTools/kaniko/pull/2945

chore(deps): bump golang.org/x/sys from 0.15.0 to 0.16.0 https://github.com/GoogleContainerTools/kaniko/pull/2936

chore(deps): bump google.golang.org/api from 0.154.0 to 0.155.0 https://github.com/GoogleContainerTools/kaniko/pull/2937

chore(deps): bump github.com/cloudflare/circl from 1.3.3 to 1.3.7 https://github.com/GoogleContainerTools/kaniko/pull/2942

chore(deps): bump github.com/aws/aws-sdk-go-v2/feature/s3/manager from 1.15.9 to 1.15.11 https://github.com/GoogleContainerTools/kaniko/pull/2939

chore(deps): bump AdityaGarg8/remove-unwanted-software from 1 to 2 https://github.com/GoogleContainerTools/kaniko/pull/2940

chore(deps): bump github.com/aws/aws-sdk-go-v2/service/s3 from 1.47.7 to 1.47.8 https://github.com/GoogleContainerTools/kaniko/pull/2932

chore(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.26.2 to 1.26.3 https://github.com/GoogleContainerTools/kaniko/pull/2933

chore(deps): bump github.com/google/go-containerregistry from 0.15.2 to 0.17.0 https://github.com/GoogleContainerTools/kaniko/pull/2924

chore(deps): bump github.com/aws/aws-sdk-go-v2/feature/s3/manager from 1.15.7 to 1.15.9 https://github.com/GoogleContainerTools/kaniko/pull/2926

chore(deps): bump google-github-actions/setup-gcloud from 2.0.0 to 2.0.1 https://github.com/GoogleContainerTools/kaniko/pull/2927

From these changes it isn't obvious to me what might have caused this. The only thing that would make sense to me would possibly be the dep updates for containerd or go-containerregistry. Does anyone here know if those libs are experiencing a similar issue w/ their later versions?
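If anyone wants to help narrow it down, one option is to rebuild the executor from the v1.20.0 tree with go-containerregistry pinned back to v0.15.2 (the version in use before the bump in #2924) and check whether the 413 disappears. A rough sketch, assuming a local Docker build; the deploy/Dockerfile path is from memory and may need adjusting:

git clone https://github.com/GoogleContainerTools/kaniko && cd kaniko
git checkout v1.20.0
# Pin go-containerregistry back to the version used before the bump
go mod edit -replace github.com/google/go-containerregistry=github.com/google/go-containerregistry@v0.15.2
go mod tidy
# Rebuild the executor image (adjust the Dockerfile path to the repo layout if needed)
docker build -t kaniko-executor:gcr-bisect -f deploy/Dockerfile .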

@aaron-prindle added the regression, regression/v1.19.2, priority/p0, issue/push-fails, and issue/413-request-entity-too-large labels on Jan 30, 2024
@eltorio
Author

eltorio commented Jan 30, 2024

First, I'll check tomorrow this change in go-containerregistry: Content-Length for blob uploads in v0.17.
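A quick way to inspect that change is to diff the remote writer (the code that issues the blob-upload PATCH requests) between the version kaniko v1.19.2 used and the one v1.20.0 bumped to; a sketch follows, with the writer.go path given from memory and possibly out of date:

git clone https://github.com/google/go-containerregistry && cd go-containerregistry
# Compare the blob-upload code between the version bundled with kaniko v1.19.2 (v0.15.2) and v1.20.0 (v0.17.0)
git diff v0.15.2 v0.17.0 -- pkg/v1/remote/writer.go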

@aaron-prindle
Collaborator

@eltorio were you able to identify if this is an issue w/ recent changes to go-containerregistry?

@olivier-wd

Hi,
I'm also unable to push images generated with the latest releases of kaniko, as Cloudflare blocks me with a 413 error (entity too large). Switching kaniko back from the debug tag to 1.19.2 solves the issue.

@boxexchanger

Same in v1.21.1

@zgqq

zgqq commented Jul 17, 2024

Not resolved. [v1.23.2]
