
Kaniko builds fail on k3d #13051

Closed
nachtmaar opened this issue Jan 12, 2022 · 6 comments

@nachtmaar
Contributor

Description

The Kaniko build fails with the message: kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

Expected result

Kaniko is able to build the function image.

Actual result

Build fails with the following message:

$ kubectl logs myapp-build-zqhk8-7xl6h
kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

Steps to reproduce

kyma provision k3d
kyma deploy -p evaluation

Deploy the following function:

$ cat <<EOF | kubectl apply -f -
apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: ${APP_NAME}
  namespace: ${NAMESPACE}
spec:
  deps: "{ \n  \"name\": \"{${APP_NAME}\",\n  \"version\": \"1.0.0\",\n  \"dependencies\":
    {}\n}"
  source: |
    module.exports = { main: function (event, context) {
        console.log(\`event = \${JSON.stringify(event.data)}\`);
        console.log(\`headers = \${JSON.stringify(event.extensions.request.headers)}\`);
    } }
EOF

It stays in a failed state forever:

$ kubectl get functions.serverless.kyma-project.io myapp
NAME    CONFIGURED   BUILT   RUNNING   RUNTIME    VERSION   AGE
myapp   True         False             nodejs14   1         3h15m

The exact function CR:

$ kubectl get functions.serverless.kyma-project.io myapp -oyaml
apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serverless.kyma-project.io/v1alpha1","kind":"Function","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"deps":"{ \n  \"name\": \"{myapp\",\n  \"version\": \"1.0.0\",\n  \"dependencies\": {}\n}","source":"module.exports = { main: function (event, context) {\n    console.log(`event = ${JSON.stringify(event.data)}`);\n    console.log(`headers = ${JSON.stringify(event.extensions.request.headers)}`);\n} }\n"}}
  creationTimestamp: "2022-01-12T13:50:18Z"
  generation: 1
  name: myapp
  namespace: default
  resourceVersion: "13892"
  uid: 30a8e36d-d47c-490f-8688-7ff9fcb26119
spec:
  buildResources:
    limits:
      cpu: 700m
      memory: 700Mi
    requests:
      cpu: 200m
      memory: 200Mi
  deps: "{ \n  \"name\": \"{myapp\",\n  \"version\": \"1.0.0\",\n  \"dependencies\":
    {}\n}"
  maxReplicas: 1
  minReplicas: 1
  resources:
    limits:
      cpu: 25m
      memory: 32Mi
    requests:
      cpu: 10m
      memory: 16Mi
  runtime: nodejs14
  source: |
    module.exports = { main: function (event, context) {
        console.log(`event = ${JSON.stringify(event.data)}`);
        console.log(`headers = ${JSON.stringify(event.extensions.request.headers)}`);
    } }
status:
  conditions:
  - lastTransitionTime: "2022-01-12T13:50:45Z"
    message: Job myapp-build-zqhk8 failed
    reason: JobFailed
    status: "False"
    type: BuildReady
  - lastTransitionTime: "2022-01-12T13:50:18Z"
    message: ConfigMap myapp-cc98d created
    reason: ConfigMapCreated
    status: "True"
    type: ConfigurationReady
  runtime: nodejs14
  source: |
    module.exports = { main: function (event, context) {
        console.log(`event = ${JSON.stringify(event.data)}`);
        console.log(`headers = ${JSON.stringify(event.extensions.request.headers)}`);
    } }

Versions:

$ kyma version
Kyma CLI version: 2.0.0
Kyma cluster version: 2.0.0

$ k3d version
k3d version v5.2.1
k3s version v1.21.7-k3s1 (default)

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:08:39Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11+k3s1", GitCommit:"b247e74e36248fc047e5253fe7f94c6db07a5e1e", GitTreeState:"clean", BuildDate:"2021-09-20T16:15:27Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.20) exceeds the supported minor version skew of +/-1

Docker Desktop 4.3.2 (72729) on Mac OS 12.1

Troubleshooting

I discovered the following issue: https://githubmate.com/repo/GoogleContainerTools/kaniko/issues/1592

Based on this, the problem goes away when the container=docker environment variable is set.

Lines starting with $ are to be run on the host (my Mac OS machine), lines starting with # are to be run in the container:

$ docker run --rm -ti --entrypoint "" gcr.io/kaniko-project/executor:v1.5.1-debug sh
#  echo "FROM alpine:latest" > Dockerfile

# /kaniko/executor --dockerfile Dockerfile --destination /dev/null
kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

# container=docker /kaniko/executor --dockerfile Dockerfile --destination /dev/null
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "/dev/null": GET https://index.docker.io/v2/dev/null/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:dev/null Type:repository]]
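
For reference, here is a minimal sketch of how the same workaround could be applied to a standalone Kaniko Pod running in the cluster. The Pod name and arguments are made up for illustration and the build-context wiring is omitted; this is not how Kyma templates its build jobs, the only relevant part is the container=docker environment variable:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-env-workaround    # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: executor
    image: gcr.io/kaniko-project/executor:v1.5.1
    # build-context volume omitted for brevity; --no-push avoids needing registry credentials
    args: ["--dockerfile=Dockerfile", "--context=dir:///workspace", "--no-push"]
    env:
    - name: container            # this is what makes the "should only be run inside of a container" check pass
      value: docker
EOF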
@nachtmaar nachtmaar added the area/serverless Issues or PRs related to serverless label Jan 12, 2022
@kwiatekus kwiatekus added this to the 2.1 milestone Jan 12, 2022
@kwiatekus
Contributor

Does k3d provision with containerd as the container runtime?
I don't think we support it yet.

@nachtmaar
Contributor Author

nachtmaar commented Jan 12, 2022

The issue is also reproducible by Dmitry Buslov. He is using the following versions:

Kyma CLI version: 2.0.0
___________________________
k3d version v5.2.2
k3s version v1.21.7-k3s1 (default)
___________________________
Docker
Client:
 Cloud integration: v1.0.22
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.10
 Git commit:        dea9396
 Built:             Thu Nov 18 00:36:09 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:35:39 2021
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

@pPrecel
Contributor

pPrecel commented Jan 13, 2022

If I understand it correctly, this issue occurs only on older Kaniko versions and is already fixed in the latest revision of v1.7.0. I'm going to bump the Kaniko image here.

The example posted in the troubleshooting section uses v1.5.1 ($ docker run --rm -ti --entrypoint "" gcr.io/kaniko-project/executor:v1.5.1-debug sh), so it can't work. On Kyma's main branch we are using v1.7.0, but probably without the fix; it will be changed after my PR is merged.
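
As a side note, to check which executor image a given build actually used, something like the following should work against the failing Pod from the description (the executor may run as a regular or an init container depending on how the job is templated, hence both lists are printed):

$ kubectl get pod myapp-build-zqhk8-7xl6h \
    -o jsonpath='{.spec.initContainers[*].image}{"\n"}{.spec.containers[*].image}{"\n"}'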

Links:
Main Kaniko issue - GoogleContainerTools/kaniko#1592
Two Kaniko fixes:

@nachtmaar
Contributor Author

@pPrecel You are right, the problem does not exist with Kaniko v1.7.0:

$ docker run --rm -ti --entrypoint "" gcr.io/kaniko-project/executor:v1.7.0-debug sh
Unable to find image 'gcr.io/kaniko-project/executor:v1.7.0-debug' locally
v1.7.0-debug: Pulling from kaniko-project/executor
b9f241f9d2e8: Pull complete
1de4761794ba: Pull complete
585edf44cc3b: Pull complete
dd687676e859: Pull complete
5629f641d093: Pull complete
8c9c08846cad: Pull complete
f77ae48936b0: Pull complete
9fa64f572b2c: Pull complete
84fdf0b3d632: Pull complete
45c12e3304a9: Pull complete
97702ca62d42: Pull complete
Digest: sha256:88dacc7ea3f5c04709eae96776693c717869405364b19d6e78850fe54c63c6a2
Status: Downloaded newer image for gcr.io/kaniko-project/executor:v1.7.0-debug

/workspace # echo "FROM alpine:latest" > Dockerfile
/workspace # /kaniko/executor --dockerfile Dockerfile --destination /dev/null
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "/dev/null": GET https://index.docker.io/v2/dev/null/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:dev/null Type:repository]]

@pPrecel
Contributor

pPrecel commented Jan 17, 2022

Together with @nachtmaar, I've tested the new Kaniko version with Kyma on k3d and, unfortunately, it's still not working. This means our case is probably not the same as the issue described in the Troubleshooting section. The problem can be worked around by adding the --force flag to the Kaniko arguments, which turns off the check that Kaniko is running inside a container. I can't prove that the --force flag is safe for all our cases, so I propose to describe this case and how to override the Kaniko arguments during installation (in our documentation).

@kwiatekus
Contributor

We are not entirely sure what the reason could be.
We can clearly see it's not a common case.
We provided a description of how to bypass the Kaniko validation in a dedicated troubleshooting guide:
https://kyma-project.io/docs/kyma/main/04-operation-guides/troubleshooting/svls-04-function-build-failing-on-k3d/
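
For completeness, the override described there boils down to adding --force to the build executor arguments at deploy time. A rough sketch of such an override follows; the exact values key and the default argument list are assumptions based on the serverless chart and may differ between Kyma versions:

$ cat > kaniko-force-override.yaml <<EOF
serverless:
  containers:
    manager:
      envs:
        functionBuildExecutorArgs:
          value: --insecure,--skip-tls-verify,--skip-unused-stages,--log-format=text,--cache=true,--force
EOF
$ kyma deploy -p evaluation --values-file kaniko-force-override.yaml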
