Docker 1.7 cannot mount secrets #3072

Closed · liggitt opened this issue Jun 10, 2015 · 142 comments
Labels: component/auth, kind/bug, priority/P2

@liggitt (Contributor) commented Jun 10, 2015

When we started using secrets for deployments, we noticed that containers are not able to read mounted secrets.

The pod definitions contain Volume and VolumeMount definitions, and docker inspect shows the volumes as expected, but the container cannot read files from the mount point.

This surfaces (in the case of the deployer pod) as this error:

F0610 18:32:48.935073       1 deployer.go:65] User "system:anonymous" cannot get replicationcontrollers in project "myproject"

docker inspect <container> shows the volume mount:

...
        "Env": [
...
            "BEARER_TOKEN_FILE=/var/run/secrets/kubernetes.io/serviceaccount/token",
...
    "HostConfig": {
        "Binds": [
            "/openshift.local.volumes/pods/12f168c2-0fad-11e5-a1f9-525400553cbb/volumes/kubernetes.io~secret/deployer-token-2jxjw:/var/run/secrets/kubernetes.io/serviceaccount:ro",
...
        ],
...
    "Volumes": {
...
        "/var/run/secrets/kubernetes.io/serviceaccount": "/openshift.local.volumes/pods/12f168c2-0fad-11e5-a1f9-525400553cbb/volumes/kubernetes.io~secret/deployer-token-2jxjw"
    },
    "VolumesRW": {
...
        "/var/run/secrets/kubernetes.io/serviceaccount": false
    },
    "VolumesRelabel": {
...
        "/var/run/secrets/kubernetes.io/serviceaccount": "ro"
    }
...
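
For context, those binds come from the pod's Volume and VolumeMount definitions. As a rough sketch, the injected service-account volume is equivalent to a pod fragment like the following (written to a file here only for illustration; the container name is an assumption, the secret name matches the token above):

# Sketch only: approximate pod fragment behind the bind shown above.
cat <<'EOF' > pod-fragment.yaml
spec:
  containers:
  - name: deployer                      # illustrative container name
    volumeMounts:
    - name: deployer-token-2jxjw        # mounts the secret volume below
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: deployer-token-2jxjw
    secret:
      secretName: deployer-token-2jxjw  # the service-account token secret
EOF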
@liggitt (Contributor, Author) commented Jun 10, 2015

@csrwng @smarterclayton was there a fix for the boot2docker tmpfs issue?

@smarterclayton (Contributor) commented Jun 10, 2015

The containerized flag in the kubelet should allow you to mount.


@liggitt (Contributor, Author) commented Jun 10, 2015

Curious that the kubelet doesn't complain about creating the mount.

@gravis commented Jun 10, 2015

What is that ~ in kubernetes.io~secret?

"/var/run/secrets/kubernetes.io/serviceaccount": "/openshift.local.volumes/pods/12f168c2-0fad-11e5-a1f9-525400553cbb/volumes/kubernetes.io~secret/deployer-token-2jxjw"

@sspeiche (Contributor) commented Jun 10, 2015

Looks like this is what is keeping me from docker pulling the latest images and having the build publish successfully to the registry. My happy-path dev experience based on Docker-launched Origin isn't happy :(

@smarterclayton (Contributor) commented Jun 10, 2015

I have a TODO to fix this: basically we need to set the containerized flag, then add it to the e2e tests so it doesn't break.


@liggitt liggitt assigned smarterclayton and unassigned liggitt Jun 10, 2015
@gravis commented Jun 11, 2015

Any workaround available for this, until it's fixed for good?

@smarterclayton (Contributor) commented Jun 11, 2015

You have to write out a node config file and then set a kubeletArguments entry of "containerized" with "true" as the argument (you need to specify it as a nested string array in the YAML; kubeletArguments is a map[string][]string):

kubeletArguments:
  containerized:
  - "true"
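
A sketch of how this might be applied when Origin normally generates its own config, assuming the --write-config and --master-config/--node-config flags available in builds of that era (paths and <hostname> are illustrative):

# Write the config files out once instead of autogenerating them at start:
openshift start --write-config=/var/lib/origin/openshift.local.config

# Add the kubeletArguments stanza above to the generated node-config.yaml,
# then start from the edited configs:
openshift start \
  --master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml \
  --node-config=/var/lib/origin/openshift.local.config/node-<hostname>/node-config.yaml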


@gravis commented Jun 11, 2015

I can't write a different node config, as it is created when the container starts :)
Any plan to update the v0.6 docker image with this?
Thanks

@smarterclayton (Contributor) commented Jun 11, 2015

It'll probably be in 0.6.1


@gravis commented Jun 11, 2015

Ok thanks. The sooner the better, we're stuck with this :)

@smarterclayton (Contributor) commented Jun 11, 2015

Try #3112: you'll need to build your own openshift/origin image from the branch with hack/build-release.sh and then hack/build-images.sh. I'm still testing it myself.
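
For anyone following along, a sketch of those build steps (the pull/<n>/head ref is standard GitHub; the local branch name is illustrative):

# Fetch the PR branch and rebuild the Origin binaries and images locally.
git clone https://github.com/openshift/origin.git && cd origin
git fetch origin pull/3112/head:pr-3112 && git checkout pr-3112
hack/build-release.sh   # cross-compiles the binaries in a build container
hack/build-images.sh    # packages them into the openshift/origin images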


@gravis commented Jun 11, 2015

👍 will test that. Thanks!

@gravis commented Jun 11, 2015

Rah, I can't compile a new version using boot2docker:

++ Building go targets for linux/amd64: cmd/openshift
        # github.com/openshift/origin/cmd/openshift
/usr/lib/golang/pkg/tool/linux_amd64/6l: running gcc failed: Cannot allocate memory

I could raise the memory on VirtualBox, but that implies destroying the current VM, and I can't remove everything... I'll wait for your tests, then. Let me know if the new image can be pulled from somewhere.
Thanks

@gravis commented Jun 12, 2015

I have just rebuilt the image from master, and the registry won't start either:

W0612 18:56:12.839622       1 container_manager_linux.go:68] [ContainerManager] Failed to ensure Docker is in a container: failed to find pid of Docker container: exit status 1
E0612 18:56:17.800696       1 kubelet.go:1111] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 1; skipping pod
E0612 18:56:17.817346       1 pod_workers.go:108] Error syncing pod 79031390-1134-11e5-874a-8277bc1719bf, skipping: exit status 1

I'm running openshift with:

$ docker run -d -name "origin" \
 --privileged --net=host \
 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw \
 openshift/origin start --public-master=$(boot2docker ip)
$ docker run -it --rm openshift/origin:latest version
openshift v0.6-179-gcc71b54
kubernetes v0.17.1-804-g496be63

Should I open another issue?

@smarterclayton (Contributor) commented Jun 12, 2015

Can you repro with --loglevel=5 and look for the same log line? It should print the mount output.

Did you rebuild the base images as well?
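
For reference, that would be the earlier invocation with the flag appended (a sketch; only --loglevel=5 is new):

docker run -d --name "origin" \
  --privileged --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw \
  openshift/origin start --public-master=$(boot2docker ip) --loglevel=5
docker logs -f origin 2>&1 | grep -i mount   # watch for the mount output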


@gravis commented Jun 12, 2015

I0612 20:06:36.080128       1 empty_dir_linux.go:38] Determining mount medium of /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82
I0612 20:06:36.092700       1 nsenter_mount.go:139] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/findmnt -o target --noheadings --target /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82]
I0612 20:06:36.133421       1 empty_dir_linux.go:48] Statfs_t of %v: %+v/var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82{1635083891 4096 4762473 3929050 3681369 1218224 1171153 {[0 0]} 242 4096 4128 [0 0 0 0]}
I0612 20:06:36.133538       1 docker.go:321] Docker Container: /origin is not managed by kubelet.
I0612 20:06:36.133534       1 empty_dir.go:202] pod 873a34bc-113e-11e5-bf0e-22549f88d0e6: mounting tmpfs for volume not-used with opts []
I0612 20:06:36.133580       1 nsenter_mount.go:79] nsenter Mounting tmpfs /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82 tmpfs []
I0612 20:06:36.133612       1 nsenter_mount.go:82] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/mount -t tmpfs -o  tmpfs /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82]
I0612 20:06:36.176765       1 nsenter_mount.go:86] Output from mount command: nsenter: failed to execute /usr/bin/mount: No such file or directory
E0612 20:06:36.176972       1 kubelet.go:1111] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 1; skipping pod
I0612 20:06:36.177006       1 kubelet.go:2051] Generating status for "docker-registry-1-deploy_default"
I0612 20:06:36.177512       1 server.go:569] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"docker-registry-1-deploy", UID:"873a34bc-113e-11e5-bf0e-22549f88d0e6", APIVersion:"v1", ResourceVersion:"182", FieldPath:""}): reason: 'failedMount' Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 1
I0612 20:06:36.196478       1 kubelet.go:1990] pod waiting > 0, pending
E0612 20:06:36.196881       1 pod_workers.go:108] Error syncing pod 873a34bc-113e-11e5-bf0e-22549f88d0e6, skipping: exit status 1
I0612 20:06:36.197236       1 server.go:569] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"docker-registry-1-deploy", UID:"873a34bc-113e-11e5-bf0e-22549f88d0e6", APIVersion:"v1", ResourceVersion:"182", FieldPath:""}): reason: 'failedSync' Error syncing pod, skipping: exit status 1

The relevant part, I think: Output from mount command: nsenter: failed to execute /usr/bin/mount: No such file or directory
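
A quick way to confirm what the nsenter wrapper will actually find in the host's mount namespace, run from inside the origin container (which has the host root bind-mounted at /rootfs; boot2docker ships mount as /bin/mount):

# Does the host's PID 1 mount namespace have mount where the kubelet expects?
nsenter --mount=/rootfs/proc/1/ns/mnt -- which mount
nsenter --mount=/rootfs/proc/1/ns/mnt -- ls -l /bin/mount /usr/bin/mount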

@gravis commented Jun 12, 2015

mount is available in the container:

[root@boot2docker openshift]# ls /usr/bin/mount
/usr/bin/mount
[root@boot2docker openshift]# which mount
/usr/bin/mount

@gravis commented Jun 12, 2015

If it's the host mount command, it's /bin/mount, not /usr/bin/mount. You should of course use which mount instead of absolute paths.

@gravis commented Jun 12, 2015

After aliasing mount on the host, I get this:

I0612 20:17:48.342643       1 nsenter_mount.go:82] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/mount -t tmpfs -o  tmpfs /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82]
I0612 20:17:48.353253       1 nsenter_mount.go:86] Output from mount command: mount: mounting tmpfs on /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82 failed: No such file or directory
E0612 20:17:48.353584       1 kubelet.go:1111] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 255; skipping pod

@luxas commented Mar 4, 2016

It says it's shared:

83 15 179:2 /var/lib/kubelet /var/lib/kubelet rw,noatime shared:1 - ext4 /dev/root rw,data=ordered

Did you get different output?
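
For the record, propagation can also be read directly with findmnt's PROPAGATION column (assuming a reasonably recent util-linux):

# Prints shared/slave/private for the mount containing the kubelet dir.
findmnt -o TARGET,PROPAGATION --target /var/lib/kubelet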

@csrwng (Contributor) commented Mar 4, 2016

No, that's what it looks like for me as well.

@luxas commented Mar 4, 2016

Does anyone have an idea why Docker doesn't validate it?

@gravis commented Mar 4, 2016

Do you use systemd to run docker?

@luxas commented Mar 5, 2016

@gravis Yeah

/usr/bin/docker daemon -H unix:///var/run/docker.sock -s overlay --exec-opt native.cgroupdriver=cgroupfs

@gravis commented Mar 5, 2016

It's a trap!

Make sure your service file contains the line MountFlags=slave:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
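
After changing the unit (or adding a drop-in), reload systemd and restart Docker so the daemon is re-launched with the new mount propagation:

systemctl daemon-reload
systemctl restart docker
systemctl show -p MountFlags docker   # verify the effective setting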

@marun (Contributor) commented Mar 5, 2016

systemd defaults MountFlags to 'shared', so mount propagation (and secrets) will also work if you remove MountFlags entirely or set it to the empty string. A nice way to do this is with a drop-in unit file (e.g. /etc/systemd/system/docker.service.d/clear_mount_propagation.conf), which overrides the setting without modifying the unit file directly:

[Service]
MountFlags=
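
A sketch of creating that drop-in from the shell and applying it:

mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' > /etc/systemd/system/docker.service.d/clear_mount_propagation.conf
[Service]
MountFlags=
EOF
systemctl daemon-reload && systemctl restart docker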

@luxas commented Mar 5, 2016

@marun @gravis @csrwng Thanks for your help! I finally managed to run it... on a Raspberry Pi! I didn't mention it, but all these commands were run on my Pi. Resetting MountFlags= was the solution to my problem. Now we've gotten rid of --containerized in kubernetes-on-arm 👍


@liggitt (Contributor, Author) commented May 10, 2016

@pmorie is this still valid?

@klaus commented Jul 4, 2016

As of today, on Debian stretch/sid this is still valid. The symptom is a working cluster with secrets not being deployed to the pods; it shows up as kube-dns failing to fully start.
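
One quick check for that symptom (a sketch; the <kube-dns-pod> placeholder is whatever name kubectl reports):

# On an affected node the token directory comes up empty or missing.
kubectl get pods -n kube-system | grep kube-dns
kubectl exec -n kube-system <kube-dns-pod> -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount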

@smarterclayton smarterclayton modified the milestones: 1.2.x, 1.3.0 Jul 12, 2016
@liggitt liggitt modified the milestones: 1.3.0, 1.4.0 Sep 1, 2016
@smarterclayton smarterclayton modified the milestones: 1.4.0, 1.5.0 Jan 31, 2017
@smarterclayton smarterclayton modified the milestones: 1.5.0, 1.6.0 Mar 12, 2017
@liggitt (Contributor, Author) commented Apr 28, 2017

Closing due to age and lack of activity.

@liggitt liggitt closed this as completed Apr 28, 2017