
Deprecate the old tcp port for docker-env #9229
Open
afbjorklund opened this issue Sep 13, 2020 · 9 comments
Labels: co/runtime/docker (Issues specific to a docker runtime), kind/feature (Categorizes issue or PR as related to a new feature), lifecycle/frozen (Indicates that an issue or PR should not be auto-closed due to staleness), priority/backlog (Higher priority than priority/awaiting-more-evidence)

Comments

afbjorklund (Collaborator) commented Sep 13, 2020

Currently we are using a standalone tcp:// docker daemon, listening on port 2376.

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/anders/.minikube/certs"

We can use ssh:// and connect directly to the unix socket instead, simplifying things.

This could use either the current ssh shell tunnel, or we could use the regular address...

export DOCKER_HOST="ssh://docker@192.168.99.100:22"
ssh-add /home/anders/.minikube/machines/minikube/id_rsa

And when using ssh, we wouldn't have to manage all the extra TLS certificates that the https transport requires:

/home/anders/.minikube/certs
├── ca-key.pem
├── ca.pem
├── cert.pem
└── key.pem

0 directories, 4 files
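For reference, this is what the docker CLI (18.09 or later) does under the hood for an ssh:// host: it shells out to the local ssh binary and runs the hidden docker system dial-stdio command on the remote end to reach the unix socket. Roughly:

ssh docker@192.168.99.100 -- docker system dial-stdio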

This would also allow having the docker daemon socket-activated (on-demand) in the future...

Requirements: Docker 18.09 or later

Note that we already support both methods of connecting, so it can be a gradual change.

Current config: /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
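Since both transports are currently enabled, either endpoint can be checked against the same daemon. A sketch, reusing the IP and paths from the examples above (they may differ on your machine):

DOCKER_HOST=tcp://192.168.99.100:2376 DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=$HOME/.minikube/certs docker version
DOCKER_HOST=ssh://docker@192.168.99.100:22 docker version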

See #9232

@afbjorklund afbjorklund added kind/feature Categorizes issue or PR as related to a new feature. co/runtime/docker Issues specific to a docker runtime labels Sep 13, 2020
@sharifelgamal sharifelgamal added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Sep 16, 2020
afbjorklund (Collaborator, Author) commented Oct 25, 2020

There is now an implementation in PR #9548, to allow you to try this out...

There are still some quirks with this, to be sorted out by Docker upstream.

eval $(./out/minikube docker-env --ssh-host)

You will have to add the key, otherwise it will ask for the password every time.

(screenshot: ssh password authentication prompt)

ssh-add $(./out/minikube ssh-key)

You will have to add the host key too, since there is no setting to "ignore hosts".

(screenshot: ssh host key verification prompt; currently handled by answering "yes")

Podman has variables for this, but Docker doesn't have those features available.


Will look at saving the ssh host key on boot, instead of just disabling that ssh feature:

-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null

But that is a separate feature; docker has no support of its own and delegates everything to ssh...

It has been in the dead docker-machine backlog "for a while": docker/machine#534
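Since the docker CLI delegates to the system ssh binary, those options could also be set per host in ~/.ssh/config instead of being passed on the command line. A sketch, not something minikube sets up for you (host and key path taken from the examples above):

Host 192.168.99.100
    User docker
    IdentityFile ~/.minikube/machines/minikube/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null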

One approach, which is not as secure, would be to get the current host keys using ssh:

ssh-keyscan -p 39499 127.0.0.1 >> ~/.ssh/known_hosts

UPDATE: now implemented as "ssh-host" (since "ssh-key" was already taken)

./out/minikube ssh-host >> ~/.ssh/known_hosts
minikube ssh-host --append-known

afbjorklund (Collaborator, Author) commented

You can make podman use the ssh-agent and known_hosts as well, by removing the variable with the private key and appending secure=True to the host URL:

export CONTAINER_HOST="${CONTAINER_HOST}?secure=True"
unset CONTAINER_SSHKEY

Then it will also talk to the ssh-agent for the identity, and check ~/.ssh/known_hosts for the host key.

See #9535

The podman default is to ignore the host key, and to use the private key at the path given by the variable:

export CONTAINER_HOST="ssh://docker@127.0.0.1:39499/run/podman/podman.sock"
export CONTAINER_SSHKEY="/home/anders/.minikube/machines/minikube/id_rsa"

This makes the environment variables stand-alone, without having to involve ssh-add and ssh-keyscan.
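With just those two variables set, the podman remote client reaches the socket over ssh, for example:

podman --remote version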

afbjorklund (Collaborator, Author) commented

Two issues with the host key handling are:

  1. Each time the port changes, a new line is added to known_hosts (because the [127.0.0.1]:<port> host string changes)
  2. If just doing the simple >> append above, yet another line is added even when it is identical to the existing ones (see the sketch after this list)

The identity key handling is a little smarter.
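One way to address the duplicate appends would be to check whether the current [host]:port is already known before appending, using ssh-keygen -F. A sketch, with the example port from above:

ssh-keygen -F '[127.0.0.1]:39499' > /dev/null || ssh-keyscan -p 39499 127.0.0.1 >> ~/.ssh/known_hosts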

afbjorklund (Collaborator, Author) commented Nov 8, 2020

Added an option to add the key automatically:

minikube docker-env --ssh-add

You can run it at the same time as env, like so:

eval $(minikube docker-env --ssh-host --ssh-add)

This only needs to be run on the first invocation.
But there is no harm in running it every time either.

You can view the current agent identities with ssh-add -l
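If you do want to skip the redundant call, a guarded form is possible. A sketch; it assumes the key comment shown by ssh-add -l contains the minikube key path, which may not hold everywhere:

ssh-add -l | grep -q minikube || eval $(minikube docker-env --ssh-host --ssh-add)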

afbjorklund (Collaborator, Author) commented

Added an option to do this automatically: ssh-host --append-known

Host added: /home/anders/.ssh/known_hosts ([127.0.0.1]:35855)
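Putting the pieces from the comments above together, the whole ssh-based flow is then (a sketch, using the flags introduced above):

eval $(minikube docker-env --ssh-host --ssh-add)   # set DOCKER_HOST=ssh://... and load the identity
minikube ssh-host --append-known                   # record the host key in ~/.ssh/known_hosts
docker ps                                          # now talks to the daemon over ssh, no TLS certs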

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2021
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 11, 2021
@ilya-zuyev ilya-zuyev removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 21, 2021
fejta-bot posted the same stale notice again.

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 20, 2021
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 19, 2021
@sharifelgamal sharifelgamal added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Sep 15, 2021
@spowelljr spowelljr removed the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jan 5, 2022