
validate ports provided by user to --ports flag on docker driver #10495

Closed
magnus-larsson opened this issue Feb 17, 2021 · 6 comments · Fixed by #12233
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@magnus-larsson

Summary

I can’t recreate a minikube cluster on WSL2 when using the --ports option to expose ports.
The minikube start command fails when connecting to the various ports that it sets up on the Docker container.

The problem does not occur when I don’t use the --ports option.

Workarounds:

  1. Map the ports used by the previous minikube cluster via an extra --ports option; a new minikube cluster is then created.
  2. Reboot Windows

Environment

  • WSL 2 in Windows 10 1909, 18363.1379
  • Ubuntu 20.04
  • Docker for Windows 3.1.0
  • Minikube 1.17.1

Steps to reproduce the issue:

  1. Reboot Windows without any minikube cluster created

  2. Create minikube cluster

    minikube start --driver=docker --ports=80:80 --ports=443:443
    
  3. Delete the minikube cluster

    minikube delete
    
  4. Recreate the minikube cluster and enable logging to see the error

    minikube start --driver=docker --ports=80:80 --ports=443:443 -v=1 --alsologtostderr
    

    The command now reports errors like:

    I0217 11:05:40.014928    3353 main.go:119] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49160: connect: connection refused
    
  5. WORKAROUND #1: Recreate the minikube cluster, mapping the ports used by the previous minikube cluster

    1. Lookup ports used by minikube

      docker ps
      

      Sample response:

      f73deb28d024   gcr.io/k8s-minikube/kicbase:v0.0.17   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:49160->22/tcp, 127.0.0.1:49159->2376/tcp, 127.0.0.1:49158->5000/tcp, 127.0.0.1:49157->8443/tcp   minikube
      
    2. Recreate the minikube cluster and map the ports used by the previous minikube cluster

      minikube delete
      minikube start --driver=docker --ports=80:80 --ports=443:443 --ports=49157-49160:49157-49160
      

    Sometimes I have to repeat this procedure a couple of times to get rid of the connection refused error.

  6. WORKAROUND #2: Delete the minikube cluster and reboot Windows

    Trying to avoid this workaround, but sometimes it seems to be the only way out of the problem...

NOTE: Recreating the minikube cluster without the --ports option works without any problem:

These commands can be run multiple times without any errors:

minikube delete
minikube start --driver=docker
@priyawadhwa priyawadhwa changed the title Recreate minikube cluster on WSL2 using “minikube delete” and “minikube start --driver=docker --ports...” fails on "connection refused" errors --ports flag fails on minikube start with docker driver and WSL2 Feb 24, 2021
@priyawadhwa

Hey @magnus-larsson, thanks for opening this issue; it seems like a bug with our --ports flag. I'm not super familiar with why this would be happening. @medyagh, would you have any idea about this?

@priyawadhwa priyawadhwa added the kind/support Categorizes issue or PR as a support question. label Feb 24, 2021
@medyagh medyagh added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. triage/needs-information Indicates an issue needs more information in order to work on it. labels Feb 24, 2021
@magnus-larsson
Author

We did some more testing yesterday and noticed that if we map the ports to non-system ports (> 1023), it works without any problems. For example, the following command works fine:

minikube start --driver=docker --ports=8080:80 --ports=8443:443

Without root privileges, we should not be able to map to system ports, I guess?
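For context: on Linux, binding a listening socket to a port below 1024 requires root privileges or the CAP_NET_BIND_SERVICE capability, which is consistent with the behavior described above. A minimal Go sketch (illustrative only, not minikube code) that demonstrates the difference:

```go
package main

import (
	"fmt"
	"net"
)

// tryListen attempts to bind a TCP listener on the given localhost
// port and reports whether the bind succeeded.
func tryListen(port int) error {
	ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
	if err != nil {
		return err
	}
	ln.Close()
	return nil
}

func main() {
	// As a non-root user without CAP_NET_BIND_SERVICE, binding a
	// system port typically fails with a permission error.
	if err := tryListen(80); err != nil {
		fmt.Println("system port:", err)
	}
	// A non-system port (> 1023) works for any user.
	if err := tryListen(8080); err == nil {
		fmt.Println("non-system port: ok")
	}
}
```

Run as a regular user, the first call usually reports a permission error while the second succeeds; run as root, both succeed.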

We also tried sudo minikube start --driver=docker --ports=80:80 --ports=443:443, but it results in the error message The "docker" driver should not be used with root privileges.

Since I registered the issue, we have tried out different versions of the WSL kernel, resulting in different error symptoms when mapping to system ports.

So, we guess there is some problem with the error handling somewhere (in WSL, Docker, and/or Minikube...) when trying to map to system ports without sufficient privileges.

Can minikube be enhanced to throw a proper error message when the user tries to map to system ports without sufficient privileges?
I guess that would have helped us avoid mapping to system ports and getting stuck as described above.

@medyagh
Member

medyagh commented Feb 24, 2021

@magnus-larsson that is a great suggestion! If we provide validation on the ports, it will save a lot of other people the time spent debugging this.
I wonder if this limitation exists on macOS and Linux too.

I would accept any PR that adds validation of the ports specified by the user.
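Such a validation could be sketched along these lines in Go. This is a hypothetical illustration, not the fix that eventually landed: it parses a simple `host:container` mapping and rejects system host ports when the process is not running as root (the real --ports flag also accepts other Docker publish formats, such as IP:host:container, port ranges, and protocol suffixes).

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// validatePort checks one --ports value of the form "host:container"
// and returns an error if the host port is invalid, or if it is a
// system port (< 1024) while the effective user is not root.
func validatePort(spec string, euid int) error {
	parts := strings.Split(spec, ":")
	if len(parts) != 2 {
		return fmt.Errorf("invalid port mapping %q, expected host:container", spec)
	}
	hostPort, err := strconv.Atoi(parts[0])
	if err != nil || hostPort < 1 || hostPort > 65535 {
		return fmt.Errorf("invalid host port %q", parts[0])
	}
	if hostPort < 1024 && euid != 0 {
		return fmt.Errorf("host port %d is a system port and requires root to bind; try a port above 1023, e.g. 8080:%s", hostPort, parts[1])
	}
	return nil
}

func main() {
	for _, spec := range []string{"80:80", "8080:80"} {
		if err := validatePort(spec, os.Geteuid()); err != nil {
			fmt.Println("warning:", err)
		}
	}
}
```

Failing fast with a message like this at `minikube start` time would have pointed the reporter at the system-port restriction instead of surfacing as "connection refused" later.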

@medyagh medyagh changed the title --ports flag fails on minikube start with docker driver and WSL2 validate ports provided by user to --ports flag on docker driver Feb 24, 2021
@medyagh medyagh added needs-solution-message Issues where offering a solution for an error would be helpful help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Feb 24, 2021
@priyawadhwa priyawadhwa added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 25, 2021
@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Apr 7, 2021
@MadhavJivrajani
Contributor

Hi! I'd like to work on this if possible 😄
/assign

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 7, 2021
@MadhavJivrajani MadhavJivrajani removed their assignment Aug 7, 2021
@sharifelgamal sharifelgamal removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-solution-message Issues where where offering a solution for an error would be helpful labels Aug 11, 2021