
Missing DockerNAT after upgrading to Docker Desktop 2.2.0 on Windows #5538

Closed
LeonSebastianCoimbra opened this issue Jan 22, 2020 · 69 comments

@LeonSebastianCoimbra

LeonSebastianCoimbra commented Jan 22, 2020

After updating my local Docker Desktop from 2.1.0.5 to 2.2.0, I was unable to use the IP 10.0.75.1.

After some investigation, I found that the entire "DockerNAT" definition had disappeared.

I searched Google for possible solutions but didn't find anything useful.

I tried to uninstall and reinstall version 2.2.0, but without success.

Through the page https://docs.docker.com/docker-for-windows/release-notes/ I was able to retrieve a working copy of the last 2.1.0.5; once installed, everything started to work as before. The output of the ipconfig command is:

```
Ethernet adapter vEthernet (DockerNAT):
   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 10.0.75.1
   Subnet Mask . . . . . . . . . . . : 255.255.255.240
   Default Gateway . . . . . . . . . :
```
This is the part that was missing with the new installation.

@straga

straga commented Jan 22, 2020

Yep. They did that. I already updated and lost DockerNAT too.

@mikeparker
Contributor

We deliberately removed DockerNAT in the latest version because it's no longer necessary with our new file-sharing implementation.

There are existing, much more suitable methods for doing everything this network adapter was used for. Can you describe why you want to use this IP, and I can point you to a good solution?

@LeonSebastianCoimbra
Author

Thanks for the fast reply.

At the moment I'm using the 10.0.75.1 IP address as a convenient way to address my machine both from the browser (running on the host) and from the internal services (running inside the Docker network).

I modified the "hosts" file to be able to use a DNS-like name, and since I'm still in the development phase it is more than sufficient.

In the final production environment this problem will not arise (I will re-organize my services appropriately), but in development, where I have limited resources, it is very convenient to do so.

@ChristopherMillon

@mikeparker what are those solutions? The missing DockerNAT is an issue for us as well

@mikeparker
Contributor

mikeparker commented Jan 22, 2020

@LeonSebastianCoimbra
Generally implementing something which allows you to access the host from a container AND use the same mechanism for container-to-container communication is not a best practice, because at scale the system often needs multiple hosts across a cluster, and so the hostname (and IP) will change. Generally for container to container communication we have DNS names provided by docker compose for example.
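For example (a sketch; the service names here are made up, not from this thread), docker-compose gives each service a DNS name equal to its service name, resolvable from the other services on the same compose network:

```yaml
services:
  web:
    image: nginx:alpine
  app:
    image: alpine
    # "web" resolves to the web service's container IP from inside "app"
    command: ping -c 1 web
```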

However, if you're looking for a solution that is 'developer only' (this won't necessarily work in production) and you are trying to standardise a mechanism for talking to containers either from another container or from the host, you need to use a DNS name (not an IP address), as the DNS name will resolve to different IP addresses inside containers and on the host; we no longer have an IP address accessible by both the host and the container.

We technically provide a DNS name you can use for this purpose: kubernetes.docker.internal. This is currently used by, as you can guess, Kubernetes, allowing us to share the kube context on the host and inside containers. There is also host.docker.internal, which may work depending on what exactly you're trying to do.

If this is something that is required maybe we need to look again at the feature and make it more of a standard feature - any more information on your use case would be useful.

@ChristopherMillon let me know what your use case is and whether it's the same as the above.

@straga

straga commented Jan 22, 2020

Yes. I use it in dev on my laptop: I add a route for the Docker bridge and start dns-proxy-server (http://mageddo.github.io/dns-proxy-server/latest/en/1-getting-started/running-it/) in a container. After that, DNS records are updated automatically for newly started containers, and I can access any container by its DNS name.

@bondas83

I had a solution for accessing internal container IPs (https://www.bountysource.com/issues/39154772-how-to-access-containers-by-internal-ips-172-x-x-x), which stopped working after the 2.2 update. What is the solution now?

@straga

straga commented Jan 22, 2020

A native Linux installation provides access to the Docker network, so why shouldn't Windows provide it? We used a route for that and it worked. In the settings we have:

Hyper-V subnet
10.0.75.0/28
default: 10.0.75.0/28

But the script that starts the Hyper-V virtual machine no longer uses it after the update.

@ChristopherMillon

@mikeparker it's basically the same: we docker inspect ... to get the IP address of the container and then communicate with it from the host for dev/testing purposes.

You recommend using DNS? How would this work exactly? Do we have to set explicit hostnames for our containers and prefix them with host.docker.internal or kubernetes.docker.internal?

@mikeparker
Contributor

@ChristopherMillon that doesn't sound the same. Leon is talking about accessing the host machine. You're talking about accessing a container.

If you want to go host -> container, simply expose the port and use localhost.
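As a minimal sketch of that (the service name and ports are arbitrary examples), publishing a port in docker-compose makes the container reachable from the host via localhost:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host:container; then browse http://localhost:8080 from the host
```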

@mikeparker
Contributor

@straga @bondas83 I don't understand what you're saying.

@ChristopherMillon

@mikeparker binding onto localhost is an issue because, in the case of CI/CD (or even a dev machine), those ports won't necessarily be available.
We're using docker-compose, and the ports are pre-defined in the docker-compose.yml.

Before, it worked with a route going through DockerNAT, which ensured we could run tests in parallel without worrying about ports already in use.

@mikeparker
Contributor

mikeparker commented Jan 22, 2020

@ChristopherMillon can you elaborate a bit more:
a) why don't you have control over which ports are in use on a CI/CD machine? Usually CI/CD environments are designed to be reproducible.
b) You say your ports are pre-defined in your docker-compose file but you also say you can't guarantee which ports you're using, can you clarify?
c) When you say tests in parallel are you talking about spinning up the same docker-compose set of services multiple times on the same host for different purposes?

Anything further you could tell me about what you've been doing in the past and why would be helpful, thanks! (including the automation of docker inspect and how that fits into your workflow)

@straga

straga commented Jan 22, 2020

@mikeparker For example, I have 10 services, all on port 8069:

  1. demo1_testdrop.app.dev.local + demo1_testdrop.pg11.db.local
  2. demo10_report.app.dev.local + demo10_report.app.pg11.db.local
  3. ...

If I used published ports, I would need 20 different ports, and would have to remember them all and keep a list of which is which. That doesn't look good.

@ChristopherMillon

ChristopherMillon commented Jan 22, 2020

@mikeparker
a) We couldn't guarantee that a specific port is available on a testing node, as another test set might be running at the same time using the same port (we run sets of tests in parallel).
b) What I meant is that because we're using docker-compose, we aren't able to say 'use any available port on the host', hence binding onto localhost could be unreliable in this case.
c) Multiple docker-compose sets try to spin up the same services, using the same ports, for different test sets.

After writing all this, I realised we could do fine with binding onto the host; sometimes we might have to manually docker rm in order to free the ports up, and give up test-set parallelism.

We have C# test assemblies calling docker-compose: a particular set of tests spins up a particular docker-compose environment in order to run the tests, and we like being able to run those sets of tests in parallel as it makes things faster.
The nice thing about having C# test assemblies do all this is the debugging capability in our services.

Those test assemblies run docker-compose up -f somefiles.yml and get the associated IP addresses for each container; then we use those IP addresses (say I want the IP of the kafka1 container) to arrange the environment, act and assert on what happened.

@mikeparker
Contributor

@ChristopherMillon if I understand you correctly, you're running C# tests on the host, the C# tests themselves trigger docker-compose to spin up the containers necessary for the test, and then the test runner on the host communicates with those containers? And you're running these tests in parallel, and some of them share docker-compose files to spin up infrastructure, so the containers would naturally overlap port-wise?

Where does DockerNAT come into this? You say you get all the associated IP addresses for each container, but this doesn't mention the IP address of the host. Do you need to communicate from the container to the host, or purely from the host to the container? (Or both?)

I'm happy to go back and forth some more, but I think it's going to be quicker if you can provide a fuller reproduction of the problem you're seeing, as it's hard to diagnose like this. Can you describe a set of steps that I can perform to see the exact problem you're having?

@LeonSebastianCoimbra
Author

@mikeparker:

> However, if you're looking for a solution that is 'developer only' (this won't necessarily work in production) and you are trying to standardise a mechanism for talking to containers either from a different container or on the host, you need to use a DNS name (not an IP address) as the DNS name will resolve to different IP addresses inside containers and on the host - we no longer have an IP address accessible by both the host and the container.

Yes, only for development; and yes, I was using a personal DNS name by emulating it in the "hosts" file (in development I do not have access to a DNS server to make the change).

@mikeparker:

> We technically provide a DNS name you can use for this purpose: kubernetes.docker.internal this is currently used by, as you can guess, kubernetes, allowing us to share the kube context on the host and inside containers. There is also host.docker.internal which may work depending on what you're trying to do exactly.

Thanks for the tip; I think this can solve my current problem for development.

@mikeparker:

> If this is something that is required maybe we need to look again at the feature and make it more of a standard feature - any more information on your use case would be useful.

NOTE: this won't solve the case where I need a specific DNS name (different from the one Docker provides by default), since the IP addresses pointed to by "host.docker.internal" (and similar) are dynamic and change at each reboot. The convenient part of 10.0.75.1 was that it was static (always reliable) and gave me the ability to choose the DNS name. If you consider that SSL certificates can be involved, that is not a bad thing!!

Thanks to your clarifications I think I may be OK on my side... I'll let you close this issue as soon as the other participants are satisfied too.

@aldobongio

On 2.0.x and 2.1.x I relied on DockerNAT to discover the host IP and let containers (more specifically, Linux containers) communicate with the Windows host. From a PowerShell script on 2.0.x and 2.1.x I discovered the host IP using the following code:

```powershell
$ip = (Get-NetIPConfiguration | Where-Object { $_.InterfaceAlias -eq 'vEthernet (DockerNAT)' }).IPV4Address.IPAddress
```

On 2.2.x I confirm that DockerNAT is no longer available. The solution, inspired by this discussion, is to rely on docker.for.win.localhost. Basically I spawn a small Alpine container and ask it to resolve docker.for.win.localhost. So I changed the PowerShell line into the following:

```powershell
$ip = (docker run --rm alpine sh -c 'getent hosts docker.for.win.localhost | awk ''{ print $1 }''')
```

Note: it returns a different IP (192.168.xxx.yyy) instead of the usual 10.0.xxx.yyy, but I verified that they are equivalent and container-to-host communication works.

@Martyrer

Same here: as soon as I updated to 2.2.0, DockerNAT was gone. @mikeparker, you say this was done on purpose? Then I'm really missing a piece now: how can I access a running Docker container by its IP address directly from the Windows machine?
Prior to 2.2.0 this was achieved via the DockerNAT network: I added a route to the routing table and a convenient DNS name in the hosts file, and accessed the web services or DB running within my container...
Now, with DockerNAT gone, none of that works. Am I missing something, or is there another way to achieve the same behavior?

@IanIsFluent

IanIsFluent commented Jan 23, 2020

I logged the same issue as a separate bug, #5560 (sorry, I've now closed that).

But as I say there, it would've been super nice to have a clear 'breaking changes' notification in this release, as it was removed on purpose 😄

... sorry to be that guy: https://xkcd.com/1172/

@LeonSebastianCoimbra
Author

Sorry to bother you again, but I just found another reason to miss the fixed IP address 10.0.75.1.

It is again a development matter: I am using Google's OAuth2 authentication system to access GDrive in my application. In development, I defined (in the hosts file) a DNS name (say, "goofy.mikeymouse.com") that pointed to 10.0.75.1, and I used it in the configuration of the Google application as an "allowed redirect URI".

As suggested, I switched to "host.docker.internal" (or gateway.docker.internal and so on) instead of my DNS name, but when I changed the allowed redirect URI in the Google console I got an error: "Invalid redirect: must end with a top-level public domain (e.g. .com or .org)."... ".internal" is not accepted.

Since the IP address behind the .internal pseudo-names is dynamic, I cannot create my own DNS name unless I modify the hosts file by hand every time I restart Docker... which is not acceptable.

@bondas83

How can I access a Docker container from Windows?
I have 10 development projects with mysql, redis, php-fpm, nginx... All have the same ports and SSL. It is hard to remember localhost:443, localhost:444... (but in 2.2 that works).
It is much better to remember https://project1.local and https://project2.local (but in 2.2 I don't know a solution; in 2.1 there was a workaround with "route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2").

Maybe I can access project1.docker.internal? :D

@mikeparker
Contributor

Thanks everyone for all the details, keep it coming, it all helps us prioritise the features here as we understand more use cases. There are 2 separate issues here so I'll address them separately: Container-to-Host and Host-to-Container.

Connecting Container-to-Host

There was an unofficial workaround to do this previously, using the DockerNAT IP address, but it was never a supported feature. Our docs do not mention it (https://docs.docker.com/docker-for-windows/networking/); there you can see a reference to the docker0 interface, which exists on Linux but not on Mac or Windows, and which is a similar thing.

Right now you can workaround this using host.docker.internal which maps to your current IP address, which changes when you change network. You can also use kubernetes.docker.internal which always maps to 127.0.0.1 on the host and internally we map it differently to ensure the traffic gets through. HOWEVER: These will not work on a Linux host! Don't deploy this to a linux host and expect it to work.

We are currently trying to standardise the use of host.docker.internal across platforms so right now this only works on Windows and Mac, getting this into the Linux system is harder. This may become fully supported at some later date, but be aware this solution does not scale. If you use an orchestrator (i.e. swarm/kubernetes) across multiple hosts (your docker containers are split across 2 machines for example), you can't guarantee which host you're actually talking to as your container could be running on either host, so really your best bet is to put the workload from the host inside a container as well - this is using docker 'as its designed'.
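As a sketch of the container-to-host direction (the port 5000 and service layout here are hypothetical, not from this thread), a container would use the special DNS name instead of a hard-coded IP:

```yaml
services:
  app:
    image: alpine
    # reach a service listening on port 5000 on the Windows/Mac host
    command: wget -qO- http://host.docker.internal:5000/health
```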

@aldobongio I suspect docker.for.win.localhost no longer works, except for machines that already have this entry injected in the hosts file. I would change to host.docker.internal or kubernetes.docker.internal to be future proof.

Connecting Host-to-Container

Connecting host->container via IP address (and if you define a DNS name manually) is not a supported feature. Our documentation explicitly calls out that you cannot do this (although technically, you could until now, with an undocumented hack and manually routing work): https://docs.docker.com/docker-for-windows/networking/

> Per-container IP addressing is not possible

Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.

Similarly, we don't support directly addressing a specific container with a DNS name. This would mean intercepting all network traffic and redirecting certain requests, but it is something we could support in future if it's popular.

If you simply want to open a browser connecting to these containers, and you can't remember which port is which container, try using the new Dashboard feature released in 2.2.0.0 and open the browser through that. Another idea is to use browser bookmarks.

Ideally, if you want to connect automatically, use ports or connect from another container. Whatever your host is doing, can you put it in a container? The second option is to use ports: you can give the container a port range rather than a specific port in order to avoid clashes. To know ahead of time which port it will use, you can automate the setting of ports with scripting. There are different ways to do this in docker-compose; one method is to use variable substitution in the ports section:

https://docs.docker.com/compose/compose-file/#variable-substitution

ports:
  - ${MY_PORT}
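A fuller sketch of that approach (the variable name and image are arbitrary examples): set the host port from the environment, with a default, so parallel runs can each pick a free port:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "${MY_PORT:-8080}:80"   # host port comes from $MY_PORT; container port stays 80
```

Then e.g. `MY_PORT=9001 docker-compose -p run1 up -d` and `MY_PORT=9002 docker-compose -p run2 up -d` can run side by side.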

@bondas83

As they say: give with one hand and take away with the other. You gave us volume speed but took away comfort :)
DNS is the solution, because you can set it in the router for all 100 developers. And ports are not compatible with some huge projects, because all links would have to include the port.

Maybe there is a workaround to manually create the good old working DockerNAT network? Or maybe:
route /P add 172.0.0.0 MASK 255.0.0.0 {gateway.docker.internal IP} or some other gateway IP?

@Szeraax

Szeraax commented Feb 16, 2020

@stephen-turner I'll throw in here. In testing my Django app, I use a separate server to serve the static files (normally a Docker container on localhost). When trying to show the app, running on my computer, to others, they now can't access my Docker container with the static files. I had previously enabled a checkbox for this that is now gone (see screenshot).

And it's really annoying to have such a big change happen in a minor update with no warning. I'm going back a version while people figure out a better solution than hacking about in the PowerShell guts.

Oh, and the documentation that talks about this networking issue links to a page that doesn't exist. Maybe no one has noticed that? It should probably link to here.

@bogdanb

bogdanb commented Feb 18, 2020

@mikeparker, @stephen-turner: Here’s our use case, it’s a bit different from the ones described above. Hopefully we’ll remain able to do something like this in the future. For now I applied the hack that restores DockerNAT, but we’re worried about the eventual switch to WSL2.

We use Docker only for testing, on the dev machines, never for deploying services. We build Jira add-ons, and we need to do all sorts of testing, both manual and automatic, with many versions of Jira and Confluence. (The two can be connected, and it’s important that we also test this.)

The current setup is that we have added many entries in the hosts file all pointing to the DockerNAT address (e.g. jira700, jira710, ..., jira850). We have scripts that can start a pair of containers (DB + Jira) running a specified version of Jira on a certain port, with a custom database snapshot. For example, I can start Jira 8.0.0 and 8.2.0, and we can address one as http://jira800:8800/ and the other as http://jira820:8820/. (All containers share a single bridge network.)

Those addresses also work from inside the containers, which is important because Jira needs to know its own address and must be able to connect to itself using it. That is a technical requirement that we cannot alter, since our clients rely on it, and thus our tests must happen in exactly that situation. The containers can also address each other with the same addresses, which is important because Jira and Confluence can be linked and we need to test that as well.

Note that each container having a distinct DNS name is important, because cookie authentication does not look at the port part of the URL, only the domain name. If we were using something like host.docker.internal we would not be able to be logged in to two different Jira containers at the same time in the same browser. (There are test cases where that is important, because Jira servers can delegate credentials between them.) Also, the exact DNS name and port used to access a container from the host (via the browser) need to also work from inside containers (both container-to-self and container-to-other-container). We can't have a private connection between containers, because Jira will generate URLs based on its known "base URL" that need to work from the browser (for manual testing & debugging), from the container itself (for Jira itself), and from other containers (for connections between Jira and Confluence servers).

Using DockerNAT all the above worked automatically. We didn’t have to do any routing hacks, the only “custom” thing was adding the names to the hosts file. We didn’t even realize this was not an intentional feature until it disappeared. I’ve been searching since I updated Docker and I can’t figure out any way of doing our tests without this.
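For illustration, the hosts-file side of the setup described above amounts to entries like the following (names from the comment, all pointing at the old DockerNAT address):

```
10.0.75.1  jira700
10.0.75.1  jira710
10.0.75.1  jira850
```

Since every name resolved to the same IP that was routable both from the host and (via DockerNAT) from inside containers, a URL such as http://jira700:8700/ worked from the browser, from the container itself, and from other containers.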

@LeonSebastianCoimbra
Author

> Using DockerNAT all the above worked automatically. We didn’t have to do any routing hacks, the only “custom” thing was adding the names to the hosts file. We didn’t even realize this was not an intentional feature until it disappeared. I’ve been searching since I updated Docker and I can’t figure out any way of doing our tests without this.

@bogdanb: if your machine has a fixed IP address, you can try to use that in the hosts file instead of 10.0.75.1. That's what I'm doing now, and at least for my use case it seems solid. If it is not fixed (as it wasn't for me before), then there is a problem...

@DynaSpan

DynaSpan commented Feb 18, 2020

Same here: DockerNAT in combination with routing + DNS entries in the hosts file is used to host several micro- and web services + database instances. With DockerNAT, on each laptop or dev environment, we could use the same configs with the same local DNS entries and it just worked.

I get that WSL2 will change a lot of how Docker works on Windows, but IMO this is a breaking change without any announcement. I suspect WSL2 will open up access to the containers by their IP addresses again?

Some of the services we use locally for dev:

  • Authentication backend API (auth-api.local)
  • Authentication frontend (auth.local)
  • MSSQL db's (db.local / ip addresses)
  • Several other backend API & frontend containers; depending on which project we're working on. (all xxx(-api).local).

Switching this to localhost & port mapping would increase the difficulty of hosting a local test environment by a lot.

@KarlNeosem

Docker on Windows 10.
I was using dperson/samba to share Docker volumes between the host and containers. I mapped the samba shares using a 10.0.75.xxx IP address. When I upgraded and DockerNAT went away, I lost the ability to do this.

I cannot forward the ports when running docker run -d -it -p 139:139 -p 445:445 ...; it seems Windows is already using port 445. If I try something like docker run -d -it -p 139:139 -p 4445:445 ..., the container runs but I can't get the drive mapped in Windows. It appears that Windows requires samba to be at port 445.
This is where I got the idea of using samba to share files:
https://www.guidodiepen.nl/2017/03/alternative-way-to-share-persistent-data-between-windows-host-and-containers/

What did I do wrong? What can I do to get this working again?

@KarlNeosem

KarlNeosem commented Mar 13, 2020

Has anyone got the dperson/samba container working with Docker for Windows since DockerNAT was removed in Docker 2.2.0? If yes, what IP address does the samba server show up as? Please give details of what you did to get it to work.

@shohamk

shohamk commented Mar 24, 2020

So, I also use the 10.0.75.2 IP to access the container from my Windows environment. I do not use localhost/10.0.0.75 since Docker has lots of unresolved issues (as far as I know) with the VPNKit component, which crashes a lot and makes the container ports unavailable from localhost.

Any solution for that?

@Neunerlei

We had the same problem, as we were using DockerNAT to keep multiple containers running with the same exposed ports (80/443) for our web development.

I solved the problem after discovering that you can bind a Docker port to a loopback address, like "127.0.0.1:80:80". This means you have the whole range from 127.0.0.1 up to 127.255.255.254 to attach to.

So in project A we now use 127.0.0.5:80:80, in project B we use 127.0.0.6:80:80, and so on.
In your hosts file you can then still map 127.0.0.5 to example.local and 127.0.0.6 to another-example.local, with the same effect you had before when using DockerNAT.

The only downside is that you have to map each project to a static IP instead of the dynamic IP range we had before, but we can live with that.

I hope this helps someone :)

@KarlNeosem

@Neunerlei:

> I solved the problem after I discovered that you could bind a docker port to a loopback adapter like: "127.0.0.1:80:80". This means you have the whole range from 127.0.0.1 up to 127.255.255.254 to attach to.
> In your hosts file, you can now still map 127.0.0.5 to example.local and 127.0.0.6 to another-example.local with the same effect you had before, when using dockerNAT.

Can you provide more detail on how to do this?

@Neunerlei

Neunerlei commented Apr 8, 2020

@KarlNeosem Of course. As you can see here, it is possible to bind a port to a loopback IP.

All you have to do is find a free loopback IP (I use 127.55.0.1 as the minimum IP) and map it in your docker-compose.yml file.

```yaml
version: "3.0"
services:
  test:
    image: jwilder/whoami
    ports:
      - 127.55.0.1:80:8000
```

Now add this to your hosts file:

```
127.55.0.1 example.org
```

And that's it. If you have multiple projects use a new IP for each of them 127.55.0.2, 127.55.0.3...

This only works when you call the container from the host machine, but for a development environment it works perfectly. Sadly it does not work in docker-toolbox, as far as I have tested.
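The reason this works is that the whole 127.0.0.0/8 block is loopback, so two listeners can share a port number as long as they bind different loopback IPs; that is exactly what the per-project compose entries above exploit. A minimal Python sketch (the addresses are arbitrary examples; this works out of the box on Windows and Linux, while macOS needs the extra loopback aliases configured first):

```python
import socket

def listen_on(ip: str, port: int) -> socket.socket:
    """Open a TCP listener bound to one specific loopback address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((ip, port))
    s.listen(1)
    return s

a = listen_on("127.55.0.1", 0)      # port 0: let the OS pick a free port
port = a.getsockname()[1]
b = listen_on("127.55.0.2", port)   # same port number, different loopback IP: no conflict
print(a.getsockname(), b.getsockname())
a.close()
b.close()
```

Binding both listeners to plain 127.0.0.1 with the same port would fail with "address already in use"; spreading them across loopback IPs is what frees each project to keep its familiar port 80/443.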

@RockyS007

Hi Team,

I am trying to host a build agent in a container on my desktop, passing in values as arguments to download the agent from a particular URL.
I have this requirement for both Linux and Windows.
Linux: I successfully built a container where the agent is downloaded via a startup script, and it works as expected.
Windows: here the container runs, but it exits with the error "The remote name could not be resolved: URL".
Can anyone help me here? The same settings and URL are used to download the agent software, but it succeeds on Linux and not on Windows. I can see that network settings were missing on Windows. How can I resolve this?

@Dashue

Dashue commented Apr 21, 2020

Another one here, trying to reach a local Microsoft SQL Server instance from the containers. Developing a microservices architecture locally.

@KarlNeosem

> All you have to do now is to find a free loopback IP (I use 127.055.0.1 as minimum IP) and map that in your docker-compose.yml file.

I see what you are doing here. However, when I try to start my container at, let's say, 127.55.0.1:445:445, it complains that port 445 is in use. I have to use port 445 or Windows cannot map the samba shares. I believe I need to start the container with its own IP accessible from the host.

With DockerNAT the container had its own IP address reachable from the host.
Does anyone know how to recreate the DockerNAT?

Or:
Is there a way to share Docker volumes with the host system?

@luke7263

luke7263 commented May 7, 2020

> After updating my local Docker Desktop from 2.1.0.5 to 2.2.0 i was unable to use the IP 10.0.75.1. [...] This is the part that was missing with the new installation.

Hi,

If I have correctly understood your problem, which looks the same as mine today (I was using 10.0.75.1 in my local Windows hosts file to resolve DNS names): I replaced 10.0.75.1 in my hosts file with 192.168.86.225, which is the (fixed) IP of the Hyper-V Virtual Ethernet Adapter created by the Docker 2.2 installation.
Then everything works fine; I can browse http://mylocalname.com on my computer and work completely locally, even without a network. I had been working that way for many years with DockerNAT, and it seems OK now.

@davidtvs

My use case is similar to @snekcz's. I use Docker as a container for my development tools: I code through VS Code Remote Containers and use X11 to forward windows from the container to Windows. As @snekcz mentioned, X11 is now broken. Additionally, I used CNTLM to authenticate behind a corporate proxy, and that has stopped working too, since there is no longer a static IP it can listen on.

@bondas83

bondas83 commented May 15, 2020

I updated to Docker Desktop 2.3, and there is no MobyLinux.ps1 any more, where we could apply the workaround with domains.

Maybe there are new features or a new workaround?

@stephen-turner
Contributor

stephen-turner commented May 15, 2020

Thanks for your comments, everyone. I'm going to close this issue now. DockerNAT itself will not be coming back for all the reasons I explained in #5538 (comment). However, we have heard the feature request of providing an IP address for each container, so I have added an item to our public roadmap at docker/roadmap#93 to request that functionality. (BTW if you haven't seen our roadmap before, please do feel free to browse it and suggest your top feature requests there). Thank you.

@fairmonk

With the 2.1.0.5 version I had services like mysql and mosquitto running on a dedicated bridged network.
Another bunch of services ran on the "host" network, and they could connect to mysql and mosquitto using 127.0.0.1:8883, localhost:3306 or 127.0.0.1:3306.

Now this is all gone and no longer works. The application was ported to Docker Desktop (it has all the connection strings hardcoded). It's quite an effort to update all the 127.0.0.1 and localhost strings to container names in order to make it work in the latest Docker Desktop version. Is there a way to mitigate this without changing the application and all its hardcoded strings to match container names?

@n10000k

n10000k commented May 27, 2020

@stephen-turner Remember the time you tried making everyone install Docker Desktop from the Microsoft Store? This reminds me of that day. This is a breaking change, yet no one knew about it until everything broke. Cheers for that.

@luke7263

To me this is a lack of Docker Windows configuration.
I found a good solution a few days ago: at each reboot I just force the interface IP address and use the first one I had set in my local host names.
For this there are 2 choices:

  • change the IP address with the Windows 10 UI
  • the one I use: change the IP address with a script run at boot as administrator. Just create a file initip.ps1 containing this line: `netsh interface ip set address name="vEthernet (Default Switch)" static 172.17.191.1 255.255.255.0` (or any local IP of your choice; I don't need to change the gateway) and launch it whenever you need to, even while Docker is running; no need to restart anything.
    Now at each reboot I have that IP set, so my local host-name resolution is perfect: I use my Windows 10 with real hostnames, but everything is local.

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Jul 13, 2020