
Host unreachable from container with bridge network on Podman v5 #22653

Closed
n-hass opened this issue May 9, 2024 · 32 comments · Fixed by #22740
Labels
kind/bug Categorizes issue or PR as related to a bug. network Networking related issue or feature pasta pasta(1) bugs or features

Comments

@n-hass

n-hass commented May 9, 2024

Issue Description

I am running a web service on my host, which I would expect could be accessed from a bridge-networked container.

This works on Podman v4.7.2: podman run --rm --network=bridge docker.io/mwendler/wget host.containers.internal:8091

The same does not work on v5.0.2, with Connecting to 10.1.26.100:8091... failed: Connection refused.

Here, 10.1.26.100 is the host's eth0 address (host.containers.internal), but the result is the same if I use the bridge's gateway IP.

Steps to reproduce the issue

  1. Host a web server on the container host
  2. Start a container with podman run with --network=bridge
  3. Attempt to connect to host using either host.containers.internal or the bridge interface's gateway IP
  4. Observe Connection refused error
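The same check can be scripted from inside the container without wget; a minimal Python sketch (hostname and port taken from the report above; the helper name is hypothetical):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, and DNS failures all land here
        return False

if __name__ == "__main__":
    # In the reporter's setup this came back False on Podman v5.0.2 with the
    # default bridge network, and True on v4.7.2.
    print(can_connect("host.containers.internal", 8091))
```

This distinguishes a refused connection from a DNS failure only by exception type, so for real debugging you may want to log the exception rather than collapse it to False.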

Describe the results you received

Connections to the host from a container in bridge network mode are refused under Podman v5.0.2; on v4 they succeeded.

Describe the results you expected

Container in bridge network mode can connect to the host using host.containers.internal

podman info output

host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /nix/store/ipbgl019v93p0kz2az8vcai27bj2qvdj-conmon-2.1.11/bin/conmon
    version: 'conmon version 2.1.11, commit: '
  cpuUtilization:
    idlePercent: 40.63
    systemPercent: 23.64
    userPercent: 35.73
  cpus: 20
  databaseBackend: boltdb
  distribution:
    codename: uakari
    distribution: nixos
    version: "24.05"
  eventLogger: journald
  freeLocks: 2044
  hostname: praetor
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.8.9
  linkmode: dynamic
  logDriver: journald
  memFree: 9704091648
  memTotal: 67015405568
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
      path: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: Unknown
    path: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: Unknown
    path: /nix/store/q4xhymb7hrc0448w3vn76va86nv59b0b-crun-1.15/bin/crun
    version: |-
      crun version 1.15
      commit: 1.15
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/pasta
    package: Unknown
    version: |
      pasta 2024_04_26.d03c4e2
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.3.0
      commit: 8a4d4391842f00b9c940bb8f067964427eb0c964
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 31h 28m 10.00s (Approximately 1.29 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 4
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 375809638400
  graphRootUsed: 142480777216
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 10
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.0.2
  Built: 315532800
  BuiltTime: Tue Jan  1 10:30:00 1980
  GitCommit: ""
  GoVersion: go1.22.2
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Environment is a NixOS host.

Additional information

No response

@n-hass n-hass added the kind/bug Categorizes issue or PR as related to a bug. label May 9, 2024
@n-hass n-hass changed the title Host unreachable with v5 bridge network Host unreachable from container with bridge network on Podman v5 May 9, 2024
@mheon
Member

mheon commented May 9, 2024

Can you provide a podman info from the working 4.7 install? Want to see if the network backend has changed between the two.

@n-hass
Author

n-hass commented May 10, 2024

@mheon sure. See below

host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /nix/store/53lq2zdbaqqny8765mgmvw70kgslxrc9-conmon-2.1.8/bin/conmon
    version: 'conmon version 2.1.8, commit: '
  cpuUtilization:
    idlePercent: 34.78
    systemPercent: 25.36
    userPercent: 39.86
  cpus: 20
  databaseBackend: boltdb
  distribution:
    codename: uakari
    distribution: nixos
    version: "24.05"
  eventLogger: journald
  freeLocks: 2031
  hostname: praetor
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 992
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
  kernel: 6.8.9
  linkmode: dynamic
  logDriver: journald
  memFree: 10010148864
  memTotal: 67015405568
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
      path: /nix/store/iyzsvszqksqlnn46bxfsn6xg56bnzk6p-podman-4.7.2/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: Unknown
    path: /nix/store/iyzsvszqksqlnn46bxfsn6xg56bnzk6p-podman-4.7.2/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: Unknown
    path: /nix/store/djjn2p02dnh1n9k9kf66ywz8q8b95mwb-crun-1.12/bin/crun
    version: |-
      crun version 1.12
      commit: 1.12
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /nix/store/iyzsvszqksqlnn46bxfsn6xg56bnzk6p-podman-4.7.2/libexec/podman/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 42h 40m 44.00s (Approximately 1.75 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/servhost/.config/containers/storage.conf
  containerStore:
    number: 17
    paused: 0
    running: 17
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/servhost/.local/share/containers/storage
  graphRootAllocated: 375809638400
  graphRootUsed: 142457978880
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 30
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/servhost/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 315532800
  BuiltTime: Tue Jan  1 10:30:00 1980
  GitCommit: ""
  GoVersion: go1.21.9
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.2

@coolbry95

I am experiencing the same issue.

I am not able to connect to any port that is in use on the host. I am able to ping the host. I am also able to connect to a port that is exposed by another container.

This is inside the container.

root@0e324e1f7e88:/# nc -v 192.168.1.100 443 # this is nginx running on the host
nc: connect to 192.168.1.100 port 443 (tcp) failed: Connection refused

root@0e324e1f7e88:/# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100): 56 data bytes
64 bytes from 192.168.1.100: seq=0 ttl=42 time=0.107 ms
64 bytes from 192.168.1.100: seq=1 ttl=42 time=0.137 ms
^C
--- 192.168.1.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.107/0.122/0.137 ms

root@0e324e1f7e88:/# nc 192.168.1.100 2343 # this is another container
asdf
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close

400 Bad Request^C

@mheon
Member

mheon commented May 10, 2024

Can you try the 5.0 install with a container created with the --net=slirp4netns option?

@n-hass
Author

n-hass commented May 10, 2024

@mheon Yep.
podman run --network=slirp4netns docker.io/mwendler/wget 10.1.26.100:8091 does work with the 5.0 install, no connection refused

@mheon
Member

mheon commented May 10, 2024

@Luap99 Are we aware of this one on the Pasta side, or is this new?

@coolbry95

I was also using pasta before on 4.x. I upgraded from Fedora 39 to 40 and Podman 4.x to 5.x. pasta is set in ~/.config/containers/containers.conf.

~/.config/containers/containers.conf

[network]
default_rootless_network_cmd = "pasta"
pasta_options = ["--map-gw"]
[coolbry95@diamond ~]$ podman run -it --rm --net=slirp4netns fedora bash
[root@0b76d0341ac0 /]# nc 192.168.1.100 443
asdf
HTTP/1.1 400 Bad Request
Server: nginx
Date: Fri, 10 May 2024 02:48:08 GMT
Content-Type: text/html
Content-Length: 150
Connection: close
X-Frame-Options: SAMEORIGIN

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>

Without the pasta_options set it also fails to connect.

@lazyzyf

lazyzyf commented May 11, 2024

I am experiencing the same issue. How do I fix it?

@cemarriott

Experiencing the same issue here. My container host updated from CoreOS 39 to 40 yesterday. I run a certificate authority container with host network mode, and a Traefik container that is connected to a bridge network and an internal network that has all of the backend services connected for proxying through Traefik.

After the update, Traefik gets connection refused when trying to connect to the CA container on the host network.

@ctml91

ctml91 commented May 13, 2024

I am not sure if this is the exact same issue. Whether I use bridge or host networking, some containers are not accessible via the host IP but are via the container IP. For example, running nginx on 443 results in connection refused via the host IP but succeeds via the container IP, regardless of whether I'm using host networking or bridge with port mapping.

Now the interesting part: if I reboot the host, the problem switches to a different set of containers that become inaccessible via the host IP, while nginx starts working. Each reboot seems to transfer the problem to a different container; I haven't figured out any pattern.

tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      6666/conmon
tcp        0      0 0.0.0.0:8006            0.0.0.0:*               LISTEN      6265/conmon

⬢[root@toolbox ~]# curl 192.168.1.150:8006
<success>
⬢[root@toolbox ~]# curl 192.168.1.150:443
curl: (7) Failed to connect to 192.168.1.150 port 443 after 0 ms: Couldn't connect to server

<reboot>
⬢[root@toolbox ~]# curl 192.168.1.150:443
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>
⬢[root@toolbox ~]# curl 192.168.1.150:8006
curl: (7) Failed to connect to 192.168.1.150 port 8006 after 0 ms: Couldn't connect to server

Could be a different issue, though, as I may have had it on Podman 4.x before upgrading. No system firewall is enabled.

[root@fedora ~]# rpm-ostree status
State: idle
Deployments:
* fedora-iot:fedora/stable/x86_64/iot
                  Version: 40.20240509.0 (2024-05-09T10:34:54Z)
               BaseCommit: 64266e7b3362d4fe8c1e02303c7dbc7cab17f0778a92c4cbe745439243c4349e
             GPGSignature: Valid signature by 115DF9AEF857853EE8445D0A0727707EA15B79CC
          LayeredPackages: toolbox

  fedora-iot:fedora/stable/x86_64/iot
                  Version: 39.20231214.0 (2023-12-15T01:47:31Z)
               BaseCommit: 922061c2981d4cd8f6301542635aa5dba5b85474782c8edbc354ba5cc344fc27
             GPGSignature: Valid signature by E8F23996F23218640CB44CBE75CF5AC418B8E74C
          LayeredPackages: toolbox
[root@fedora ~]# podman -v
podman version 5.0.2

Edit: I should add that the containers are being run from the root user via systemd.

@Luap99
Member

Luap99 commented May 13, 2024

@Luap99 Are we aware of this one on the Pasta side, or is this new?

Yes, I'm aware of this; there are really two issues here.
First, using the default interface IP to connect to the host no longer works with pasta by default, because pasta uses the same IP inside the namespace, so the container cannot connect to it. See the pasta section here: https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/

Second, bridge as rootless adds the wrong host.containers.internal IP. I fixed this for --network pasta so that it never adds the same IP as pasta there. If there is no second host IP that can be used instead, the host entry is not added at all, which should lead to a more meaningful error (name does not exist vs. IP is not what you expect); this is tracked in #19213.

@sbrivio-rh
Collaborator

sbrivio-rh commented May 13, 2024

First using the default interface ip to connect to the host no longer works with pasta by default because pasta uses the same ip inside the namespace so the container cannot connect to that

Maybe the work in progress to make forwarding more flexible will make this less of a problem, as we'll probably be able to say things like "splice container socket to host socket connected to port 5000 and source 2001:fd8::1". But anyway, it's not necessarily going to be magic and intuitive, so let's consider the current situation, as it won't necessarily be very different in this regard.

There are pretty much five ways to connect to a service running on the host, with pasta:

  • pass --map-gw (already mentioned on this ticket: #22653 (comment)) and use the address of the default gateway
    • cons (yeah I'm a positive person):
      • counterintuitive (see https://www.reddit.com/r/podman/comments/1c46q54/comment/kztjos7/), but DNS could hide this
      • you can't actually connect to the default gateway, should you have any service running there (uncommon?)
      • maps all the ports while you might want just some, so it's perhaps not the best idea, security-wise
      • needlessly translates between Layer-2 and Layer-4 even for local connections, lower throughput than direct socket splicing
    • pros:
      • it's a single configuration option
      • traffic from the container doesn't look like local traffic
  • explicitly map ports using -T / --tcp-ns, and connect to localhost from container
    • cons:
      • you need to know which ports will be used beforehand
      • traffic from the container looks local (well, it is, but it shouldn't look like it, because otherwise it's not... contained). Think of reverse-CVE-2021-20199
    • pros:
      • very low overhead as data is directly spliced between sockets
      • only exposes required ports
  • use IPv6 and link-local addresses:
    • cons:
      • many users might be unfamiliar with IPv6 (note that it doesn't actually require public IPv6 connectivity)
      • needlessly translates between Layer-2 and Layer-4 even for local connections
    • pros:
      • crystal clear semantics: the address is local to the link
      • no configuration needed
  • assign a different address to the containers compared to default address copied from the host (implies NAT)
    • cons:
      • ...well, NAT
      • needlessly translates between Layer-2 and Layer-4 even for local connections
      • needs special, somewhat arbitrary configuration
    • pros:
      • it's like it used to be with slirp4netns (just more flexible)
      • the destination address is actually assigned to an interface on the host, so it should all make sense
  • use the address of another interface, or another address on the same host interface
    • cons:
    • pros:
      • maybe, security-wise, requiring root to set up an additional address that can be used to connect to the host is a good idea
      • it's still intuitive enough that quite a few folks seem to have figured it out already
      • should play nicely with DNS

I think it would help if we eventually pick one of these as the recommended solution.
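For instance, the second option above (explicit -T port mapping) can be made the default through containers.conf; a sketch, assuming rootless pasta, with the port number taken from the original report purely as an illustration:

```toml
[network]
default_rootless_network_cmd = "pasta"
# Splice container connections to 127.0.0.1:8091 to the host's port 8091.
pasta_options = ["-T", "8091"]
```

The container then connects to localhost:8091 instead of the host's interface address, and only the listed port is exposed.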

@coolbry95

coolbry95 commented May 14, 2024

I am going with "use the address of another interface, or another address on the same host interface" because I happen to have a second physical address. I am also using DNS for my containers, so I can keep using DNS names. It would be nice to explore other ways of achieving the same thing with this method. I managed to just add a static IPv6 address to the same interface podman/pasta is using, incrementing the current IP by 1. I do not know if this is an OK thing to do; this was also just for testing purposes.

@Luap99 Luap99 added network Networking related issue or feature pasta pasta(1) bugs or features labels May 14, 2024
@Luap99
Member

Luap99 commented May 14, 2024

use the address of another interface, or another address on the same host interface

That is what the code is supposed to do today when creating an IP for host.containers.internal; however, for the bridge network mode it currently does not work. I'll try to fix that part.

However, I think the actual issue is still #19213: we need a way to map an arbitrary IP in the netns to the host IP and then expose it as host.containers.internal to the container.

@sbrivio-rh
Collaborator

However I think the actual issue is still #19213, we need a way to map an arbitrary ip in the netns to the host ip and then expose it as host.containers.internal to the container.

...that is, just like --map-gw (or lack of --no-map-gw), but with an arbitrary address, right? That mapping doesn't change the source address to a loopback address (unlike -T). The source address would be the address of a local interface, but not loopback.

@Luap99
Member

Luap99 commented May 14, 2024

However I think the actual issue is still #19213, we need a way to map an arbitrary ip in the netns to the host ip and then expose it as host.containers.internal to the container.

...that is, just like --map-gw (or lack of --no-map-gw), but with an arbitrary address, right? That mapping doesn't change the source address to a loopback address (unlike -T). The source address would be the address of a local interface, but not loopback.

Yes, it is important that the address is not localhost; it must be impossible for such a mapping to reach the host's localhost address. It must only work for services listening on the external interface.

@dimazest

dimazest commented May 17, 2024

I faced an issue with pasta and WireGuard on a Fedora CoreOS host when it updated to 40.

I have a pod that runs a container with a wg0 interface. The image I use is docker.io/procustodibus/wireguard.

This wg config works with both pasta and slirp4netns

# Container interface.
[Interface]
...
# ListenPort is not set

[Peer]
...
Endpoint = wg.example.com:30104
PersistentKeepalive = 25

In this setup, the container connects to a peer and keeps the connection open. Both peers can ping each other.

This setup doesn't work with pasta

# Container interface.
[Interface]
...
ListenPort = 34344

[Peer]
...
Endpoint = wg.example.com:30104

Port 34344 is published when I start a container.

With pasta there is no wg tunnel and peers can't ping each other. Switching to slirp4netns without changing anything else solves the issue.

@Luap99
Member

Luap99 commented May 17, 2024

@dimazest I don't see how this is related to the issue here. If you have a specific problem with your WireGuard config, please file a new issue with steps to reproduce.

Luap99 added a commit to Luap99/libpod that referenced this issue May 17, 2024
We have to exclude the IPs in the rootless netns, as they are not the host. Note that the fix only works if there is more than one IP available on the host; if there is only one, we do not set the entry at all, which I consider better, as failing to resolve this name is a much better error for users than connecting to a wrong IP. It also matches what --network pasta already does.

Fixes containers#22653

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Luap99 added a commit to Luap99/libpod that referenced this issue May 17, 2024
We have to exclude the IPs in the rootless netns, as they are not the host. Note that the fix only works if there is more than one IP available on the host; if there is only one, we do not set the entry at all, which I consider better, as failing to resolve this name is a much better error for users than connecting to a wrong IP. It also matches what --network pasta already does.

The test is a bit more complicated than I would like; however, it must deal with both cases (one IP, and more than one), so there is no way around it I think.

Fixes containers#22653

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue May 20, 2024
@devurandom

devurandom commented May 23, 2024

@lazyzyf Maybe this helps: I am experiencing this issue with a service running under podman-compose, but found a workaround with some help from the comments above.

I start a HTTP server on the host with python -m http.server -b 0.0.0.0 9000.

I execute curl on the host:

❯ curl -vv http://192.168.[REDACTED]:9000/test
*   Trying 192.168.[REDACTED]:9000...
* Connected to 192.168.[REDACTED] (192.168.[REDACTED]) port 9000
> GET /test HTTP/1.1
> Host: 192.168.[REDACTED]:9000
> User-Agent: curl/8.6.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 404 File not found
< Server: SimpleHTTP/0.6 Python/3.12.3
< Date: Thu, 23 May 2024 13:28:55 GMT
< Connection: close
< Content-Type: text/html;charset=utf-8
< Content-Length: 335
<
<!DOCTYPE HTML>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>Error response</title>
    </head>
    <body>
        <h1>Error response</h1>
        <p>Error code: 404</p>
        <p>Message: File not found.</p>
        <p>Error code explanation: 404 - Nothing matches the given URI.</p>
    </body>
</html>
* Closing connection

I execute curl in a container in the compose environment:

curl -vv http://host.containers.internal:9000/test
*   Trying 192.168.[REDACTED]:9000...
* connect to 192.168.[REDACTED] port 9000 failed: Connection refused
* Failed to connect to host.containers.internal port 9000 after 0 ms: Couldn't connect to server
* Closing connection 0
curl: (7) Failed to connect to host.containers.internal port 9000 after 0 ms: Couldn't connect to server

192.168.[REDACTED] is identical with the (only) inet address of the host's primary network interface (cf. output of ip address).

I set the following in ~/.config/containers/containers.conf:

[network]
default_rootless_network_cmd = "slirp4netns"

See https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/ section "Pasta default for rootless networking".

After podman-compose down and podman-compose up I can connect to the host from the container:

curl -vv http://host.containers.internal:9000/test
*   Trying 192.168.[REDACTED]:9000...
* Connected to host.containers.internal (192.168.[REDACTED]) port 9000 (#0)
> GET /test HTTP/1.1
> Host: host.containers.internal:9000
> User-Agent: curl/7.88.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 404 File not found
< Server: SimpleHTTP/0.6 Python/3.12.3
< Date: Thu, 23 May 2024 13:51:44 GMT
< Connection: close
< Content-Type: text/html;charset=utf-8
< Content-Length: 335
<
<!DOCTYPE HTML>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>Error response</title>
    </head>
    <body>
        <h1>Error response</h1>
        <p>Error code: 404</p>
        <p>Message: File not found.</p>
        <p>Error code explanation: 404 - Nothing matches the given URI.</p>
    </body>
</html>
* Closing connection 0

Instead of reverting to slirp4netns, setting the following in ~/.config/containers/containers.conf also works, as mentioned in the article linked above:

[network]
pasta_options = ["--address", "10.0.2.0", "--netmask", "24", "--gateway", "10.0.2.2", "--dns-forward", "10.0.2.3"]

This appears to work independently of the IP address and network mask used by the container.

My system:

❯ grep PLATFORM /etc/os-release
PLATFORM_ID="platform:f40"

❯ podman-compose --version
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 5.0.3
podman-compose version 1.0.6
podman --version
podman version 5.0.3
exit code: 0

I came here from #22724.

@MathieuMoalic

Is this actually fixed? @sbrivio-rh's answer suggests there are plenty of solutions, but I haven't been able to make any of them work.
podman 4.9: podman run --rm -it --network my-bridge-network nginx curl [local-ip]:8080 -> works
podman 5.1: podman run --rm -it --network my-bridge-network nginx curl [local-ip]:8080 -> doesn't work.

It seems like all the solutions in this issue involve not using a bridge network, even though the title specifically calls for one.
Adding

[network]
default_rootless_network_cmd = "slirp4netns"

to ~/.config/containers/containers.conf works for podman run --rm nginx curl [local-ip]:8080 but as soon as we add a bridge network (podman run --rm -it --network my-bridge-network nginx curl [local-ip]:8080), it stops working.
The same goes for the various pasta options mentioned in this thread; they are ignored or incompatible when additional bridge networks are involved.

I want to do something that seems so easy: proxy a webapp from the host, using a Caddy container which is in a bridge network with other containers. I can't be the only one struggling with this.

@sbrivio-rh
Collaborator

Is this actually fixed?

Well, it looks like the implications of pasta copying host addresses into the containers are catching some users by surprise, so we have to come up with something nicer, for example giving the ability to configure an arbitrary address representing the host (see #22653 (comment)). Preconditions for that are work in progress.

I want to do something that seems so easy: proxy a webapp from the host, using a caddy container which is in a bridge network with other containers. I can't be the only one strugling with this.

What happens if you assign a secondary address on the host, and use that to reach the host from your container? That should work even with a bridge and pasta or slirp4netns in between.

@MathieuMoalic

Thanks, indeed that works!

@Kybeer

Kybeer commented Jul 11, 2024

[network]
default_rootless_network_cmd = "slirp4netns"

This solved my issue as well.

@PhrozenByte
Contributor

explicitly map ports using -T / --tcp-ns, and connect to localhost from container

@sbrivio-rh How do I actually configure that (podman version 5.2.1)? In my case this is the best solution because I actually want the traffic to look local and I know the ports (ports 80 and 443), but adding a $XDG_CONFIG_HOME/containers/containers.conf.d/pasta.conf with

[network]
default_rootless_network_cmd="pasta"
pasta_options=[
    "-T 80,443",
]

fails with

podman[190831]: Error: pasta failed with exit code 1:
podman[190831]: Port forwarding mode 'none' conflicts with previous mode

From the PR merging passt support I vaguely remember that Podman passes some default options to pasta, including -T none. Apparently they are added dynamically and not via the pasta_options config? How do I override them, then? Related to #22477?

@Luap99
Member

Luap99 commented Sep 17, 2024

This needs to be ["-T", "80,443"] as they are separate arguments.
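In containers.conf terms (ports taken from the comment above), the failing and working forms differ only in how the list elements are split:

```toml
[network]
default_rootless_network_cmd = "pasta"
# Wrong: pasta receives the single argument "-T 80,443"
# pasta_options = ["-T 80,443"]
# Right: flag and value as separate list elements
pasta_options = ["-T", "80,443"]
```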

@PhrozenByte
Contributor

Didn't even think about that. Now I'm asking myself why I didn't, it's kinda obvious 🙈 Sorry for the noise!

However, unfortunately it's still not working. My webserver is running in another container (root and with --net bridge), the ports are exposed using --publish 116.203.33.181:80:80/tcp --publish 116.203.33.181:443:443/tcp (116.203.33.181 being the server's public IP, also being the only IPv4 of the enp1s0 interface). I now fixed my pasta config to match ["-T", "80,443"], but curl still fails to connect:

curl: (7) Failed to connect to daniel-rudolf.de port 443 after 0 ms: Could not connect to server

"daniel-rudolf.de" resolves to 116.203.33.181, and 116.203.33.181 is also copied to the container just fine (checked using ip addr).

What am I missing?

@sbrivio-rh
Collaborator

What am I missing?

I think what doesn't work in your case is that you're binding ports 80 and 443 to a specific address, 116.203.33.181, but with pasta's -T, the traffic from the containers appears to come from localhost (127.0.0.1 or ::1), because -T is pretty much a local traffic bypass.

You can either bind those ports with an additional --publish 127.0.0.1:80:80/tcp ..., or use a non-loopback address to connect to your host from the container (you don't need -T at that point).
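As a command sketch of the first option (the webserver image name is illustrative; the addresses and ports are the ones from this thread), the loopback bindings would be published alongside the existing ones:

```shell
# Additionally publish on loopback, so connections forwarded by pasta's -T
# bypass (which arrive on the host from 127.0.0.1) can reach the webserver:
sudo podman run -d --net bridge \
  --publish 116.203.33.181:80:80/tcp  --publish 127.0.0.1:80:80/tcp \
  --publish 116.203.33.181:443:443/tcp --publish 127.0.0.1:443:443/tcp \
  docker.io/library/httpd
```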

By default, the host is represented on the tap device (not on lo, that is, not because of -T) as the address of the default gateway (as reported by ip route show, ip -6 route show), but now you can change that with --map-guest-addr ADDR, so that traffic from the container, directed to ADDR, is mapped on the host as coming from a non-local address (116.203.33.181 in your case).

@PhrozenByte
Contributor

You can either bind those ports with an additional --publish 127.0.0.1:80:80/tcp ...

Thanks @sbrivio-rh! Unfortunately it still doesn't work. The webserver container is now additionally listening on 127.0.0.1, the ports are forwarded using -T 80,443, but no luck:

$ curl -v4 'http://daniel-rudolf.de'
* Host daniel-rudolf.de:80 was resolved.
* IPv6: (none)
* IPv4: 116.203.33.181
*   Trying 116.203.33.181:80...
* connect to 116.203.33.181 port 80 from 116.203.33.181 port 49780 failed: Connection refused
* Failed to connect to daniel-rudolf.de port 80 after 2 ms: Could not connect to server
* closing connection #0
curl: (7) Failed to connect to daniel-rudolf.de port 80 after 2 ms: Could not connect to server

Even if I change the webserver container to --publish 0.0.0.0:80:80/tcp … it doesn't work.

I'm obviously doing something wrong, any hints? 😕

By default, the host is represented on the tap device (not on lo, that is, not because of -T) as the address of the default gateway (as reported by ip route show, ip -6 route show), but now you can change that with --map-guest-addr ADDR, so that traffic from the container, directed to ADDR, is mapped on the host as coming from a non-local address (116.203.33.181 in your case).

I'm not 100% sure whether I understand you right here. What you're saying is that by using --map-guest-addr ADDR one can specify an ADDR that allows connecting to the host as if the connection is coming from a non-local address and without the need of -T. However, this ADDR can't be my primary IP address (i.e. 116.203.33.181), but must be an arbitrarily chosen non-local address. Correct?

I tested it with --map-guest-addr 192.168.23.45 and indeed, it works ✌️ 👍 Notably I also neither needed -T nor --publish 127.0.0.1:80:80/tcp ….

$ curl -v4 'http://192.168.23.45'
*   Trying 192.168.23.45:80...
* Connected to 192.168.23.45 (192.168.23.45) port 80
> GET / HTTP/1.1
> Host: 192.168.23.45
> User-Agent: curl/8.9.1
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 200 OK
< Date: Wed, 18 Sep 2024 16:21:17 GMT
< Server: Apache/2.4
< Upgrade: h2
< Connection: Upgrade
< Last-Modified: Fri, 28 May 2021 12:28:39 GMT
< ETag: "321-5c36303f32bc0"
< Accept-Ranges: bytes
< Content-Length: 801
< Content-Type: text/html
< 
<!DOCTYPE html>
…

However, this poses a practical issue: I then need a DNS server (or a /etc/hosts file) that yields said different IP address instead of my public IP address inside the container. Is there any simple solution to this or do I have to do that manually?

or use a non-loopback address to connect to your host from the container (you don't need -T at that point).

You mean by adding another network interface and using this interface's address? Because adding another address to the same interface shares the same fate as the public IP address after restarting the container: it is copied to the container.

I just tested it by using the IP address of my wireguard interface and it indeed works (also requiring --publish 0.0.0.0:80:80/tcp … for the webserver container), even without -T. Using the wireguard interface naturally is no permanent solution. However, there's no real advantage to --map-guest-addr anyway, right?

@Luap99
Member

Luap99 commented Sep 18, 2024

However, this poses a practical issue: I then need a DNS server (or a /etc/hosts file) that yields said different IP address instead of my public IP address inside the container. Is there any simple solution to this or do I have to do that manually?

In the upcoming podman v5.3 release we will set --map-guest-addr by default for pasta and add the host.containers.internal host entry to that address (#23791)

@PhrozenByte
Contributor

However, this poses a practical issue: I then need a DNS server (or a /etc/hosts file) that yields said different IP address instead of my public IP address inside the container. Is there any simple solution to this or do I have to do that manually?

In the upcoming podman v5.3 release we will set --map-guest-addr by default for pasta and add the host.containers.internal host entry to that address (#23791)

That's a good thing for sure, but having some hostname for --map-guest-addr ADDR isn't really giving me headaches. What gives me headaches are my actual domains: The container should be able to connect to the webserver using my regular domains (e.g. the mentioned daniel-rudolf.de, but there are some more domains associated with this server). So, unless I'm missing something, I'm required to set up a local DNS server that yields this ADDR for my domains - Podman can't really help me here because it doesn't know these domains either.

If there's really no other solution I'll unavoidably go with it (switching back to slirp4netns is no solution either because NAT makes things harder inside the container). But I'd like to investigate -T a little more, because the cons @sbrivio-rh mentioned in #22653 (comment) really are no issue in my surely rather special case. But as I said in #22653 (comment), it's not working for some reason and I'm totally stuck.

@sbrivio-rh
Collaborator

Thanks @sbrivio-rh! Unfortunately it still doesn't work. The webserver container is now additionally listening on 127.0.0.1, the ports are forwarded using -T 80,443, but no luck:

$ curl -v4 'http://daniel-rudolf.de'
* Host daniel-rudolf.de:80 was resolved.
* IPv6: (none)
* IPv4: 116.203.33.181
*   Trying 116.203.33.181:80...
* connect to 116.203.33.181 port 80 from 116.203.33.181 port 49780 failed: Connection refused

Wait, you shouldn't use that address as destination, from the container. You should either:

  • use the address of your default gateway (whatever ip route show or ip -6 route show gives you), or pick an address with --map-guest-addr. Otherwise you're connecting to the container itself (because it also has that address), but not via lo because you're using a non-loopback address
  • with -T, use a loopback address (127.0.0.1 or ::1): pasta will catch connections on lo and forward them to the host

By default, the host is represented on the tap device (not on lo, that is, not because of -T) as the address of the default gateway (as reported by ip route show, ip -6 route show), but now you can change that with --map-guest-addr ADDR, so that traffic from the container, directed to ADDR, is mapped on the host as coming from a non-local address (116.203.33.181 in your case).

I'm not 100% sure whether I understand you right here. What you're saying is that by using --map-guest-addr ADDR one can specify an ADDR that allows connecting to the host as if the connection is coming from a non-local address and without the need of -T. However, this ADDR can't be my primary IP address (i.e. 116.203.33.181), but must be an arbitrarily chosen non-local address. Correct?

Correct.

I just tested it by using the IP address of my wireguard interface and it indeed works (also requiring --publish 0.0.0.0:80:80/tcp … for the webserver container), even without -T. Using the wireguard interface naturally is no permanent solution. However, there's no real advantage to --map-guest-addr anyway, right?

Right, in your case, given that you want connections to look like local ones, there's no real advantage.

What gives me headaches are my actual domains: The container should be able to connect to the webserver using my regular domains (e.g. the mentioned daniel-rudolf.de, but there are some more domains associated with this server). So, unless I'm missing something, I'm required to setup a local DNS server that yields this ADDR for my domains - Podman can't really help me here because it doesn't know these domains either.

That's a new use case, I never thought of it. Well, I run virtual machines (with passt(1)) on passt.top and I often need to fetch git://passt.top/passt/ from those virtual machines, so I'm just used to adding an entry to /etc/hosts for passt.top with the address of the default gateway. With a virtual machine, there's no other solution.

But for containers, this is new to me, and I guess we could actually bind ports in the container without binding specifically to lo, when -T is given. I'll look into this.

By the way, for the moment being, you don't really need a DNS server just for a few domains, you could add entries to /etc/hosts that are specific for your containers.
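As a sketch of that workaround, using the example --map-guest-addr address 192.168.23.45 from earlier in the thread, a per-container hosts entry can be added with Podman's --add-host, without touching the host's own /etc/hosts:

```shell
# Resolve the domain to the pasta-mapped host address inside this container
# only; Podman writes the entry into the container's /etc/hosts:
podman run --rm --add-host daniel-rudolf.de:192.168.23.45 \
    docker.io/library/alpine cat /etc/hosts
```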

@PhrozenByte
Contributor

PhrozenByte commented Sep 18, 2024

Wait, you shouldn't use that address as destination, from the container. You should either:

  • use the address of your default gateway (whatever ip route show or ip -6 route show gives you), or pick an address with --map-guest-addr. Otherwise you're connecting to the container itself (because it also has that address), but not via lo because you're using a non-loopback address

  • with -T, use a loopback address (127.0.0.1 or ::1): pasta will catch connections on lo and forward them to the host

Ah! That's it! 🙈

I just tried the following solutions (as reference for others) and they both work flawlessly, thanks @sbrivio-rh! 👍 ❤️

  • Keeping just the original --publish 116.203.33.181:80:80/tcp … for the webserver container, adding --map-guest-addr 192.168.23.45 to pasta and --add-host daniel-rudolf.de:192.168.23.45 to Podman yields a successful curl -v4 'https://daniel-rudolf.de'
  • Adding --publish 127.0.0.1:80:80/tcp … to the webserver container, adding -T 80,443 to pasta and --add-host daniel-rudolf.de:127.0.0.1 to Podman also yields a successful curl -v4 'https://daniel-rudolf.de'
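Spelled out as full commands, the first variant might look roughly like this (a sketch: the image names are illustrative, and it assumes Podman's `--network pasta:OPTIONS` syntax for passing pasta arguments per container instead of via containers.conf):

```shell
# Rootful webserver container, published on the host's public address (unchanged):
sudo podman run -d --net bridge \
  --publish 116.203.33.181:80:80/tcp \
  --publish 116.203.33.181:443:443/tcp \
  docker.io/library/httpd

# Rootless client container: expose the host on an arbitrary non-local address
# and resolve the domain to that address inside this container only:
podman run --rm \
  --network pasta:--map-guest-addr,192.168.23.45 \
  --add-host daniel-rudolf.de:192.168.23.45 \
  docker.io/curlimages/curl -v4 https://daniel-rudolf.de
```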

However, using the gateway address (i.e. without --map-guest-addr) doesn't work for me (even with --publish 0.0.0.0:80:80/tcp … for the webserver container):

$ ip route show
default via 172.31.1.1 dev enp1s0  metric 100 
172.31.1.1 dev enp1s0 scope link  metric 100
$ curl -v -m10 'http://172.31.1.1'
*   Trying 172.31.1.1:80...
* Connection timed out after 10002 milliseconds
* closing connection #0
curl: (28) Connection timed out after 10002 milliseconds

I'm not going to use that variant (because --map-guest-addr is always better anyway I guess?), but I just want to let you know that for some reason stuff again isn't working for me (I must be jinxed 🙈).

But for containers, this is new to me, and I guess we could actually bind ports in the container without binding specifically to lo, when -T is given. I'll look into this.

That would be just perfect 👍 ☺️

In the meantime I'll go with --map-guest-addr because I don't want to permanently change the webserver container's setup. This works for now because you're absolutely right that a DNS server for just a few domains is a little overkill. However, pasta not binding to lo and therefore also accepting connections to 116.203.33.181 would be a much appreciated change nevertheless. I'd switch over to -T then because it would be the perfect solution for me then ❤️

In the upcoming podman v5.3 release we will set --map-guest-addr by default for pasta and add the host.containers.internal host entry to that address (#23791)

@Luap99 Since I finally understand what's going on I'd like to suggest adding a Podman CLI option similar to --add-host to add hosts to /etc/hosts with the ADDR of --map-guest-addr. How about using --add-host with a magic "IP address", e.g. --add-host daniel-rudolf.de:gw? This could work not only for pasta networks, but also for some (all?) other network modes; for e.g. --net=bridge it would choose 10.88.0.1 (depending on the subnet). Because otherwise I can't be sure about what ADDR Podman chooses, requiring me to add --map-guest-addr myself. No big deal, but nice to have I guess. What do you think?
