Stop leaking 10.137.x.x addresses to VMs #1143
Comments
It's trivial in many ways to identify a Qubes user.
The problem is, two different attack vectors were mixed into this ticket.
Yes, the important issue is the protocol level leak (and also the related "leak into logs and problem reports" issue).
That's like saying "if you want to be secure, don't use remotely exploitable software". The issue is that it's impossible to know how software behaves (the Firefox WebRTC leak was probably a surprise to everyone not directly involved), and you may want to run misbehaving software anyway.
The two important goals are these (assuming that the internal IP address is being sent over the network along with any traffic):
There seem to be two ways to achieve these goals:
If the default is a single IP address, then it should be possible to choose an alternative address, or a random choice within a subnet, instead, since some users have hosts on their LAN with conflicting IP addresses that they want to reach from VMs. It should be possible to do so on a per-VM basis.
That's true, making Qubes undetectable from an exploited VM is not feasible. However, having a generic IP address might slightly mitigate untargeted fingerprinting/surveillance malware that is not designed to explicitly detect Qubes: such malware is likely to record the internal IP address, but might be less likely to record anything else relevant.
No, it's saying: if you want to be secure, learn about the risks and avoid unnecessary ones.
This is a good example, because the risks of WebRTC leakage were well known - for example, TBB hasn't included it for the last three years. And if you choose to run misbehaving software, then you're just playing. That said, I don't like the 10.137 addresses, and don't generally use them. I think Whonix uses 10.152.152 - I can't say I like that. I do like the idea of reassigning the IP address on each VM boot.
You might need to use the software. For example, you might be maintaining a website anonymously and need to test it for compatibility in all browsers (perhaps including older versions with known security holes) with JavaScript and Internet access enabled, and obviously not have the resources to audit and patch all browsers. That's the kind of thing that Qubes and Whonix should allow you to do safely and conveniently.
Is there an easy way to change that in current Qubes other than locally patching the Python code?
Yes, I think non-Qubes Whonix should be changed to use the same scheme that Qubes uses, if possible.
There is already too much required knowledge to stay safe. And stuff like WebRTC, torrent, and local IP leaks means nothing to novice users. We cannot realistically expect a considerable fraction of users to be aware of it and to act accordingly. Ideally we could solve such issues so there is nothing that needs to be pointed out to users, nothing they can do wrong. Secure by default.
I implemented this in the 3 pull requests that should show up on this page. This code adds a new "ip_address_mode" variable to qvm-prefs, with these possible values:
The IP address set this way is separate from the 10.137.x.x address that is seen in the ProxyVM and everywhere else in the system, which is unchanged; a mechanism translates between them. It works by requesting the new "vif-route-qubes-nat" script instead of "vif-route-qubes", which sets up a network namespace in the ProxyVM for each VIF, with iptables SNAT/DNAT rules to translate between the addresses.

Since this is all done in a separate network namespace, there are no changes visible from the normal ProxyVM/NetVM environment, except that vif#.# is now a veth instead of a physical interface (the actual physical interface is hidden inside the network namespace). Custom firewall rules, port mappings and custom ProxyVM iptables rules should thus continue to work unchanged. The only action required is to upgrade core-agent-linux in all VMs that are netvms for AppVMs, and to reconfigure networking on all HVMs or set them to the internal IP address mode. The code seems to work, but it could definitely use some review.
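A minimal sketch of the translation idea described above (this is not the actual vif-route-qubes-nat script; the namespace name, addresses and omitted interface wiring are made up for illustration):

```sh
#!/bin/bash
# Rough sketch only -- NOT the real vif-route-qubes-nat script.
NETNS=vif-nat-example      # hypothetical per-VIF namespace name
FAKE_IP=192.168.1.128      # address the AppVM believes it has
REAL_IP=10.137.2.5         # address the rest of the Qubes network uses

ip netns add "$NETNS"
# ... move the vif / veth pair into $NETNS and set up routes here ...

# Traffic arriving for the VM's "real" address gets its destination rewritten
# to the fake address; traffic leaving the VM gets its source rewritten back.
ip netns exec "$NETNS" iptables -t nat -A PREROUTING  -d "$REAL_IP" \
    -j DNAT --to-destination "$FAKE_IP"
ip netns exec "$NETNS" iptables -t nat -A POSTROUTING -s "$FAKE_IP" \
    -j SNAT --to-source "$REAL_IP"
```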
For Qubes-Whonix we really could use static IP addresses. The current dpkg-trigger/search/replace of internal IP addresses in config files [that don't support variables] approach is really something I don't like, because of the extra complexity layer it adds on top. Although ideally we could keep the 10.152.152.10 IP range - it was painful when we changed from 192 to 10. (https://forums.whonix.org/t/implemented-feature-request-suggestion-gw-network-settings/118) It is more scalable, provides more IP addresses, makes leaks even less likely due to different subnets, and causes less confusion about "does it conflict with my router".
@qubesuser maybe instead of all that network namespace code, we could simply SNAT "invalid" addresses to the one expected by ProxyVM/NetVM? This would go instead of the current DROP rule in
The problem I see with 10.152.152.10 is that if it gets leaked over the network, it reveals that Whonix is being used, which most likely significantly reduces the anonymity set compared to just knowing that Tor is being used.
Intra-AppVM traffic happens via a different addressing scheme, which is currently the 10.137.x.x already in use by Qubes (but by default none of these addresses should be visible to preserve anonymity). It would be nice to make this scheme configurable, but that's a separate issue.
What does this mean exactly? Subnets different from which other subnet? How does that impact leaks?
Anonymous VMs cannot access the home router anyway, since everything is routed to Tor. There might indeed be an issue with non-anonymous VMs and non-technical users not realizing why the router is not accessible. One could maybe run an HTTP server with a help message in the AppVM and redirect connections to port 80/443 to it.
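A rough sketch of that workaround, assuming the AppVM redirects its own outgoing connections to the (otherwise unreachable) router address to a locally served help page; the directory, port and addresses are illustrative:

```sh
#!/bin/bash
# Serve a static help page locally (hypothetical directory and port).
cd /usr/local/share/router-help && python3 -m http.server 8080 &

# Redirect the AppVM's own connections to the "router" address to that page.
# (HTTPS will of course fail the TLS handshake; shown only to illustrate the
# redirect idea.)
iptables -t nat -A OUTPUT -d 192.168.1.1 -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A OUTPUT -d 192.168.1.1 -p tcp --dport 443 -j REDIRECT --to-ports 8080
```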
@marmarek That was the first thing I tried, but the problem is that SNAT can only be used in POSTROUTING in the nat table, and there doesn't seem to be any other "patch source address" or "patch arbitrary u32" functionality in iptables. Without a way of patching the address in the raw table, a network namespace seems the best solution, since both conntrack and routing need to be separate from the main system, and it also nicely encapsulates the whole thing while keeping compatibility. But maybe there's some other way I missed.
On Sun, Aug 30, 2015 at 05:48:35AM -0700, qubesuser wrote:
> If that would be generic "anonymous" IP, it would only leak that some…

Yes, exactly. And this could be hard to debug for non-technical users. I think the single-IP scheme could be extended to general use, not only to anonymous VMs.

Anyway, this is surely too late for such a change to go into R3.0, so it is queued for a later release.

Best Regards,
Whonix also works outside of Qubes. [Such as data centers using a physically isolated Whonix-Gateway. Maybe one day we will also have a physically isolated Qubes-Whonix-Gateway. And no, deprecating everything non-Qubes would be a bad strategic move. It would kill a major source of scrutiny.]
Constantly calming people, explaining this, to stop the FUD is also a waste of project time.
Internal network interface, external network interface. Connecting different subnets doesn't happen by accident. Search term: "connect different subnet"
I still think that using network namespaces is unnecessarily complex. Such a change could be combined with creating separate chains for each VM. But before that, we need to think whether we really can abandon guarding IP addresses.

Best Regards,
This assumes that the firewall is only operating at one level of depth, but there are use cases where this is not so - e.g. a firewall VM in front of a TorVM. In that case, to achieve stream isolation you can't use MASQUERADE or the interface name at the TorVM.
@marmarek I did it like this to preserve compatibility with existing firewall setups, to avoid requiring custom kernel modules or routing packets to userspace, and to have a less weird configuration (the network namespace itself is a bit weird, but you can think of it as "embedding" a VM between the ProxyVM/NetVM and the AppVM, after which it's just normal NAT routing). If the firewall design is completely changed, then the single MASQUERADE will indeed work.

There is still the issue that DNAT can only be used in PREROUTING, which means that you need to perform the routing decision with iptables rules in the PREROUTING chain (so that you know what to DNAT to), then MARK the packet to pass the routing decision on to Linux routing, and remove routes based on IP addresses. The advantage of this scheme is that it's faster, does not require having the 10.137.x.x addresses at all, and allows either having no internal addresses or using any scheme, including IPv6 addresses. The disadvantage is breaking compatibility and having a very unconventional routing setup.

It might indeed be necessary to patch Tor to isolate based on the incoming interface as well as the IP, though (actually it would be even nicer to be able to embed an "isolation ID" in an IP option so that stream isolation works with chained ProxyVMs, and patch everything to use it; not sure if that's realistic).
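A hedged sketch of just the mark-based routing part of that alternative, where replies are steered back to the right vif by connection mark rather than by (shared) destination address; the marks, table numbers and interface names are invented for illustration:

```sh
#!/bin/bash
# Outgoing: remember which vif this connection entered through.
iptables -t mangle -A PREROUTING -i vif3.0 -j CONNMARK --set-mark 0x3

# Incoming replies: restore the mark so policy routing can pick the per-vif
# table, even though every AppVM uses the same 192.168.1.128 address.
iptables -t mangle -A PREROUTING -i eth0 -j CONNMARK --restore-mark
ip rule  add fwmark 0x3 table 103
ip route add 192.168.1.128/32 dev vif3.0 table 103
```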
Having 256 addresses should suffice, though; if someone needs more, they can probably afford to spend some time renumbering.
I think it should be possible to route packets to an external 192.168.1.1 even if it's the internal gateway, except for the DNS and DHCP ports, and that should be enough for accessing the home router. It should also be possible on Linux AppVMs to route packets to 192.168.1.128 externally even if it's the address of the local interface (by removing it from the "local" routing table), but that might break some software that doesn't expect that.
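A sketch of the second part (removing the fake address from the kernel's "local" routing table so that packets to the colliding LAN address actually leave the VM); the address and interface name are examples, and this is not current Qubes behaviour:

```sh
#!/bin/bash
# Drop the kernel's "this address is local" entry so traffic to 192.168.1.128
# is routed out of eth0 instead of being delivered to the VM itself.
# Warning: software that expects to reach the VM's own address may break.
ip route del table local local 192.168.1.128 dev eth0
```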
Ah, you mean accidentally routing packets to the external network, bypassing Tor, because it has the same subnet used for private addressing. That's indeed worrying, but proper firewall/routing should prevent that, and it can be an issue regardless of the IP choice if the external network happens to use the same range.
It should. Sure. But sometimes there are obscure bugs and leaks, unseen by anyone for years. This one is the best example that comes to my mind:
That's why I appreciate the extra protection by separate subnets.
Yes. And somehow including a free ARP spoofing defense, preventing impersonation of other internal LAN IPs, would be a bonus.
Yes. Static IP addresses are very useful for Whonix, such as for setting up Tor hidden services and other stuff. But if you can abolish that need with iptables skills, I am all ears.
For tracking purposes, what is the current status of this issue? @qubesuser's three pull requests are still open. Does more work need to be done before the PRs can be merged?
Keep "main" IP (the one in xenstore) as the one seen by the netvm, and pass the "fake" one (the one seen by the VM) as script parameter. Fixes QubesOS/qubes-issues#1143
Since 'script' xenstore entry no longer allows passing arguments (actually this always was a side effect, not intended behaviour), we need to pass additional parameters some other way. Natural choice for Qubes-specific script is to use QubesDB. And since those parameters are passed some other way, it is no longer necessary to keep it as separate script. Fixes QubesOS/qubes-issues#1143
Even when it's veth pair into network namespace doing NAT. QubesOS/qubes-issues#1143
This helps hide the VM IP for anonymous VMs (Whonix) even when some application leaks it. The VM will know only some fake IP, which should be set to something as common as possible. The feature is mostly implemented at the (Proxy)VM side using NAT in a separate network namespace. The core here only passes arguments to it. It is designed so that multiple VMs can use the same IP and still not interfere with each other. Even more: it is possible to address each of them (using their "native" IP), even when multiple of them share the same "fake" IP.

The original approach (marmarek/old-qubes-core-admin#2) used network script arguments by appending them to the script name, but libxl in Xen >= 4.6 fixed that side effect and it isn't possible anymore. So use QubesDB instead.

From the user's POV, this adds 3 "features":
- net/fake-ip - IP address visible in the VM
- net/fake-gateway - default gateway in the VM
- net/fake-netmask - network mask

The feature is enabled if net/fake-ip is set (to some IP address) and is different than the VM's native IP. All of those "features" can be set on the template, to affect all of its VMs. Firewall rules etc. in the (Proxy)VM should still be applied to the VM's "native" IP.

Fixes QubesOS/qubes-issues#1143
Done. See the marmarek/qubes-core-admin@2c6c476 commit message for details. This will need to be documented (based on this ticket and that commit message).
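A hedged usage sketch of those settings (feature names taken from the commit message above; the exact CLI tool and feature spelling may differ between releases, and the VM name and addresses are just the generic example discussed in this ticket):

```sh
#!/bin/bash
# Make an AppVM present a generic address instead of its 10.137.x.x one.
# "personal" is an example VM name; values follow the 192.168.1.x example
# proposed earlier in this ticket.
qvm-features personal net/fake-ip      192.168.1.128
qvm-features personal net/fake-gateway 192.168.1.1
qvm-features personal net/fake-netmask 255.255.255.0
```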
@adrelanos this requires cooperation from the ProxyVM. If the ProxyVM does not cooperate (for example an outdated package, or no support for this feature - like in MirageOS currently), the AppVM may learn its "real" IP (10.137.x.y), or, more likely, end up with non-working networking. I see two options:
A separate issue is how to detect whether such a ProxyVM supports this feature, but it can be done in a similar way to how Windows tools are detected - the VM (the template in this case) will expose its supported features in QubesDB at startup, and it will be recorded that VMs based on this template support it. BTW, the same mechanism can be used to configure Whonix-specific defaults for a VM (like enabling this feature automatically).
This is the IP known to the domain itself and downstream domains. It may be a different one than the one seen by its upstream domain. Related to QubesOS/qubes-issues#1143
Set parameters for possibly hiding domain's real IP before attaching network to it, otherwise we'll have race condition with vif-route-qubes script. QubesOS/qubes-issues#1143
Core3 no longer reuses the netvm's own IP for the primary DNS. At the same time, disable dropping traffic to the netvm itself, because it breaks DNS (as one of the blocked things). This allows a VM to learn the real netvm IP, but:
- this mechanism is not intended to avoid detection from an already compromised VM, only to prevent unintentional leaks
- this can be prevented using vif-qubes-nat.sh on the netvm itself (so it will also have its own IP hidden)

QubesOS/qubes-issues#1143
Use /32 inside the network namespace too. Otherwise inter-VM traffic is broken, as all VMs seem to be in a single /24 subnet but in fact are not. QubesOS/qubes-issues#1143
Currently Qubes exposes a 10.137.x.x address to each VM, which means it's trivial to detect that someone is using Qubes; it's also possible to tell which gateway a VM is using, as well as the order in which it was assigned its address.
Several applications such as Firefox with WebRTC support and BitTorrent clients leak the local IP address, so this can often be detected across the network in addition to within an exploited VM.
Instead, all VMs should use the most common local IP address in use (which I think should be either 192.168.1.2 or 192.168.1.128 with gateway 192.168.1.1, but some actual research should be done on this).
If the IP address conflicts with the numbering on a LAN the user wants to access, Qubes should allow changing the subnet. In this case it would be prudent to choose an address at random for each VM within the netmask specified by the user, to prevent correlation between VMs on the same host with non-default addressing (since detecting Qubes is going to be possible anyway on an exploited VM).
Firewall VMs should then NAT those addresses to an internal scheme such as 10.137.x.x (but I think 10.R.x.x where R is by default random and configurable is a better choice) so that internal networking and port forwarding can be supported.
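For the per-VM random choice within a user-specified netmask, a trivial illustration (not existing Qubes behaviour; the subnet is an example):

```sh
#!/bin/bash
# Pick a random host address in a user-chosen /24, skipping .0 (network),
# .1 (gateway) and .255 (broadcast).
SUBNET=192.168.7
HOST=$(( (RANDOM % 253) + 2 ))   # yields 2..254
echo "assigning ${SUBNET}.${HOST} to this VM"
```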