Stop leaking 10.137.x.x addresses to VMs #1143

Closed
qubesuser opened this issue Aug 21, 2015 · 25 comments
Labels
C: core, P: major (between "default" and "critical" in severity), privacy (data or information privacy through technological means), r4.0-fc24-cur-test, r4.0-fc25-cur-test, r4.0-jessie-cur-test, r4.0-stretch-cur-test, release notes (should be mentioned in the release notes)

@qubesuser

Currently Qubes exposes 10.137.x.x addresses to VMs, which means it's trivial to detect that someone is using Qubes; it's also possible to tell which gateway a VM is using and the order in which it was assigned its address.

Several applications such as Firefox with WebRTC support and BitTorrent clients leak the local IP address, so this can often be detected across the network in addition to within an exploited VM.

Instead, all VMs should use the most common local IP address in use (which I think should be either 192.168.1.2 or 192.168.1.128 with gateway 192.168.1.1, but some actual research should be done on this).

If the IP address conflicts with the numbering on a LAN the user wants to access, Qubes should allow the subnet to be changed: in this case it would be prudent to choose an address at random for each VM within the netmask specified by the user, to prevent correlation between VMs on the same host with non-default addressing (since detecting Qubes is going to be possible anyway from an exploited VM).

Firewall VMs should then NAT those addresses to an internal scheme such as 10.137.x.x (though I think 10.R.x.x, where R is random by default and configurable, is a better choice) so that internal networking and port forwarding can still be supported.
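
For context, the address in question is plainly visible inside any VM. A quick check (a minimal sketch, assuming the usual eth0 interface name inside the VM):

    # Inside an AppVM: show the IPv4 address assigned to the primary interface.
    # On a default install this prints a 10.137.x.x address - exactly the value
    # that WebRTC-enabled browsers or BitTorrent clients may end up disclosing.
    ip -4 addr show dev eth0 | grep inet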

@unman
Member

unman commented Aug 23, 2015

It's trivial in many ways to identify a Qubes user.
The STUN problem in particular has been discussed on the lists. If this is an issue for you then you shouldn't be using WebRTC or torrents at all. If you're using Tor then you wouldn't be using them anyway.
It isn't quite clear to me exactly what you are proposing, or what risk it addresses, and I don't see how random addresses within a non-default subnet would prevent correlation. If anything it seems to make it more likely. That is, I think, why it's better for all users to use the same subnet.
On one detail, I believe that the most common router address is 192.168.1.x, although there are significant deviations (e.g. Apple uses 10.0.0.1, Microsoft 192.168.2.1, and D-Link uses a range of other 192.168.x.x and 10.x.x.x addresses).

@adrelanos
Member

Several applications such as Firefox with WebRTC support and BitTorrent clients leak the local IP address, so this can often be detected across the network in addition to within an exploited VM.

The problem is, two different attack vectors were mixed into this ticket.

  • Exploited VM: the exploit can trivially detect that it's running inside Qubes, because things like qubes-(core|gui)-agent are installed, among other giveaways. Probably impossible to avoid.
  • Protocol-level leaks (e.g. the WebRTC leak you mentioned, and others): legitimate. It would indeed be nice if the same local IP addresses as in non-Qubes Whonix were used.

@qubesuser
Author

Yes, the important issue is the protocol level leak (and also the related "leak into logs and problem reports" issue).

If this is an issue for you then you shouldn't be using WebRTC or torrents at all

That's like saying "if you want to be secure, don't use remotely exploitable software".

The issue is that it's impossible to know how software behaves (the Firefox WebRTC leak was probably a surprise to everyone not directly involved), and you may want to run misbehaving software anyway.

It isn't quite clear to me exactly what you are proposing, or what risk it addresses, and I don't see how random addresses within a non-default subnet would prevent correlation. If anything it seems to make it more likely. That is, I think, why it's better for all users to use the same subnet.

The two important goals are these (assuming that the internal IP address is being sent over the network along with any traffic):

  1. "Avoid inter-VM correlation": prevent detecting that the network traffic generated by two AnonVMs in the same Qubes instance has the same author: this requires making sure that the common part of their IP addresses is used by a significant number of other Tor users
  2. "Avoid temporal correlation": prevent detecting that the network traffic generated today by an AnonVM has the same author as the traffic generated tomorrow: this requires making sure that its IP address is used by a significant number of other Tor users, or that the IP address is changed frequently among a subnet that is used by a significant number of other Tor users

There seem to be two ways to achieve these goals:

  1. Having all VMs for all users use the same very common private IP address (or one of a list of very few such addresses for all VMs). Best candidate is probably 192.168.1.128
  2. Randomly assign IP addresses from a large commonly used private subnet with the ability to exclude some ranges if needed, with different addresses for each VM and changing the address at least for every reboot of the VMs. Best candidate is probably 10.x.x.x

If the default is a single IP address, then it should be possible to choose an alternative address (or a random choice within a subnet) instead, since some users need to reach hosts on their LAN whose addresses conflict with the default and want to connect to them from VMs. It should be possible to do so on a per-VM basis.

Exploited VM: the exploit can trivially detect that it's running inside Qubes, because things like qubes-(core|gui)-agent are installed, among other giveaways. Probably impossible to avoid.

That's true, making Qubes undetectable from an exploited VM is not feasible.

However, having a generic IP address might slightly mitigate untargeted fingerprinting/surveillance malware that is not designed to explicitly detect Qubes: such malware is likely to record the internal IP address, but may be less likely to record anything else that gives Qubes away.

@unman
Member

unman commented Aug 24, 2015

That's like saying "if you want to be secure, don't use remotely exploitable software".

No, it's saying, if you want to be secure learn about the risks and avoid unnecessary ones.

The issue is that it's impossible to know how software behaves (the Firefox WebRTC leak was probably a surprise to everyone not directly involved), and you may want to run misbehaving software anyway.

This is a good example, because the risks of WebRTC leakage were well known - for example, TBB hasn't included it for the last three years. And if you choose to run misbehaving software then you're just playing.
Qubes isn't a magic bullet. Tor isn't a magic bullet. Both require users to change their habits and learn new, more secure, ways of doing things. I don't see any way around this.

That said, I don't like the 10.137 addresses, and don't generally use them. I think Whonix uses 10.152.152 - I can't say I like that. I do like the idea of reassigning IP addresses on each VM boot.

@qubesuser
Author

And if you choose to run misbehaving software then you're just playing.

You might need to use the software. For example, you might be maintaining a website anonymously and need to test it for compatibility in all browsers (perhaps including older versions with known security holes) with JavaScript and Internet access enabled, and obviously not have the resources to audit and patch all browsers.

That's the kind of thing that Qubes and Whonix should allow you to do safely and conveniently.

That said, I don't like the 10.137 addresses, and don't generally use them.

Is there an easy way to change that in current Qubes other than locally patching the Python code?

I think Whonix uses 10.152.152 - I can't say I like that.

Yes, I think non-Qubes Whonix should be changed to use the same scheme that Qubes uses, if possible.

@adrelanos
Member

If this is an issue for you then you shouldn't be using WebRTC or torrents at all

That's like saying "if you want to be secure, don't use remotely exploitable software".

No, it's saying, if you want to be secure learn about the risks and avoid unnecessary ones.

There is already too much knowledge required to stay safe. Terms like WebRTC, torrents, and local IP leaks mean nothing to novice users. We cannot realistically expect a considerable fraction of users to be aware of them and act accordingly. Ideally we would solve such issues so there is nothing to point out to users and nothing they can do wrong. Secure by default.

@qubesuser
Author

I implemented this in the 3 pull requests that should show up on this page.

This code adds a new "ip_address_mode" variable to qvm-prefs, with these possible values:

  • "internal": current behavior, set VM IP address to internal 10.137.x.x address
  • "anonymous": set VM IP address to 192.168.1.128/24
  • "custom": set VM IP address based on custom_ip_address/gateway/netmask qvm-pref variables
  • "auto": anonymous for AppVMs and HVM templates, internal for NetVMs, ProxyVMs, and PV templateVMs
Future work could also add "netvm" and "external" modes that would use the NetVM address or the externally visible IP address.
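
A hedged usage sketch of these prefs, assuming the R3.x "qvm-prefs -s" syntax; the VM names and the exact spelling of the custom gateway/netmask properties are illustrative guesses, not taken from the pull requests:

    # Give "personal" the generic anonymous address (192.168.1.128/24, gateway 192.168.1.1):
    qvm-prefs -s personal ip_address_mode anonymous

    # Pin a custom address for a VM that must coexist with a conflicting LAN
    # (property names other than ip_address_mode are guessed from the shorthand above):
    qvm-prefs -s work ip_address_mode custom
    qvm-prefs -s work custom_ip_address 172.16.5.2
    qvm-prefs -s work custom_ip_gateway 172.16.5.1
    qvm-prefs -s work custom_ip_netmask 255.255.255.0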

The IP address set this way is separate from the 10.137.x.x IP address that is then seen in the ProxyVM and everywhere else in the system, which is unchanged, and there is a mechanism that translates between them.

The mechanism works by requesting the new "vif-route-qubes-nat" script instead of "vif-route-qubes", which then sets up a network namespace in the proxy VM for each VIF, with iptables SNAT/DNAT rules to translate between the addresses.

Since this is all done in a separate network namespace, there are no changes visible from the normal ProxyVMs/NetVMs environment, except for the fact that vif#.# is now a veth instead of a physical interface (the actual physical interface is hidden inside the network namespace).

Custom firewall rules, port mappings and custom ProxyVM iptables rules should thus continue to work unchanged.
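
As an illustration only (not the actual vif-route-qubes-nat script), the per-VIF namespace idea boils down to something like the following, with vif7.0, the native IP 10.137.2.9, and the fake IP/gateway 192.168.1.128/192.168.1.1 as example values:

    # Move the real VIF into a private namespace and replace it with a veth pair,
    # so the ProxyVM keeps seeing an interface named vif7.0.
    ip netns add qubes-nat-vif7.0
    ip link set vif7.0 netns qubes-nat-vif7.0
    ip link add vif7.0 type veth peer name vif7.0-nat
    ip link set vif7.0-nat netns qubes-nat-vif7.0

    # Inside the namespace: the VM-facing side carries the fake gateway address,
    # and NAT translates between the fake and native addresses in both directions.
    ip netns exec qubes-nat-vif7.0 ip addr add 192.168.1.1/24 dev vif7.0
    ip netns exec qubes-nat-vif7.0 ip link set vif7.0 up
    ip netns exec qubes-nat-vif7.0 iptables -t nat -A PREROUTING \
        -d 10.137.2.9 -j DNAT --to-destination 192.168.1.128
    ip netns exec qubes-nat-vif7.0 iptables -t nat -A POSTROUTING \
        -s 192.168.1.128 -j SNAT --to-source 10.137.2.9
    # (Routing, forwarding and the veth end facing the ProxyVM are omitted here.)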

The only action required is to upgrade core-agent-linux in all VMs that act as netvms for AppVMs, and to reconfigure networking on all HVMs or set them to the internal IP address mode.

The code seems to work, but it could definitely use some review.

@adrelanos
Member

For Qubes-Whonix we really could use static IP addresses. The current dpkg-trigger/search/replace approach for internal IP addresses in config files [that don't support variables] is really something I don't like, because of the extra complexity layer it adds on top.

Although ideally we could keep the 10.152.152.x IP range. It was painful when we changed from 192.x to 10.x (https://forums.whonix.org/t/implemented-feature-request-suggestion-gw-network-settings/118). It's more scalable, provides more IP addresses, makes leaks even less likely (due to different subnets), and causes less confusion about "does it conflict with my router".

@marmarek
Member

@qubesuser maybe instead of all that network namespace code, we could simply SNAT "invalid" addresses to the one expected by ProxyVM/NetVM? This would go instead of the current DROP rule in raw table.

@qubesuser
Author

The problem I see with 10.152.152.10 is that if it gets leaked over the network, it reveals that Whonix is being used, which most likely significantly reduces the anonymity set compared to just knowing that Tor is being used.

Provides more IP addresses.

What for? As far as I can tell you only really need two (IP and gateway IP), since with this scheme different AppVMs connected to the same ProxyVM can share the same address.

Intra-AppVM traffic happens via a different addressing scheme, which is currently the 10.137.x.x already in use by Qubes (but by default none of these addresses should be visible to preserve anonymity). It would be nice to make this scheme configurable, but that's a separate issue.

Leaks even less likely, due to different subnets

What does this mean exactly? Subnets different from which other subnet? How does that impact leaks?

Less confusion by "does it conflict with my router".

Anonymous VMs cannot access the home router anyway since everything is routed to Tor.

There might indeed be an issue with non-anonymous VM and non-technical users not realizing why the router is not accessible.

We could maybe run an HTTP server with a help message in the AppVM and redirect connections to ports 80/443 to it.
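
Purely as a hypothetical sketch of that idea (nothing in Qubes currently does this; the path and port are invented):

    # Serve a static explanation locally, then pull web traffic aimed at the
    # (unreachable) router onto it, so browsing to 192.168.1.1 shows a hint
    # instead of timing out.
    mkdir -p /srv/router-help
    echo 'This VM is routed through Tor; the home router is not reachable from here.' \
        > /srv/router-help/index.html
    (cd /srv/router-help && python3 -m http.server 8080) &
    iptables -t nat -A OUTPUT -d 192.168.1.1 -p tcp -m multiport --dports 80,443 \
        -j REDIRECT --to-ports 8080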

@qubesuser
Author

@marmarek That was the first thing I tried, but the problem is that SNAT can only be used in POSTROUTING in the nat table, and there doesn't seem to be any other "patch source address" or "patch arbitrary u32" functionality in iptables.

Without a way of patching the address in the raw table, a network namespace seems the best solution, since both conntrack and routing need to be separate from the main system, and it also nicely encapsulates the whole thing while keeping compatibility.

But maybe there's some other way I missed.

@marmarek marmarek added this to the Release 3.1 milestone Aug 30, 2015
@marmarek
Member

On Sun, Aug 30, 2015 at 05:48:35AM -0700, qubesuser wrote:

The problem I see with 10.152.152.10 is that if it gets leaked over the network, it reveals that Whonix is being used, which most likely significantly reduces the anonymity set compared to just knowing that Tor is being used.

If that were a generic "anonymous" IP, it would only leak that some AnonVM on Qubes is used. And I think there are many simpler ways to learn that.

Less confusion by "does it conflict with my router".

Anonymous VMs cannot access the home router anyway since everything is routed to Tor.

There might indeed be an issue with non-anonymous VM and non-technical users not realizing why the router is not accessible.

Yes, exactly. And this could be hard to debug for non-technical users.

I think the single-IP scheme could be extended to general use, not only AnonVMs. This would greatly ease the network configuration code inside the VM (static configuration using generic tools, instead of custom tools using QubesDB). At least for client-only VMs (no inter-VM networking enabled).

Anyway, this is surely too late for such a change to go into R3.0; queued for R3.1.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@marmarek marmarek added the enhancement, C: core, and P: major labels Aug 30, 2015
@adrelanos
Member

Provides more IP addresses.

What for?

Whonix also works outside of Qubes. [Such as data centers using a physically isolated Whonix-Gateway. Maybe one day we will also have a physically isolated Qubes-Whonix-Gateway. And no, deprecating everything non-Qubes would be a bad strategic move; it would kill a major source of scrutiny.]

Less confusion by "does it conflict with my router".

Anonymous VMs cannot access the home router anyway since everything is routed to Tor.

There might indeed be an issue with non-anonymous VM and non-technical users not realizing why the router is not accessible.

Yes, exactly. And this could be hard to debug for non-technical users.

Constantly calming people down and explaining this to stop the FUD is also a waste of project time.

Leaks even less likely, due to different subnets

What does this mean exactly? Subnets different from which other subnet? How does that impact leaks?

Internal network interface vs. external network interface. Connecting different subnets doesn't happen by accident. Search term: "connect different subnet".

@marmarek
Member

I still think that using network namespaces is an unnecessarily complex thing here. Doing double NAT at each ProxyVM/NetVM doesn't sound easy to debug either, and probably has some performance impact. Maybe we can simply abandon the use of IP addresses in firewall rules and rely on the source interface name? Then a single MASQUERADE at the end of the chain would cover all we need.

Such a change could be combined with creating separate chains for each source VM, so there would be a single rule matching a given VM (actually two of them in the case of an HVM, as there will be two interfaces: one for the emulated device and one for PV). This would make the firewall much more readable, somewhat easier to customize (/rw/config/qubes-firewall-user-script), and somewhat better performing (though probably negligibly). I was planning such a change for some time...
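
A rough sketch of that idea (chain and interface names are invented for illustration, not taken from any existing Qubes script):

    # Per-VM chains keyed on the source interface instead of the source IP.
    iptables -N QBS-work
    iptables -A FORWARD -i vif3.0 -j QBS-work    # PV interface of the "work" VM
    iptables -A QBS-work -p tcp --dport 443 -j ACCEPT
    iptables -A QBS-work -j DROP

    # With interfaces doing the per-VM matching, a single MASQUERADE at the end
    # handles address translation for everything leaving via the uplink.
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE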

But before that we need to think about whether we really can abandon guarding the IP address. Some possible issues:

  • inter-VM traffic - does the destination VM need a reliable source IP address? Probably yes.
  • whonix-gw/TorVM - AFAIR stream isolation between VMs relies on source IP; @adrelanos, am I correct?
  • some network analysis tools, traffic inspection, etc. (tcpdump, netflow, ...)

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@unman
Member

unman commented Aug 30, 2015

Maybe we can simply abandon the use of IP addresses in firewall rules and rely on the source interface name? Then a single MASQUERADE at the end of the chain would cover all we need.

This assumes that the firewall is only operating at one level of depth, but there are use cases where this is not so - e.g. a firewall VM in front of a TorVM. In that case, to achieve stream isolation you can't use MASQUERADE or the interface name at the TorVM.

@qubesuser
Author

@marmarek I did it like this to preserve compatibility with existing firewall setups, to avoid requiring custom kernel modules or routing packets to userspace, and to have a less weird configuration (the network namespace itself is a bit weird, but you can think of it as "embedding" a VM between the ProxyVM/NetVM and the AppVM, after which it's just normal NAT routing).

If the firewall design is completely changed, then the single MASQUERADE will indeed work. There is still the issue that DNAT can only be used in prerouting, which means that you need to perform the routing decision with iptables rules in the PREROUTING chain (so that you know what to DNAT to) and then MARK the packet to pass the routing decision to Linux routing, and remove routes based on IP addresses.

The advantage of this scheme is that it's faster, does not require having the 10.137.x.x addresses at all, and allows either having no internal addresses or using any scheme, including IPv6 addresses. The disadvantage is breaking compatibility and having a very unconventional routing setup.
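
A sketch of that PREROUTING-plus-mark approach, using an invented port-forwarding example (interface names, port and table numbers are illustrative):

    # Decide in PREROUTING what the traffic is for: here, uplink port 2222 belongs
    # to the VM behind vif4.0, whose (shared, fake) address is 192.168.1.128.
    iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 2222 -j MARK --set-mark 4
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
        -j DNAT --to-destination 192.168.1.128
    # Then a mark-based routing table, rather than per-IP routes, picks the VIF.
    ip rule add fwmark 4 table 104
    ip route add 192.168.1.128 dev vif4.0 table 104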

It might indeed be necessary to patch Tor to isolate based on incoming interface as well as IP though (actually it would be even nicer to be able to embed an "isolation ID" in an IP option so that stream isolation works with chained ProxyVMs and patch everything to use it, not sure if realistic).

@adrelanos

Such as data centers using physically isolated Whonix-Gateway

Having 256 addresses should suffice, though; if someone needs more, they can probably afford to spend some time renumbering.

Constantly calming people, explaining this, to stop the FUD is also a waste of project time.

I think it should be possible to route packets to an external 192.168.1.1 even if it's the internal gateway, except for the DNS and DHCP ports, and that should be enough for accessing the home router.

It should also be possible on Linux AppVMs to route packets to 192.168.1.128 externally even if it's the address of the local interface (by removing it from the "local" routing table), but that might break some software that doesn't expect that.

Internal network interface, external network interface. Connecting different subnets doesn't happen by accident. Search term: "connect different subnet"

Ah you mean accidentally routing packets to the external network bypassing Tor because it has the same subnet used for private addressing.

That's indeed worrying, but proper firewall/routing should prevent that and it can be an issue regardless of IP choice if the external network happens to have the same IP.

@adrelanos
Member

Ah you mean accidentally routing packets to the external network bypassing Tor because it has the same subnet used for private addressing.

That's indeed worrying, but proper firewall/routing should prevent that and it can be an issue regardless of IP choice if the external network happens to have the same IP.

It should, sure. But sometimes there are obscure bugs and leaks, unseen by anyone for years. This one is the best example that comes to my mind:

That's why I appreciate the extra protection by separate subnets.

inter-VM traffic - does the destination VM need reliable source IP address? probably yes

Yes. And a somehow-included free ARP spoofing defense, preventing impersonation of other internal LAN IPs, would be a bonus.

  • whonix-gw/TorVM - AFAIR stream isolation between VMs relies on source IP; @adrelanos, am I correct?

Yes. (IsolateClientAddr)

Static IP addresses are very useful for Whonix. Such as for setting up Tor hidden services. And other stuff. But if you can abolish that need with iptables skills, I am all ears.

@marmarek marmarek modified the milestones: Release 4.0, Release 3.1 Feb 8, 2016
@andrewdavidwong andrewdavidwong added the privacy label Apr 7, 2016
andrewdavidwong added a commit that referenced this issue May 31, 2016
@andrewdavidwong
Member

For tracking purposes, what is the current status of this issue?

@qubesuser's three pull requests are still open. Does more work need to be done before the PRs can be merged?

marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Oct 31, 2016
Keep "main" IP (the one in xenstore) as the one seen by the netvm, and
pass the "fake" one (the one seen by the VM) as script parameter.

Fixes QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Oct 31, 2016
Since 'script' xenstore entry no longer allows passing arguments
(actually this always was a side effect, not intended behaviour), we
need to pass additional parameters some other way. Natural choice for
Qubes-specific script is to use QubesDB.
And since those parameters are passed some other way, it is no longer
necessary to keep it as separate script.

Fixes QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Oct 31, 2016
Even when it's veth pair into network namespace doing NAT.

QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Oct 31, 2016
This helps hiding VM IP for anonymous VMs (Whonix) even when some
application leak it. VM will know only some fake IP, which should be set
to something as common as possible.
The feature is mostly implemented at (Proxy)VM side using NAT in
separate network namespace. Core here is only passing arguments to it.
It is designed the way that multiple VMs can use the same IP and still
do not interfere with each other. Even more: it is possible to address
each of them (using their "native" IP), even when multiple of them share
the same "fake" IP.

Original approach (marmarek/old-qubes-core-admin#2) used network script
arguments by appending them to script name, but libxl in Xen >= 4.6
fixed that side effect and it isn't possible anymore. So use QubesDB
instead.

From user POV, this adds 3 "features":
 - net/fake-ip - IP address visible in the VM
 - net/fake-gateway - default gateway in the VM
 - net/fake-netmask - network mask
The feature is enabled if net/fake-ip is set (to some IP address) and is
different than VM native IP. All of those "features" can be set on
template, to affect all of VMs.
Firewall rules etc in (Proxy)VM should still be applied to VM "native"
IP.

Fixes QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Oct 31, 2016
@marmarek
Member

Done. See marmarek/qubes-core-admin@2c6c476 message for details. Will need to be documented (based on this ticket and that commit message)
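
For reference, a hedged usage sketch of the new mechanism, assuming the R4.0 qvm-features tool and taking the feature names verbatim from that commit message (the VM name is a placeholder):

    # Make "anon-vm" see a generic address instead of its native 10.137.x.y one.
    # Per the commit message, the feature activates when net/fake-ip is set and
    # differs from the VM's native IP.
    qvm-features anon-vm net/fake-ip      192.168.1.128
    qvm-features anon-vm net/fake-gateway 192.168.1.1
    qvm-features anon-vm net/fake-netmask 255.255.255.0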

@marmarek
Member

@adrelanos this requires cooperation from the ProxyVM. If the ProxyVM does not cooperate (for example an outdated package, or no support for this feature - like in MirageOS currently), the AppVM may learn its "real" IP (10.137.x.y), or more likely have non-working networking. I see two options:

  • do not allow setting such a non-cooperating netvm when this feature is enabled,
  • allow it, but disable this feature and issue a warning - and enable it back when the connected netvm is changed/starts supporting it.

A separate issue is how to detect whether such a ProxyVM supports this feature, but it can be done in a similar way to how Windows tools are detected - the VM (the template in this case) will expose supported features in QubesDB at startup, and it will be recorded that VMs based on this template support it. BTW, the same mechanism can be used to configure Whonix-specific defaults for a VM (like enabling this feature automatically).

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 4, 2016
This is the IP known to the domain itself and downstream domains. It may be a different one than seen by its upstream domain.

Related to QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 4, 2016
Set parameters for possibly hiding domain's real IP before attaching
network to it, otherwise we'll have race condition with vif-route-qubes
script.

QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Nov 4, 2016
Even when it's veth pair into network namespace doing NAT.

QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Nov 4, 2016
Core3 no longer reuse netvm own IP for primary DNS. At the same time,
disable dropping traffic to netvm itself because it breaks DNS (as one
of blocked things). This allows VM to learn real netvm IP, but:
 - this mechanism is not intended to avoid detection from already
 compromised VM, only about unintentional leaks
 - this can be prevented using vif-qubes-nat.sh on the netvm itself (so
 it will also have hidden its own IP)

QubesOS/qubes-issues#1143
marmarek added a commit to marmarek/old-qubes-core-agent-linux that referenced this issue Nov 4, 2016
Use /32 inside network namespace too. Otherwise inter-VM traffic is
broken - as all VMs seems to be in a single /24 subnet, but in fact are
not.

QubesOS/qubes-issues#1143
andrewdavidwong added a commit that referenced this issue Nov 6, 2016
@qubesos-bot

Automated announcement from builder-github

The package python2-dnf-plugins-qubes-hooks-4.0.0-1.fc24 has been pushed to the r4.0 testing repository for the Fedora fc24 template.
To test this update, please install it with the following command:

sudo yum update --enablerepo=qubes-vm-r4.0-current-testing

Changes included in this update

@qubesos-bot

Automated announcement from builder-github

The package python2-dnf-plugins-qubes-hooks-4.0.0-1.fc25 has been pushed to the r4.0 testing repository for the Fedora fc25 template.
To test this update, please install it with the following command:

sudo yum update --enablerepo=qubes-vm-r4.0-current-testing

Changes included in this update

@qubesos-bot

Automated announcement from builder-github

The package qubes-core-agent_4.0.0-1+deb8u1 has been pushed to the r4.0 testing repository for the Debian jessie template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing jessie-testing, then use the standard update command:

sudo apt-get update && sudo apt-get dist-upgrade

Changes included in this update

@qubesos-bot

Automated announcement from builder-github

The package qubes-core-agent_4.0.0-1+deb9u1 has been pushed to the r4.0 testing repository for the Debian stretch template.
To test this update, first enable the testing repository in /etc/apt/sources.list.d/qubes-*.list by uncommenting the line containing stretch-testing, then use the standard update command:

sudo apt-get update && sudo apt-get dist-upgrade

Changes included in this update

@QubesOS QubesOS deleted a comment from aberja Jun 10, 2017
@marmarek marmarek added the release notes label Jul 31, 2017