
Privacy (PING) review #20

Closed
plehegar opened this issue Nov 17, 2015 · 29 comments
Labels: privacy-tracker (Group bringing to attention of Privacy, or tracked by the Privacy Group but not needing response.)
Milestone: V2: worker support

@plehegar
Member

Review is pending
https://lists.w3.org/Archives/Public/public-privacy/2015OctDec/0135.html

@plehegar added the privacy-tracker label on Nov 17, 2015
@plehegar added this to the V2: worker support milestone on Nov 17, 2015
@plehegar
Member Author

plehegar commented Dec 4, 2015

@plehegar
Member Author

plehegar commented Dec 7, 2015

Unless I hear differently, the plan is to close this issue with no action:
https://lists.w3.org/Archives/Public/public-privacy/2015OctDec/0151.html

@plehegar
Member Author

[[
Action: PING chairs to invite the Web Performance Working Group to join a PING call to discuss privacy considerations for the Time Resolution
]]
https://lists.w3.org/Archives/Public/public-privacy/2015OctDec/0152.html

@plehegar
Member Author

No further action needed.
https://www.w3.org/2016/01/21-privacy-minutes.html#item02

@plehegar
Member Author

Re-pinged them for safety:
https://lists.w3.org/Archives/Public/public-privacy/2018OctDec/0037.html
Feel free to close again at the end of January unless there is new information.

@plehegar plehegar reopened this Dec 12, 2018
@samuelweiler samuelweiler changed the title PLING review Privacy (PING) review Mar 7, 2019
@samuelweiler
Member

samuelweiler commented Mar 7, 2019

PING's review identified potential 1st/3rd party issues. Here is a summary of the PING discussion (with full minutes linked): https://lists.w3.org/Archives/Public/public-privacy/2019JanMar/0016.html
Tagging @jasonnovak, @snyderp, and @sandandsnow in case they want to open a specific issue.

@pes10k

pes10k commented Mar 10, 2019

@samuelweiler thanks for this!

Added an issue / suggestion that I hope could be a nice privacy / usefulness tradeoff in #64

@yoavweiss
Contributor

As discussed on the call today, the review seems done, and I'm therefore closing this issue. Please let me know if there are major objections.

@pes10k

pes10k commented Apr 23, 2019

Yes, objection. This is still open and unaddressed (from the previous comment above)

#64

@yoavweiss
Contributor

I believe #64 is addressed, with 3 major browser implementations deeming it something they will not implement (and a 4th browser representative calling it a "hugely breaking change").

Firefox representatives asked to keep that issue open for a few more days, which is why I haven't closed it yet.

Any other concerns resulting from the review that require keeping this issue open?

@pes10k

pes10k commented Apr 23, 2019

W3C is a member organization beyond 3 (or 4) browser implementations. Browser implementations saying they will not implement is not, in and of itself, an appropriate reason to close an issue.

The reason not to close this issue is that the issue that came out of the PING review (i.e. the above) has not been dealt with.

Is there evidence to support "This would be a hugely breaking change to the web today"? Have other efforts been explored to keep this functionality out of common access by all scripts, and only allow code that could use it for a useful purpose to use it? E.g. what efforts have been made to make this functionality as privacy protecting as possible?

@yoavweiss
Contributor

Let's discuss #64 on that issue. Are there any other issues that require keeping this one open?

@pes10k

pes10k commented Apr 23, 2019

I'm not in control of this issue and I can't stop you from closing it, but I still object to closing it. Closing it would seem to suggest that the issues in the "Privacy (PING) review" have been dealt with. That is not the case.

That being said, if the issue is closed anyway, I'll leave this comment here to note that the issues in the PING review have not been dealt with.

@yoavweiss
Contributor

I take that to mean there are no other issues. However, if it's of importance, I'm happy to keep this issue open until the other one is closed as well.

@yoavweiss
Contributor

Closing, as #64 was closed due to lack of implementer interest.

@igrigorik
Member

For the public record...

This conversation continued in other channels. We had a VC discussion on 06/20 with @plehegar @swickr @snyderp @yoavweiss @toddreifsteck (plus a few others whose github IDs I can't find atm — apologies), plus a followup email thread with the same list of participants.

I extracted my summary from the email exchange into a public doc (GitHub is not being formatting-friendly to my wall of text): https://docs.google.com/document/d/1brmO-lzN11BUoDOIaZZrg3uSjdvy7hEv915-ARSGJQk/edit

@samuelweiler
Member

@igrigorik Thank you for the summary. I would like to hear more re: the technical arguments against the gating proposal. I see where you wrote "did not get traction amongst implementers", which is similar to the arguments I've heard from Yoav against the proposed mitigation. I would like to be convinced of the technical merit of that position.

@yoavweiss
Contributor

Technically we should continue this in #64, but I can spell out why the gating proposal doesn't make much sense here:

  • It's not clear that removing this API from contexts in which it's currently shipped for the last ~6 years is web compatible.
  • The API is not promise-based so we cannot present a permission dialog to users when it is called.
  • There's no reason to think the permissions suggested (e.g. full-screen) are particularly tied to the use cases that the API tackles.
  • Assuming there's a real privacy risk in exposing this API, there's no reason to believe that pages that request those permissions are somehow safer to enable high-precision timers in, e.g. nothing prevents a full-screen page from being malicious.
  • There's no way to present a permission prompt to users that will clarify to them that on top of whatever permission they enable, they are also enabling high-precision timers with all the caveats that come along with them.

Therefore, browsers have made a decision to downscale the precision of the API until the root causes of the vulnerabilities it exposes are tackled.
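For illustration, a minimal sketch (in TypeScript) of the kind of coarsening an implementation might apply to performance.now(); the clampTimerResolution helper and the 0.1 ms value are hypothetical, not taken from the spec or any implementation:

```ts
// Hypothetical sketch of downscaling a high-resolution timestamp.
// The helper name and the 0.1 ms resolution are illustrative only;
// actual resolutions are an implementation choice.
function clampTimerResolution(timestampMs: number, resolutionMs: number): number {
  // Round down to the nearest multiple of the allowed resolution.
  return Math.floor(timestampMs / resolutionMs) * resolutionMs;
}

// e.g. a coarsened reading of performance.now():
const coarse = clampTimerResolution(performance.now(), 0.1);
```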

With all the above said, there may be other mitigations that may make sense to apply in the meantime:

  • Browsers could vary the precision of the API based on the presence of resources from known trackers.
  • Browsers could vary the precision of the API based on abuse signal detection.

Those kinds of mitigations are compliant with the current spec language. At the same time, it doesn't make sense to enforce them, as they may be heuristic in nature and may not make sense in some cases (e.g. on CPUs that don't suffer from Spectre due to lack of speculative execution and that don't have inclusive caches that exceed trust boundaries).
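As a purely hypothetical sketch of such heuristic behavior (the signal names and resolution values below are invented for illustration and do not come from the spec or any implementation):

```ts
// Hypothetical per-context signals a browser might consult; the names are
// invented for illustration.
interface ContextSignals {
  hasKnownTrackerResources: boolean;
  abuseSignalDetected: boolean;
}

// Pick a timer resolution (in ms) for a context. The values are illustrative;
// 0.005 ms (5 microseconds) matches the clamp discussed later in this thread.
function chooseTimerResolutionMs(signals: ContextSignals): number {
  if (signals.abuseSignalDetected) return 1;        // coarsen to ~1 ms
  if (signals.hasKnownTrackerResources) return 0.1; // coarsen to 100 microseconds
  return 0.005;                                     // otherwise 5 microseconds
}
```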

Does the above answer your questions regarding the technical merit of the position we took?

@igrigorik
Member

@samuelweiler I think @yoavweiss covered all the salient points from previous discussions. For the comment you quoted I was specifically referring to discussion in #64 (comment).

@samuelweiler
Member

Thank you for these answers. I appreciate them.

  • It's not clear that removing this API from contexts in which it's currently shipped for the last ~6 years is web compatible.

I'm willing to let compatibility suffer here.

  • The API is not promise-based so we cannot present a permission dialog to users when it is called.

Could we not change that? If not, why not?

  • There's no reason to think the permissions suggested (e.g. full-screen) are particularly tied to the use cases that the API tackles.

I'm willing to hear more about this from @snyderp

  • Assuming there's a real privacy risk in exposing this API, there's no reason to believe that pages that request those permissions are somehow safer to enable high-precision timers in, e.g. nothing prevents a full-screen page from being malicious.
  • There's no way to present a permission prompt to users that will clarify to them that on top of whatever permission they enable, they are also enabling high-precision timers with all the caveats that come along with them.

Therefore, browsers have made a decision to downscale the precision of the API until the root causes of the vulnerabilities it exposes are tackled.

Given that, I'm wondering if we should have this API at all. Is this just too dangerous?

With all the above said, there may be other mitigations that may make sense to apply in the meantime:

  • Browsers could vary the precision of the API based on the presence of resources from known trackers.
  • Browsers could vary the precision of the API based on abuse signal detection.

Those kinds of mitigations are compliant with the current spec language. At the same time, it doesn't make sense to enforce them, as they may be heuristic in nature and may not make sense in some cases (e.g. on CPUs that don't suffer from Spectre due to lack of speculative execution and that don't have inclusive caches that exceed trust boundaries).

Does the above answer your questions regarding the technical merit of the position we took?

For me it does. I'm not convinced, but it does answer my questions. :-) Thank you.

@yoavweiss
Contributor

yoavweiss commented Jul 16, 2019

  • It's not clear that removing this API from contexts in which it's currently shipped for the last ~6 years is web compatible.

I'm willing to let compatibility suffer here.

OK. You could try to convince an implementation to try that out and see what the implications of that decision are.

  • The API is not promise-based so we cannot present a permission dialog to users when it is called.

Could we not change that? If not, why not?

We cannot, due to web compatibility.

  • Assuming there's a real privacy risk in exposing this API, there's no reason to believe that pages that request those permissions are somehow safer to enable high-precision timers in, e.g. nothing prevents a full-screen page from being malicious.
  • There's no way to present a permission prompt to users that will clarify to them that on top of whatever permission they enable, they are also enabling high-precision timers with all the caveats that come along with them.

Therefore, browsers have made a decision to downscale the precision of the API until the root causes of the vulnerabilities it exposes are tackled.

Given that, I'm wondering if we should have this API at all. Is this just too dangerous?

Is there any specific vulnerability you have in mind that renders this API "too dangerous"?

The reasons for downscaling it in most browsers were Spectre related (where the root-cause is being tackled by other means) as well as the cache inspection attacks described at #79 (comment) (third attack described there). Regarding the latter, the attack's ability to be amplified was limited, which meant that changing the timer's granularity to a minimum of 5 microseconds was enough to mitigate it.

@rniwa

rniwa commented Jul 16, 2019

  • It's not clear that removing this API from contexts in which it's currently shipped for the last ~6 years is web compatible.

I'm willing to let compatibility suffer here.

That is not okay.

@pes10k

pes10k commented Jul 16, 2019

Trying to tie together conversations from above and #79

Re: permissions / promise-based

A web-compat solution would be to return the Date.now()-aliased / millisecond value initially, and return the high-res, microsecond version post-permission. Similarly, as per the original discussion in #64, the same could be done to return different values to 3p origins, to script from 3p, only after a user gesture, etc.
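As a rough sketch of that proposal (the gatedNow wrapper and the highResGranted flag are hypothetical; a real gate would be tied to whatever permission or gesture signal a UA chooses):

```ts
// Hypothetical sketch: millisecond-level values by default, with the full
// high-resolution value only after some permission/gesture has been granted.
let highResGranted = false; // would be flipped by a permission grant or user gesture

function gatedNow(): number {
  if (highResGranted) {
    return performance.now(); // high-resolution (sub-millisecond) value
  }
  // Otherwise expose only millisecond granularity on the same monotonic timeline
  // (the comment above also mentions aliasing to Date.now() as an alternative).
  return Math.floor(performance.now());
}
```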

Re: too dangerous / risk / precaution / etc. and re: API can't be changed b/c of breaking existing code

This is a core point in PING's concern (and something that came up several times on our call). Once this functionality is out, and all sites expect to access it in all contexts, it's too late to make changes. The best strategy is to be as conservative, and cautious, as possible in revealing new functionality, especially functionality like HRT that has a long, cross-platform history of privacy risk (as acknowledged by the standards text, group members, etc.).

Process cont.

I take your point @yoavweiss, that some (all?) of the attacks mentioned in the previous comment may be mitigated by some mix of vendor-specific implementation choices, upcoming vendor-implementation plans, still-being-worked-on-standards, etc. This may also be true for the other attacks shared previously too (#64 (comment), etc). But I think it's just untenable to judge a standard by vendor-specific implementation choices, upcoming vendor-implementation plans, still-being-worked-on-standards, etc; the proper basis for evaluating a standard is relative to existing, agreed to standards.

Grand compromise / way forward?

coming together hope one: I think a big part of this disagreement / clash is that PING sees a risky feature, implicated in a long history of attacks, with discussed but not yet standardized possible solutions, and with (it seems) limited use cases. It seems like the freight is not worth the cost. PING therefore wants to see either limited availability, or the standard requiring the less-risky millisecond-level resolution. Pointing out a long long long list of previous known timing risks that have been mitigated only urges more caution when committing the web platform to new timing related features, not less.

The upside / uses mentioned in the discussion (e.g. syncing WebVR, full screen games, etc) all seem well covered by existing permissions. It would be very helpful to know if there are other, common use cases PING isn't aware of, that require global availability to be useful. That might be valuable to decrease the temp in the room and get us communicating more collaboratively.

coming together hope two: Several times it's been mentioned that HRT isn't the (sole?) core of the standard, including by @igrigorik (#79 (comment)) and @yoavweiss (#64 (comment)), and that the other (main?) benefits are things like worker-accessible, monotonically increasing values, etc. What if the standard were instead reworked to jettison the HRT and just provide the above, relatively uncontroversial functionality? Thoughts?

@rniwa

rniwa commented Jul 16, 2019

The upside / uses mentioned in the discussion (e.g. syncing WebVR, full screen games, etc) all seem well covered by existing permissions.

It's not okay to expose high resolution time in full screen games if it poses any security or privacy risk. An ordinary user simply wouldn't understand that using a website / web app in full screen would have a privacy / security implication.

@hober
Member

hober commented Jul 16, 2019

@snyderp wrote:

Pointing out a long long long list of previous known timing risks that have been mitigated only urges more caution when committing the web platform to new timing related features, not less.

I suspect the key word here is new. Isn't the feature in question widely-deployed (in all major engines) and hasn't it been in the spec for years and years?

(Apologies if I got the details wrong; I'm just passing by.)

@yoavweiss
Contributor

Re: permissions / promise-based

A web-compat solution would be to return the Date.now()-aliased / millisecond value initially, and return the high-res, microsecond version post-permission. Similarly, as per the original discussion in #64, the same could be done to return different values to 3p origins, to script from 3p, only after a user gesture, etc.

Implementations are allowed to do that now with the current spec language. I think we disagree on the viability of any of these measures in actually thwarting malicious uses of the API and providing safer experiences for users in a universal way that would justify baking these measures into the standard.

Re: too dangerous / risk / precaution / etc. and re: API can't be changed b/c of breaking existing code

This is a core point in PING's concern (and something that came up several times on our call). Once this functionality is out, and all sites expect to access it in all contexts, it's too late to make changes.
The best strategy is to be as conservative, and cautious, as possible in revealing new functionality.

As pointed out by @hober, that ship has long sailed. This is far from being new functionality, and has been shipping in all major browsers for the last ~6 years (IIRC).

I take your point @yoavweiss, that some (all?) of the attacks mentioned in the previous comment may be mitigated by some mix of vendor-specific implementation choices, upcoming vendor-implementation plans, still-being-worked-on-standards, etc. This may also be true for the other attacks shared previously too (#64 (comment), etc). But I think it's just untenable to judge a standard by vendor-specific implementation choices, upcoming vendor-implementation plans, still-being-worked-on-standards, etc; the proper basis for evaluating a standard is relative to existing, agreed to standards.

My point is that HR time is the symptom in those attacks, not the root cause. Clamping it to millisecond levels by default as part of the standard is equivalent to forcing the entire population to take morphine on a daily basis forever because some people are currently ill and are being treated for it.

Pointing out a long long long list of previous known timing risks that have been mitigated

Correct me if I'm wrong, but I believe the "long long list" consists of:

  1. Spectre - A major event as far as vulnerabilities go, which impacted shipping implementations of HRT, SharedArrayBuffers, and other implicit timers. The solution is not to pretend implicit timers don't exist, but to enable modes in which cross-origin information never makes it into the renderer process, and in that scenario to enable higher-resolution timers, both explicit and implicit (see the sketch after this list).
  2. Cache attacks - As discussed in Security and privacy considerations for DOMHighResTimeStamp resolution #79 (comment), this attack resulted in mandatory, normative clamping of HRT to 5 microseconds.
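For illustration of point 1, a sketch of how such a mode surfaces in today's browsers, where a document opts into cross-origin isolation via the Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp response headers (mentioned here as context, not as text from this spec):

```ts
// Sketch: a document that has opted into cross-origin isolation can detect it
// and may be given higher-resolution timers and SharedArrayBuffer.
if (self.crossOriginIsolated) {
  // Cross-origin resources cannot enter this agent cluster without explicit
  // opt-in, so the UA may relax timer clamping here.
  console.log("cross-origin isolated: higher-resolution timing may be available");
}
```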

Other timing related vulnerabilities discussed on the different threads don't seem to require high resolution timers at all, as they can be amplified to work well with 1ms or even 16ms timers.

A common thread to the two vulnerabilities mentioned above is that they only impact a subset of our users' HW architectures. One could imagine implementations that e.g. provide higher-resolution timers on AMD processors (which don't have inclusive caches) or on some ARM processors (which don't have speculative execution). Baking mitigations into normative spec language would not enable that kind of flexibility.

The upside / uses mentioned in the discussion (e.g. syncing WebVR, full screen games, etc) all seem well covered by existing permissions.

Off the top of my head, use cases that are not covered by existing permissions:

  • Animation - web sites like to animate their experiences for users, even if they are not full screen games. In order to make sure those animations are not janky, they need to be able to measure the time their operations take, to know how much time they have left to complete the animation (see the sketch after this list). See this article from May 2012 for more details.
  • Performance measurements - web sites need to be able to measure their current performance in order to improve it, both for loading and for runtime performance.
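To make the animation use case concrete, a small sketch; the frame budget value and the do*Work helpers are hypothetical placeholders for a page's own logic:

```ts
// Measure how much of the frame budget the per-frame work consumed, so the page
// can decide whether it has time left for optional work before the next frame.
const FRAME_BUDGET_MS = 1000 / 60; // ~16.7 ms at 60 Hz (illustrative)

// Hypothetical placeholders for a page's own per-frame logic.
function doAnimationWork(): void { /* update DOM / canvas */ }
function doOptionalWork(remainingMs: number): void { /* prefetch, precompute, etc. */ }

function onFrame(): void {
  const start = performance.now();
  doAnimationWork();
  const elapsed = performance.now() - start;
  if (elapsed < FRAME_BUDGET_MS) {
    doOptionalWork(FRAME_BUDGET_MS - elapsed);
  }
  requestAnimationFrame(onFrame);
}

requestAnimationFrame(onFrame);
```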

@tildelowengrimm

Implementations are allowed to do that now with the current spec language.

Specs should define safe and privacy-preserving implementations in at least as much detail as implementations which don't account for safety & privacy. If we think that the described behavior is an appropriate mitigation, it should be specified, not merely permitted.

@yoavweiss
Contributor

Implementations are allowed to do that now with the current spec language.

Specs should define safe and privacy-preserving implementations in at least as much detail as implementations which don't account for safety & privacy. If we think that the described behavior is an appropriate mitigation, it should be specified, not merely permitted.

I understand the above is your opinion, but there's no reason to enshrine today's workarounds once we've resolved the root causes behind the vulnerabilities.

Today's mitigations may not be necessary tomorrow (due to other spec efforts, HW fixes, OS fixes, etc.), may not be necessary for some users (as the vulnerabilities are mostly HW based), and can vary significantly based on the different implementations' architectures (e.g. Site Isolation, Origin Isolation, etc.).

@tildelowengrimm

tildelowengrimm commented Jul 18, 2019

Please understand that it isn't just my opinion. It's the opinion of PING, the group responsible for ensuring positive privacy outcomes in web standards.

PING as a whole expects specification authors to specifically define the safe and privacy-preserving behavior which should be available to all sites, and to gate more powerful behavior which can enable tracking behind permissions which ensure consent for that capability.
