Experimental Features - Security expectations #1299
The Security WG is (or at least was at one time) focused on ecosystem security. It was never chartered to do a whole lot with Node.js core security. But perhaps that has changed? Regardless, I think the place to document the decision here (whether that would be instead of or in addition to the security-wg repo) would be https://github.com/nodejs/node/blob/main/SECURITY.md and/or https://hackerone.com/nodejs?type=team.
@nodejs/tsc
In general, experimental features fall outside our support guarantees. They fall outside of semver rules. They are expected to have bugs and to not be used in production without taking on the associated risks yourself. Yes, the bugs should be fixed, but we should not feel obligated to treat them the same as non-experimental fully supported features.
I agree with this (for supported versions of Node).
So if I may clarify, you agree with "any feature should be secure when it's made its way into a release", not when it lands on `main`?
We probably should not land code with known security vulnerabilities, but we definitely should not release code with known security vulnerabilities that can be exploited. I thought this thread was more about how we respond to reported security vulnerabilities in supported releases. In that scenario, I think we should handle all security vulnerabilities through the same process, despite feature stability. If an experimental feature has known security issues then it should probably bake a little while longer before being included in a release.
To be clear, I'm not saying that we should ignore or leave the issues unfixed. That would be silly. I'm saying that we shouldn't feel obligated to treat them with the same process and care as vulnerabilities in supported features.
In other words: We should certainly fix the vulnerabilities, but we don't need to do it in the private repository and we don't need to issue a CVE. Do I understand your position correctly, @jasnell?
Yes, that is correct
I am interested in what the @nodejs/security-triage folks think about this. @RafaelGSS and @jasnell have indicated their thoughts here. What about @danielleadams @mhdawson @vdeturckheim and @mcollina?
I'm thinking there should be some middleground. For example, issue a CVE if the feature is enabled by default (without a specific flag to opt into it). The reason is that when the feature is not gated by a flag, your application may be using it without your knowledge through a third party dependency.
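As a hedged sketch of the concern above (that a dependency can use a default-enabled experimental feature without your knowledge): Node.js emits an `ExperimentalWarning` when many experimental APIs are used, delivered through the `process` `'warning'` event, so an application can at least surface such use. The logging policy below is illustrative, not an official recommendation.

```javascript
// Sketch (illustrative policy, not official guidance): surface experimental
// API usage pulled in by third-party dependencies. Node.js emits an
// ExperimentalWarning when many experimental APIs are used, and delivers
// it through the process 'warning' event.
process.on('warning', (warning) => {
  if (warning.name === 'ExperimentalWarning') {
    console.error(`Experimental API in use: ${warning.message}`);
  }
});
```

Note this only catches APIs that actually emit the warning; not every experimental code path does.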
This relates to some of what we’re discussing in nodejs/node#45084. I think we need to clarify “experimental features” a bit. There are different levels (which maybe we should codify somewhere):
Security exploits don’t care about experimental status. What matters, I think, is whether the vulnerability can be exploited by third-party code (either the app’s dependencies or from outside, e.g. via a network call) without the user having opted into the experimental feature. So this would mean that we can offer lesser security guarantees for experimental features that are gated behind build or command-line flags (or that are merged in but unreleased). Once an experimental feature is released and available by default, though, like (Edit: jinx @targos 😄)
We should never enable by default features with known security vulnerabilities according to our threat model. With the exception of security features (like policies, etc.), this could also be interpreted as never landing PRs with known security vulnerabilities. This is kind of implicit in the PR review process, but we might want to spell it out. Regarding things for which we issue CVEs: I think anything that's shipped in a release and does not require a flag.
So, does that apply here? If the feature's goal is to increase the security of userland code, and because it's behind the flag
I would expect vulnerabilities found in features behind a runtime flag
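To make the "behind a runtime flag" distinction concrete: the Node.js-specific flags a process was started with are visible on `process.execArgv`, so enabling a flag-gated experimental feature is an explicit, inspectable choice by whoever launches the process. A minimal sketch (the `--experimental-` prefix is a convention assumed here, not a guarantee that every gated feature uses it):

```javascript
// Sketch: list the --experimental-* flags this process was launched with.
// process.execArgv holds Node.js-specific CLI options (not script arguments).
// Assumes flag-gated experimental features use the --experimental- prefix,
// which is a common convention rather than a rule.
const experimentalFlags = process.execArgv.filter((flag) =>
  flag.startsWith('--experimental-')
);

if (experimentalFlags.length > 0) {
  console.log(`Opted into: ${experimentalFlags.join(', ')}`);
} else {
  console.log('No --experimental-* flags enabled for this process.');
}
```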
Here is my concern with allowing CVEs for --experimental features: let's imagine we have the new permission system @RafaelGSS is building and we ship it. Almost every bug in that one could potentially be exploited. Therefore we will receive an endless number of reports, stalling the development of that feature until the next security release.
What about only accepting Critical CVEs for features behind the flag? We can, potentially, drop the feature as not ready if we get a bunch of reports.
Even dropping the feature would need to be part of a security release.
This is really my key concern. In experimental features involving networking protocols, file systems, or changes to Node.js' fundamental security model, every bug is a potential security vulnerability. Saying that experimental stuff can't ship with security vulnerabilities means experimental stuff can't ship if there are any bugs. We also open the risk that all development for the experimental feature has to be done in private repos with little visibility from most contributors and the ecosystem. That's dangerous on its own. There has to be a good middle ground.
@jasnell just expressed one of the main concerns that I was thinking about: that we might not be able to land/experiment with features that are not complete/done if we treat them the same as stable features in terms of CVEs. This is how we currently define Experimental in the stability index: "Stability: 1 - Experimental. The feature is not subject to semantic versioning rules. Non-backward compatible changes or removal may occur in any future release. Use of the feature is not recommended in production environments." That might be where we add any statements about how CVEs apply to Experimental features.
@Trott I'm going to need a bit more time to form a more complete opinion, will comment once I've gathered some more info.
I think from a high level the security report is valid and actionable, so on that side payout and such are OK. On the external side, it doesn't seem like we need a CVE or secrecy or anything (i.e. we shouldn't wrap these expectations into how companies are using Node.js); we just need to fix it.
The problem is not issuing CVEs for vulnerabilities in experimental features (we probably should do it). The problem is our 42+ step security release process. Let’s figure out something more lightweight. My understanding is that V8 can ship security fixes and later disclose the CVE… why couldn’t we?
I think we already do that. What leads to our current release process is
Do you know how V8 handles those, or treats them differently to make things easier?
And to clarify a bit more, we only disclose CVEs reported to us after we have shipped security releases.
Some upstreams don't do immediate patches for some CVEs, particularly those with lower severities. If we defined a patch window for public CVEs based on severity, we could probably reduce the number of security releases and fix lower severity CVEs in regular releases. We probably would need to have some automation to make sure we did not miss the deadlines, but it could work. I'm not sure doing that solves the issue of some experimental features not being ready to have bugs, etc., reported/handled through the more heavyweight H1/CVE process versus regular PRs/bugs.
I have no problem with issuing CVEs for experimental features behind a flag. The problem is in our security process and our multiple release lines, which leads us to four security releases a year at most because of our 90-day disclosure window. My 2 cents is that it's too long a period for experimental features (which need feedback by definition).
I empathise with the concerns here (especially the ones regarding release efforts). I also agree with the 'enabled-by-default' distinctions and the potential of factoring CVE severity into our processes.
One thought specifically on experimental security features like this: what would be the incentive to trial an experimental security feature that came with a warning that any security issues found within it may be fixed in the open or disclosed before a release is available? Perhaps experimental security features like these should be behind a build-time flag until we're confident we can treat any security issues within them as usual and with our typical level of scrutiny. (I know that's likely to be an unpopular suggestion - sorry!).
The key challenge there is that things behind compile-time flags don't really get tested to the extent we need them to be. We have a hard enough time ensuring things behind runtime flags are tested enough. That's not to say I'm opposed to the idea, just that it places some pretty severe limits.
This is, of course, correct, but we can't expect people to use/test experimental security features and not treat security defects in those features as seriously and sensitively as we treat other security issues. Given that, it's hard to conclude anything other than: we either can't have experimental security features, or else we need to treat security defects in those features very seriously (don't disclose them publicly until they've been fixed, etc.). If that's all correct, then the logical conclusion is: we need to be unusually cautious about what we choose to make an experimental security feature. For existing experimental security features, that ship has sailed and we have no responsible options other than to treat the security issues seriously or to remove the feature.
Additionally, I'd say that whatever we decide here, it's provisional. Like, let's try it out for a few months, and if it makes triage impossible, then by all means, let's revise. Or if it causes adoption to become non-existent, again let's revise. My proposal would be:
I may be kicking and screaming about it but yeah I think that's the only logical conclusion.
In terms of the experimental security features, I think we will also need to carefully document their limitations. At least in some cases there are people who believe some of the proposed security features can never be 'fully' secure / not able to be bypassed, but that they can still add some value. I see them in the same sense that seat belts don't promise to save you in all cases but are still a good preventative measure. Maybe well-documented limitations, along with the assertion that exploiting those limitations is not a vulnerability (or at least a CVE will not be issued), will help.
What is the best place to include that documentation? I can start working on it.
I would say either https://github.com/nodejs/node/blob/main/SECURITY.md or https://hackerone.com/nodejs?type=team (or perhaps both).
Do we want to document how we treat vulnerabilities in experimental code before we figure out if it's sustainable? (The suggestion was to revisit in May.) Once documented, I think it might be very hard to backtrack.
What makes you say that? I would think it would just be a matter of updating the documentation. Am I failing to consider something?
@Trott once we commit to treating vulnerabilities in experimental features like any other, then it's reasonable that people will depend on that (and maybe use experimental features they might not otherwise have used) and complain if we backtrack on that. Maybe the solution is to qualify in that documentation that it is provisional and that we are going to review in May and may change our mind.
Is this hyperbole? A new policy? Reading the security release process, I see a lot of work, but I'd guess the process takes just over a week for well-understood changes. 2021 shows 8 instances of "This is a security release" in 14.x LTS.
I don't think existing documentation makes a distinction. In a sense the documentation already says vulnerabilities are treated the same by saying how "vulnerabilities" (not "stable", "legacy", or "experimental" vulnerabilities) are treated.
The documentation for experimental features discourages using them. "Stable" has a high bar for change, even for SEMVER-MAJOR releases. Perhaps it would be helpful to have a stability index between "stable" and "experimental", something that is subject to semantic versioning but which is relatively free to have breaking changes (including removal) in SEMVER-MAJOR releases. Personally, I'd be more willing to try out experimental (or some new non-stable category) features if they:
This is drifting from the security implications, and I'd be happy to start a new topic if that seems useful.
@RafaelGSS At the TSC meeting today, there seemed to be consensus (although we only had 9 of 21 members) that we will try to treat security vulnerabilities in experimental features the same way we handle vulnerabilities in stable features. We'll revisit if we find we get overwhelmed with security reports or whatever. Is that basically what you need to close this? I suppose it has to be documented in SECURITY.md and/or HackerOne. @mhdawson had concerns (expressed above) about implying guarantees that aren't there and then taking them away potentially. I'm not sure if he wants to add anything. @tniessen had some comments about certain nuances around communicating the security of experimental features. They agreed to put them here after I leave this comment.
I think we need to get nodejs/node#45411 merged first.
Sorted out. Thank you everyone for your comments.
Sorry @Trott, this kept falling through. My comment boiled down to: the security expectations can be asymmetric; that is, we should accept vulnerability reports for experimental features just like for regular features, but we might still not make the same security guarantees for experimental features that we make for regular features.
Hello all!
We have had private discussions about handling valid vulnerabilities in experimental features. The TSC would be the best group to decide which point of view we take on those vulnerability reports.
IMO even being experimental, any feature should be secure when it lands. Therefore, any vulnerability found should be reported and accepted at HackerOne. In this way, we can encourage a collaborative design by awarding security researchers who invest their time while a feature is still experimental and, thus, we help guarantee a stable feature when the time comes.
I know @jasnell and others have different opinions, so it would be great to hear your thoughts.
Once we have a decision, we probably need to document it properly (likely on nodejs/security-wg#799).