Segmentation fault in http2 module when cancelling download in Chrome #18973
It seems I'm hitting the same issue when trying an HTTP/2 push after Chrome has already closed the connection. gdb spits out a similar stack trace:
I managed to reduce the code to the following example:

```js
const fs = require("fs");
const http2 = require("http2");

const options = {
  key: fs.readFileSync(`${__dirname}/localhost.key`),
  cert: fs.readFileSync(`${__dirname}/localhost.crt`)
};

const server = http2.createSecureServer(options, async (req, res) => {
  if (req.url !== "/") {
    res.writeHead(404);
    res.end();
    return;
  }
  // no crash when pushing a single response
  push(req, res, "/dummy1");
  push(req, res, "/dummy2");
  res.writeHead(200);
  res.write("test");
  res.end();
});
server.listen(8443);
console.log("Listening");

function push(req, res, path) {
  res.createPushResponse({ ":path": path }, (err, pushResponse) => {
    if (err)
      return console.error({ err }, "createPushResponse failed");
    pushResponse.stream.on("error", (err) => {
      console.warn("Writing push failed", err);
      pushResponse.stream.end();
    });
    // no crash when immediately sending a response and closing the stream
    const readStream = fs.createReadStream(__filename);
    readStream.on("end", () => pushResponse.stream.end());
    pushResponse.writeHead(200, {});
    readStream.pipe(pushResponse.stream);
  });
}
```

Steps: …
I couldn’t reproduce the segfault with either example yet. :/ If it is at all possible, can you try to reproduce this with Node-only code, i.e. a standalone script that crashes and doesn’t require a browser or other external HTTP/2 clients?

It would be appreciated if you could provide a core dump, or try to dig into this with a debugger if you can. Building a debug build of Node takes a while, but it should be worth it. If your local reproduction is reliable enough, you could probably also figure out something meaningful by attaching valgrind to the Node process.

What I can tell from the offsets in the stack traces is that most likely one of the …
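For reference, a typical workflow for that kind of investigation looks something like this (a sketch; it assumes a checked-out Node source tree and `server.js` as the reproduction script):

```sh
# Build a debug version of Node (slow, but keeps full symbols)
./configure --debug
make -j4

# The debug binary lands in out/Debug/
gdb --args out/Debug/node server.js

# Or run the reproduction under valgrind to catch invalid memory accesses
valgrind --track-origins=yes out/Debug/node server.js
```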
@iczero Does that mean that that commit fixes the issue for you, or that you didn’t try anything that includes that patch? It would be weird if that commit fixed things; the original PR that introduced the bug didn’t make it into Node 9.

@addaleax I meant that the problem still existed with that commit. Sorry for the confusion.

@iczero @michael42 Could you check whether the patch from #18987 fixes this issue for you? That would be a huge help.
I think I can check it tomorrow. For now, I can't reproduce it without Chrome, but here's the output from chrome://net-internals:

I also ran it under valgrind: …
@addaleax It doesn't seem to fix the problem. :(
I'm going to try running node in gdb to see if I can isolate the problem.
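For anyone following along, a minimal gdb session for this (assuming the reduced `server.js` from above) would look roughly like:

```sh
gdb --args node server.js
(gdb) run
# ...trigger the crash from Chrome, then:
(gdb) bt full
```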
@michael42 Thanks for reporting back! Yes, it’s expected that there are some lost bytes, since Node doesn’t fully clean up before process exit. It would be more interesting to know about any kind of uninitialized data usage.

@iczero Thanks for the info. :/ Do you think you could try a debug build of Node to possibly provide more context?
A better backtrace, provided by the debug build:
@iczero Can you figure out why that is crashing? Is …
Seems like it:
@iczero Can you build with this patch applied?

```diff
--- a/src/node_http2.cc
+++ b/src/node_http2.cc
@@ -1755,2 +1755,3 @@ Http2Stream::Http2Stream(
 Http2Stream::~Http2Stream() {
+  DEBUG_HTTP2STREAM2(this, "tearing down stream");
   if (session_ != nullptr) {
```
The compile errored with that patch:
Weird, my compiler didn’t complain. :o Anyway, using …
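The suggestion itself is cut off above, but a plain `fprintf` in the destructor would give equivalent output without relying on the debug macros. A self-contained sketch of the idea (this is not the actual node_http2.cc code):

```cpp
#include <cstdio>

// Minimal stand-in for the real class, just to show the logging idea.
class Http2Stream {
 public:
  ~Http2Stream() {
    // Same effect as the DEBUG_HTTP2STREAM2 line, minus the macro dependency.
    std::fprintf(stderr, "Http2Stream %p: tearing down stream\n",
                 static_cast<void*>(this));
  }
};

int main() {
  Http2Stream stream;
  return 0;  // destructor runs at scope exit and logs to stderr
}
```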
The output now:
Also, I am on gcc …
Thanks, that seems helpful.
The hypothesis I’d get from the above is that the `Http2Stream` is destroyed while a write on it is still pending, so completing that write afterwards touches freed memory.

Can you provide a backtrace for the destructor call, i.e. the `tearing down stream` message? You could also try removing the …

I’m going to sleep now, but thanks a lot for sticking with me in debugging this. 💙 If somebody else wants to think about the ideal patch here, they’re welcome to do that, because it’s not quite obvious to me at this point what the best approach would be.
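To make that hypothesis concrete, here is a self-contained sketch of the lifetime pattern being described, with invented names (this is not the real node_http2.cc code): a queued write-completion callback runs after the stream has been torn down, and the eventual fix amounts to checking the destroyed state before touching it.

```cpp
#include <cstdio>
#include <functional>

// Invented stand-in; the real class lives in src/node_http2.cc.
struct Stream {
  bool destroyed = false;
  void AfterWrite() {
    if (destroyed) {  // the shape of the eventual fix: bail out early
      std::printf("write finished after teardown; ignoring\n");
      return;
    }
    std::printf("write completed on a live stream\n");
  }
};

int main() {
  Stream stream;
  // The write completion is queued to run later...
  std::function<void()> on_write_done = [&stream] { stream.AfterWrite(); };

  // ...but the peer disconnects and the stream is torn down first. Without
  // the `destroyed` check (or with an outright `delete`), running the
  // callback now would touch freed state.
  stream.destroyed = true;
  on_write_done();
  return 0;
}
```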
Thanks for your time. Good night!
Backtrace to the destructor:
This fixes a crash that occurred when a `Http2Stream` write is completed after it is already destroyed. Instead, don’t destroy the stream in that case and wait for GC to take over.

Fixes: nodejs#18973
Yep it works. Thanks!

Let’s keep this open until the PR gets merged.

Oh, sorry about that.
This fixes a crash that occurred when a `Http2Stream` write is completed after it is already destroyed. Instead, don’t destroy the stream in that case and wait for GC to take over.

Backport-PR-URL: #19185
PR-URL: #19002
Fixes: #18973
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anatoli Papirovski <apapirovski@mac.com>
This fixes a crash that occurred when a `Http2Stream` write is completed after it is already destroyed. Instead, don’t destroy the stream in that case and wait for GC to take over.

Backport-PR-URL: nodejs#19185
PR-URL: nodejs#19002
Fixes: nodejs#18973
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anatoli Papirovski <apapirovski@mac.com>
This fixes a crash that occurred when a `Http2Stream` write is completed after it is already destroyed. Instead, don’t destroy the stream in that case and wait for GC to take over.

Backport-PR-URL: #19185
Backport-PR-URL: #20456
PR-URL: #19002
Fixes: #18973
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anatoli Papirovski <apapirovski@mac.com>
This fixes a crash that occurred when a `Http2Stream` write is completed after it is already destroyed. Instead, don’t destroy the stream in that case and wait for GC to take over.

PR-URL: nodejs#19002
Fixes: nodejs#18973
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anatoli Papirovski <apapirovski@mac.com>
This might otherwise result in a hard crash when trying to write to a socket after a sudden disconnect. Note that the test here uses an aborted `h2load` run to create the failing requests; that’s far from ideal, but it provides a reasonably reliable reproduction at this point.

PR-URL: nodejs#18987
Fixes: nodejs#18973
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
This might otherwise result in a hard crash when trying to write to a socket after a sudden disconnect. Note that the test here uses an aborted `h2load` run to create the failing requests; that’s far from ideal, but it provides a reasonably reliable reproduction at this point.

Backport-PR-URL: #22924
PR-URL: #18987
Fixes: #18973
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
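The test mentioned in these commits uses an aborted `h2load` run; a rough approximation of that reproduction, assuming nghttp2's `h2load` is installed and the example server from this thread is listening on port 8443:

```sh
# Fire many concurrent requests, then kill h2load mid-flight so the server
# is left with writes pending on abruptly-closed streams.
h2load -n 10000 -c 100 https://localhost:8443/ &
sleep 0.2
kill %1
```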
@sam-github Not according to our CVE management repository. This was also during the time when HTTP/2 support was classified as experimental; I don’t know whether we assign CVEs in those cases?
I don't know either! :-) On the one hand... "experimental": use at your own risk; if it breaks, you get to keep the parts. On the other hand, we shipped it, people are using it, and their servers can be crashed, and maybe still can be crashed if #24037 is accurate. A hard crash might deserve a CVE.

@nodejs/security @nodejs/security-wg Anyone have some thoughts on what our policy should be here? Should the TSC decide that experimental features accept bug reports, but not vulnerability reports? Or decide that if people are using it, they deserve to know about security problems?
I think that experimental features are “use at your own risk”.
I agree, this should not need to be fixed in 8.12; we explicitly invoke node with …

Unfortunately, there seems to be a problem with Node 10 when connecting (from a Node.js 8.12 websocket client) to the updated Node 10 websocket server. The server is created using `uws`.
/cc @nodejs/crypto for the comment above?
If you provide standalone scripts that demonstrate this 8.x-to-10.x interop problem, I will try to debug it.
@sam-github, I've created a minimal sample repo (…).
Try running the latest commit (Node8<->Node10) and the one before it (Node8<->Node8).
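The repo contents are not preserved in this transcript, but based on the later discussion (a `uws` server, the `PORT` environment variable, self-signed certificates), the pair presumably looked roughly like this hypothetical reconstruction:

```js
// server.js (hypothetical reconstruction, not the actual sample code)
const fs = require("fs");
const https = require("https");
const WebSocketServer = require("uws").Server;

const server = https.createServer({
  key: fs.readFileSync("server.key"),
  cert: fs.readFileSync("server.crt"),
});
new WebSocketServer({ server }).on("connection", (ws) => ws.send("hello"));
server.listen(process.env.PORT || 44330);
```

```js
// client.js (hypothetical reconstruction)
const WebSocket = require("ws");
const ws = new WebSocket(`wss://localhost:${process.env.PORT || 44330}`, {
  rejectUnauthorized: false, // self-signed test certificates
});
ws.on("open", () => console.log("connected"));
ws.on("message", (msg) => console.log("received:", msg));
ws.on("error", (err) => console.error("client error:", err.message));
```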
I think our definitions of “standalone” differ! I'll want to wireshark the communication, and I don't want to play with docker. I'll see if I can run the client.js/server.js from the gist directly on my Linux laptop.
Got it. This was a ‘simple’ way for me to avoid depending on which OpenSSL is installed and to recreate our exact case (our node server runs in Docker too). Hopefully the scripts inside produce the same result on your system.
I can reproduce it locally using this wireshark filter: …

```sh
sh make-certs.sh
(nvm use 10.13 && PORT=44330 node server.js) &
(nvm use 8.12 && PORT=44330 node client.js)
```
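The filter itself is lost above; a typical Wireshark display filter for this setup would be something like the following (the port number matches the commands above; the rest is an assumption):

```
tcp.port == 44330 && tls
```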
I have no experience with Wireshark, but I definitely see something different going on in the Server Hello message. The Node 8 server sends a Server Hello of 83 bytes, while the Node 10 server sends one of 61 bytes. The difference is the …

After that Hello, the Node 8 and Node 10 servers seem to proceed similarly for a few packets, except that the Node 10 server terminates early. With my limited wireshark knowledge I don't know if the following is interesting: in the Node 10 capture, Wireshark detects an HTTP “switching protocols to WebSockets” message, while in Node 8, Wireshark just marks it as "Application Data".
What's happening is that uws, for node 10, is dumping the ws/HTTP "switching protocols" message directly onto the TCP connection, unencrypted! The TLS client then attempts to decode it as a TLS application data record, but since it's just unencrypted HTTP, it fails to decode, and the client errors out.

Now, why uws does the wrong thing here, I'm not sure; that will take more digging. I'll take a bit more time to look at this to see if I can tell why it's doing that; it could be that something changed in node that should not have. Since uws is dead upstream, doesn't support node 11, and has been spectacularly abandoned by its maintainer, I'm not sure how much time I'll spend on this.
Sorry, no clue. This disappears into uws's C++ internals; I'm honestly not sure why it ever works with TLS, since uws seems to replace all the node TLS code with its own. It's possible it just doesn't support OpenSSL 1.1.0 (8.x used 1.0.2). Btw, this doesn't depend on the client node version; it's just the server. Also, I don't think this issue has anything to do with the one whose thread we are using; I think we hijacked it.
True, this has nothing to do with this issue. For your interest: I found that UWS relies on …
@hermanbanken Fyi, there’s some context in #21711. It’s temporarily added back, but please take this up with the maintainers of UWS, because this is not a permanent solution (if it is unmaintained, you might have to look into finding new people to maintain it :/).
Eureka, I also just found #21711. We're aware of the (unfortunate) unmaintained state. We do not have a good replacement ATM. We're looking into the @discord/uws fork, and also hoping for a patch for the (unrelated) Node 8 HTTP2 bug so we do not need to upgrade ATM.
Steps to reproduce: cancel an in-progress download in Chrome from a server using the `http2` module. I have not been able to reproduce this problem locally, however.

This was the stack trace generated by node-segfault-handler (Node.js v9.5.0):

Relevant symbols demangled:

A small script that causes the segfault:
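The script itself is not preserved here. Given the issue title, a server of roughly the following shape (a hypothetical reconstruction built on the documented `http2` core API, not the reporter's actual script) is the kind of thing that crashed once the download was cancelled in Chrome:

```js
const fs = require("fs");
const http2 = require("http2");

const server = http2.createSecureServer({
  key: fs.readFileSync(`${__dirname}/localhost.key`),
  cert: fs.readFileSync(`${__dirname}/localhost.crt`),
});

server.on("stream", (stream) => {
  // Stream a large response so the download can be cancelled mid-transfer;
  // cancelling closes the stream while writes are still in flight.
  stream.respond({
    ":status": 200,
    "content-type": "application/octet-stream",
  });
  const chunk = Buffer.alloc(64 * 1024);
  const write = () => {
    while (!stream.destroyed && stream.write(chunk)) {}
    if (!stream.destroyed) stream.once("drain", write);
  };
  stream.on("error", (err) => console.warn("stream error:", err.message));
  write();
});

server.listen(8443, () => console.log("listening on https://localhost:8443"));
```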