Memory leak when a stream in the pipeline gets destroyed unexpectedly #368
Hmmm, this is indeed strange. Are you using the 'close' event to trigger a specific behaviour and/or perform additional actions on the stream or connection?
Hi @tomas, No, we are not doing anything else after that. The 'close' event is not used at all in our production; I've only added it for debugging purposes to figure out what was going on with the TCP memory increase. The flow of the actual code is very similar to the sample code I gave you above.
What options are you using regarding the connection? Also, does this also happen on earlier versions of Node or Needle?
I am running the sample code above as-is, without any changes. I was also able to reproduce it on earlier versions of Node, and the problem is also reproducible with earlier versions of Needle.
I see. I'll take a look at this later today. But you certainly shouldn't have to destroy the stream manually. I'm thinking it might have to do with this (if you want to fiddle with the code right away).
Thanks! I will have a look at the link you provided as well.
That is meant to handle unexpected errors in the underlying socket. Take a look at this test for an example. Does removing that line fix the issue, by the way?
Hey @tomas, sorry for the small delay. So, I ran the test again.
The last two lines of the log indicate that the socket END event is fired normally and the code block from here does NOT execute, which, from what I understand, is the correct behavior unless something really unexpected has happened. However, I did force that block to execute (which means that
Nope, I removed the whole block, and it does not seem to make any difference.
I also added a
Hm, I see that you are using a
Ahh yes, indeed. But why would that trigger the memory increase you're seeing then?
Oh man... goodbye sleep... I will investigate some more...
I think I found the memory leak, but I want to make 100% sure, so I will update you in the morning. I need to re-run everything with a clear head!
Sure thing!
Hey @tomas, sorry for the delay but I was trying to get to the bottom of this. So, the memory leak is real. If the readable stream for some reason gets destroyed before properly ending, then an ERR_STREAM_PREMATURE_CLOSE ("Premature close") error occurs.
However, the above will not trigger an 'error' event (I haven't figured out why), which means that all the streams that were piped will be left hanging. One solution is to use pipeline, which takes care of the error handling and cleanup of the piped streams. I did try to see if I can make it work without using pipeline:

```js
var request = protocol.request(request_opts, function(resp) {
  resp.on('aborted', function() {
    // Did not fire on ERR_STREAM_PREMATURE_CLOSE
  });
  resp.on('error', function() {
    // Did not fire on ERR_STREAM_PREMATURE_CLOSE
  });
  resp.on('close', function() {
    // Did fire, so perhaps some error handling can be made here by checking this.destroyed ?
  });
  // .....rest of the code
});
```

Based on the documentation here, the above events should have fired.
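For completeness, the kind of pipeline-based handling I mean looks roughly like this (just a simplified sketch, not the exact change in my pull request; responseStream and destinationStream are placeholders):

```js
const { pipeline } = require('stream');

// Sketch: let pipeline() wire up the error handling and cleanup.
// If any stream in the chain is destroyed before finishing, the callback
// receives an error (typically ERR_STREAM_PREMATURE_CLOSE) and pipeline
// destroys the remaining streams, so nothing is left hanging.
pipeline(responseStream, destinationStream, function(err) {
  if (err) {
    console.error('pipeline failed:', err.code);
  } else {
    console.log('pipeline finished cleanly');
  }
});
```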
I have also searched to see if anyone else has seen a similar issue and found this and this. Unfortunately, as I said above, the event handlers did not fire, so I might be missing something from within the needle internals.

Feel free to do your own tests as well, and check my pull request to see the exact changes I am referring to. Let me know how you want to handle this so we can try and fix it!
Wow, you really DID get to the bottom of it! Kudos! I'm a bit busy now but will take a look later.
Cheers! I will try to figure out why the hell the 'error' event is not firing.
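For reference, a minimal standalone way to check that behaviour with plain Node streams (no needle involved) would be something like this sketch:

```js
const { PassThrough, finished } = require('stream');

const stream = new PassThrough();

stream.on('error', function (err) {
  // Does this fire when the stream is destroyed without an error argument?
  console.log('error event:', err.code);
});

stream.on('close', function () {
  console.log('close event');
});

finished(stream, function (err) {
  // finished()/pipeline() report a premature close here.
  console.log('finished callback:', err && err.code);
});

// Destroy the stream before it has ended, with no error argument.
stream.destroy();
```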
Issue #280 from 2019 is basically the same problem but reproduced in a different way. If a stream in the pipeline gets destroyed before ending properly, a memory leak happens since the file descriptor stays open. You can also reproduce the leak with the following code:

```js
const needle = require('needle');
const { pipeline } = require('stream');
const { createWriteStream } = require('fs');

var url = 'SOME BIG ASS REMOTE FILE URL';

const readStream = needle.get(url);
const writeStream = createWriteStream(`./out.dat`);

pipeline(readStream, writeStream, (error) => {
  if (error) {
    console.log(error);
    // By uncommenting the next line, the underlying request stream will be destroyed properly.
    // Needs to be handled internally.
    // readStream.request.destroy();
  }
});

setTimeout(function () {
  console.log("destroying write stream...");
  writeStream.destroy();
}, 2000);

setInterval(function () {
  // Keep alive
}, 3000);
```

From what I understand, if you do not wish to use pipeline internally, the underlying request stream still has to be destroyed manually whenever a piped destination closes prematurely, as the commented-out line above does.

I have also updated the title of this issue because it was misleading.
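Until this is handled inside needle, a caller-side workaround along these lines should also work (just a sketch based on the snippet above; readStream.request is the underlying request object that needle exposes):

```js
const { finished } = require('stream');

// Sketch of a manual cleanup: when the destination errors out or is
// destroyed prematurely, destroy the underlying HTTP request as well
// so the socket is released.
finished(writeStream, function (err) {
  if (err && readStream.request) {
    readStream.request.destroy();
  }
});
```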
Since this is a serious issue and has been affecting us severely, I have pushed my branch to production after further testing yesterday, and I can confirm that TCP memory usage seems to be very low now, as expected. Looking forward to the officially fixed version ;)
Hey @alolis, I'm running the "How to reproduce" code you included in the first post, but with the merged PR the
Hey @tomas! That's normal since I did not change the
I see. I have another branch where I make sure all pipe destinations also emit the
If you are asking how to do this with
Is this what you want?
Nope, not really. Thanks anyway :)
Hello,
Background
We are using needle in one of our projects and we have noticed a slow but steady increase in the TCP memory buffers. The Linux command used to view the TCP memory page allocation is cat /proc/net/sockstat.
Every time we killed the service, the memory consumption dropped significantly, so we started digging.
Before kill:
After kill:
After plenty of tests we started noticing that the needle streams we use do not fire the close event. Now, I am not sure where exactly the problem is, but if the close event does not fire, it means that the underlying resources have not been destroyed. If they are not getting destroyed, then that is consistent with the TCP memory page increase we have been seeing.

How to reproduce
The following sample code shows the problem:
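A minimal sketch of the kind of reproduction described here (the exact snippet may differ; the URL and output file are placeholders): we pipe a needle stream into a file, destroy the write stream before the download finishes, and the close event on the needle stream never fires.

```js
const needle = require('needle');
const { createWriteStream } = require('fs');

// Placeholder: any sufficiently large remote file will do.
const url = 'SOME BIG ASS REMOTE FILE URL';

const readStream = needle.get(url);
const writeStream = createWriteStream('./out.dat');

readStream.on('close', function () {
  // In the leaking case this never fires, which means the underlying
  // resources are never released.
  console.log('needle stream closed');
});

readStream.pipe(writeStream);

// Destroy the destination before the download completes.
setTimeout(function () {
  console.log('destroying write stream...');
  writeStream.destroy();
}, 2000);

// Keep the process alive so the leak can be observed via /proc/net/sockstat.
setInterval(function () {}, 30000);
```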
Environment
node.js version: v12.16.1
needle version: v2.8.0
Please let me know if I can help in any other way to get to the bottom of this. Thank you for your time.