zlib: memory leak with gunzipSync #1479
Comments
It also happens on Windows: node test.js > NUL

#
# Fatal error in ..\..\src\heap\mark-compact.cc, line 2137
# CHECK(success) failed
#
Can you get some deeper info on it? Maybe run it through valgrind?
The problem is in your code. GC isn't called, because you use only blocking methods. Run "node --expose_gc test.js":

'use strict';
var zlib = require('zlib');
var data = 'abcdefghijklmnopqrstuvwxyz';
var gzipped = zlib.gzipSync(data);
var step = 0;
while (true) {
  step++;
  var contents = zlib.gunzipSync(gzipped);
  process.stdout.write(contents.toString() + '\n');
  if (step % 1000) { // note: truthy on 999 of every 1000 iterations -- the "mistype" discussed below
    gc();
  }
}
it's true. Using
Yeah of course, I had something like this:

glob('**/*.gz', {
  cwd: config.data
}, function (err, files) {
  files.forEach(treatFile); // about 100k files to unzip and treat
});

Now I am using the async version without any issue. The code was just simpler using gunzipSync... Thanks for your answers :)
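For illustration only, here is a minimal sketch of what such an async version of the per-file work could look like, using the npm glob package (v7-style callback API, as in the snippet above) and zlib.gunzip with a callback instead of gunzipSync. The names config and treatFile and the printed computation are assumptions carried over from the snippet above, not the poster's actual code:

```js
'use strict';
var fs = require('fs');
var path = require('path');
var zlib = require('zlib');
var glob = require('glob'); // npm "glob" package, as in the snippet above

// Stand-in for the poster's config object (assumption).
var config = { data: process.argv[2] || '.' };

// Async variant: decompression runs in the libuv threadpool, and the zlib
// resources for each call are released once the callback has run.
function treatFile(file) {
  fs.readFile(path.join(config.data, file), function (err, gzipped) {
    if (err) throw err;
    zlib.gunzip(gzipped, function (err, contents) {
      if (err) throw err;
      // ...some computation on `contents` would go here...
      process.stdout.write(contents.toString() + '\n');
    });
  });
}

glob('**/*.gz', { cwd: config.data }, function (err, files) {
  if (err) throw err;
  files.forEach(treatFile); // about 100k files to unzip and treat
});
```

In practice you would also cap concurrency (process the files in batches) so that 100k simultaneous reads don't exhaust file descriptors, but the structure stays the same.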
https://github.com/iojs/io.js/blob/v2.0.1/lib/zlib.js#L458 - this is the line that causes it. Looks like even the

@dchusovitin You are incorrect. Both incremental and full GC runs are automatically called even in synchronous code. Moreover, your code sample doesn't even change anything (with and without fixing the mistype) except for slowing things down.

I think it's more that everything

But the reason to defer the event is to prevent infinite recursion. I'm not sure if this is fixable without breaking something else.
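As an aside, here is a minimal, self-contained sketch of the mechanism being discussed (a demonstration, not the actual zlib.js code): work deferred with process.nextTick cannot run while a synchronous loop is blocking the event loop, so whatever the deferred callbacks hold on to keeps accumulating.

```js
'use strict';
// Demonstration only: deferred "cleanup" piles up inside a blocking loop.
var queued = 0;

function deferCleanup(chunk) {
  // The callback keeps `chunk` reachable until it runs -- which is never,
  // as long as the while loop below keeps the event loop blocked.
  process.nextTick(function () {
    chunk.fill(0);
    queued--;
  });
}

while (true) {
  queued++;
  deferCleanup(Buffer.alloc(1024 * 1024)); // stands in for per-call native state
  if (queued % 1000 === 0) {
    console.log('queued callbacks:', queued,
                'rss MB:', Math.round(process.memoryUsage().rss / 1048576));
  }
}
```

Running this, rss grows without bound until the process is killed, the same symptom as the gunzipSync loop in the original report.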
Same bug with

#5707 should fix this.
I was running a script that sequentially reads a lot of gzipped files and prints the result of a computation for each file on stdout.
After about 16,000 files the process just stopped and
Killed
was printed on the terminal. I guess that the memory used by the process grows until the kernel decides to kill it.
I could reduce the code to this testcase:
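The testcase is presumably along the lines of the loop quoted in the comments above, without the gc() calls; this is a reconstruction for illustration, not the original file:

```js
'use strict';
// Reconstruction (assumption): decompress the same small buffer in a tight
// synchronous loop and print the result, as described in the report.
var zlib = require('zlib');

var data = 'abcdefghijklmnopqrstuvwxyz';
var gzipped = zlib.gzipSync(data);

while (true) {
  var contents = zlib.gunzipSync(gzipped);
  process.stdout.write(contents.toString() + '\n');
}
```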
This is what I see with the top command just before the process disappears:

> top
  PID USER     PR NI    VIRT    RES   SHR S  %CPU %MEM   TIME+ COMMAND
15363 mzasso   20  0 10,240g 3,716g 11776 R 111,0 24,2 0:53.59 node
NB: similar code that just writes data in the while loop keeps a stable memory consumption.
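For completeness, a sketch of that control case (assuming the same data string as in the testcase): no zlib call, and memory stays flat, suggesting the growth comes from the per-call gunzipSync path rather than from the loop or stdout itself.

```js
'use strict';
// Control case (sketch): same tight loop, but no zlib call.
// Memory stays stable, unlike the gunzipSync loop above.
var data = 'abcdefghijklmnopqrstuvwxyz';

while (true) {
  process.stdout.write(data + '\n');
}
```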