Any hints at tracking memory usage/leaks in node 12 when using node-addon-api? #498
I don't know if you have taken this step yet, but I would first expose node's garbage collection, then call it at some interval to verify if the memory usage continues to grow.
@kevinGodell - I would never have thought of that - have you known cases where garbage collection fails to run on its own? In any case, I implemented your suggestion, calling global.gc on a 60 second interval. The data looks practically the same (this is a 3 hour window vs. the previous graph being 6 hours with only a portion using node 12).
I have a situation where I am creating a napi buffer from a uint8 array and the memory usage grows very high in nodejs before garbage collection kicks in and cleans up. If I run gc() often, then the memory usage stays low. I have not solved it yet.
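For illustration, a minimal sketch of that kind of pattern, assuming the bytes come from the native side of the addon (the function name, module name, and buffer size are hypothetical placeholders, not the actual code from this comment). Each call to Napi::Buffer::Copy allocates a new Buffer on the JS heap, and those copies are only reclaimed when V8 decides to run a gc, which matches memory staying high until gc() is forced:

```cpp
#include <napi.h>
#include <vector>

// Hypothetical binding that copies a native uint8 array into a Node.js Buffer.
// Napi::Buffer<uint8_t>::Copy allocates a new Buffer and copies the bytes in,
// so every call adds frame.size() bytes that only the garbage collector
// will eventually reclaim.
Napi::Value MakeBuffer(const Napi::CallbackInfo& info) {
  Napi::Env env = info.Env();

  // Stand-in for data produced by the native side (e.g. a video frame).
  static std::vector<uint8_t> frame(1024 * 1024, 0);

  return Napi::Buffer<uint8_t>::Copy(env, frame.data(), frame.size());
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set("makeBuffer", Napi::Function::New(env, MakeBuffer));
  return exports;
}

NODE_API_MODULE(buffer_example, Init)
```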
If you can narrow it down to a small enough test case (for example, maybe running the unit tests for your module) and then run it under valgrind, that can help identify native memory leaks. It might also be useful to use the nightly builds for 12.x: https://nodejs.org/download/nightly/ to see if the behavior changed along the way, as that would help narrow down which changes in Node.js could be related.
@kevinGodell for your issue you might try using the node-addon-api Memory management API to let the gc know about the native memory you are using. This is intended to give the gc more info about native memory usage so it knows when it might need to gc. You would need to make sure you adjust down as well when the buffers are freed.
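Presumably the "Memory management" reference is to Napi::MemoryManagement::AdjustExternalMemory in the node-addon-api docs (that mapping is my assumption; the original comment only linked the doc section). A rough sketch of the adjust-up / adjust-down pattern being suggested, with a hypothetical buffer size:

```cpp
#include <napi.h>
#include <cstdlib>

// Hand a natively allocated buffer to JS and tell V8 how much external memory
// is attached to it, so the gc can account for it when deciding when to run.
Napi::Value MakeTrackedBuffer(const Napi::CallbackInfo& info) {
  Napi::Env env = info.Env();
  const size_t length = 1024 * 1024;  // hypothetical payload size
  uint8_t* data = static_cast<uint8_t*>(std::malloc(length));

  // Adjust up: V8 now knows about `length` bytes of native memory.
  Napi::MemoryManagement::AdjustExternalMemory(env, static_cast<int64_t>(length));

  // Wrap without copying; the finalizer frees the memory and adjusts back down
  // once the Buffer is collected.
  return Napi::Buffer<uint8_t>::New(
      env, data, length,
      [length](Napi::Env finalize_env, uint8_t* finalize_data) {
        std::free(finalize_data);
        Napi::MemoryManagement::AdjustExternalMemory(
            finalize_env, -static_cast<int64_t>(length));
      });
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set("makeTrackedBuffer", Napi::Function::New(env, MakeTrackedBuffer));
  return exports;
}

NODE_API_MODULE(tracked_buffer_example, Init)
```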
tl;dr - the big jump in memory usage is between the v12 nightlies 20190201 and 20190202. my best guess is that it's due to one of the changes in the commit list below.

details - currently i have not been able to reproduce this running on my local machine; i see the problem consistently when testing in docker containers - both debian stretch and alpine 3.9.4. there are a lot of changes there; i doubt that anything stands out for you. my application doesn't really use anything advanced, and it's possible that i'm using things in a way they weren't intended. the application is appoptics-bindings and primarily uses a couple of classes and a number of namespaced functions. no threads, no callbacks, etc. if anything stands out to you @mhdawson let me know. in the meantime, i'll continue trying to get a reproducible case locally (odd that it only happens in a docker container) and take a look through the changes from 2019-02-01 and 2019-02-02. i've appended "- x" to the changes that look most relevant to the dramatic increase in memory.
Graphical representation of the data: The red/green lines at the bottom are the ones of interest. The blue/purple lines are running node 12.4 on alpine because I wasn't able to use node-nightly to load/run specific nightlies on alpine.
as you can see, the 0202 nightly spikes. it looks like there is a slow leak independent of the spike, but that doesn't appear to have anything to do with node.
This is the list of commits between those 2:
-sh-4.2$ git log --pretty=oneline 7c9fba30ef...0f8e8f7c6b
0f8e8f7c6b9e7a8bdae53c831f37b2034d1c9fa7 tls: introduce client 'session' event
e1aa9438ead2093a536e5981da7097c9196e7113 tools: add test-all-suites to Makefile
5e0a3261f0e61b50f03e65e0134d4a01475c0cf1 tools: make test.py Python 3 compatible
dee9a61bb97bce0c75afcaffb1ae7fec81020ac4 src: remove unused AsyncResource constructor in node.h
8c8144e51ce3ae2a521668b45d5afb21be8564e3 src: remove unused method in js_stream.h
fa5e09753055edfe0e9a0f700dfcb4f356ac3c9d process: move DEP0062 (node --debug) to end-of-life
154efc9bdef3ba8df5d3dfe3b32102baf3ac4311 process: exit on --debug and --debug-brk after option parsing
c369b3e9297d11f1b761409ce7b429c5de9dcb92 test: exclude additional test for coverage
406329de577d9604e8b1dce9c5a5161e8cb33e25 process: move process mutation into bootstrap/node.js
c2359bdad62b83d40976d91e91097684c23a7ae3 process: expose process.features.inspector
39d922123c02aecc2e289a08e3bdb9515a7b193a lib: save primordials during bootstrap and use it in builtins
1d996f58af3067617a67c0af8f86f014ed4d139c src: properly configure default heap limits
d0d84b009ce4f2fe274568ed39754c02167e27d3 report: separate release metadata
393c19660510f3cd1ac3f9445747ec4c32ec224f worker: refactor thread id management
bcf2886a84407028572fd1084242a1c789c056f8 http: return HTTP 431 on HPE_HEADER_OVERFLOW error
a861adde3bc22dec07e67f199be5f2c2aa226b44 test: allow coverage threshold to be enforced
0ff0af534ef150820ac218b6ef3614dc199de823 worker: throw for duplicates in transfer list
7c8ac5a01b4ba5d4c7060875ea024e6efbc12893 deps: cherry-pick c736883 from upstream V8
f4510c4148b50b47ac22fdb5331ce726b63b8525 test, tools: suppress addon function cast warnings
80873ec3c2e18c151ddf1c0d79461c48d367206f crypto: fix public key encoding name in comment
This one:
1d996f58af3067617a67c0af8f86f014ed4d139c src: properly configure default heap limits
would affect the default limits used by the gc and could be affected by where you run (you mention the issue is only seen under docker). You might try setting the max size with --max-old-space-size and see if that has any effect. Nothing else really stands out and nothing looks related to N-API.
thanks for wading through this @mhdawson - you got to this before i added the tl;dr. looks like my problem from here on out. the large bump associated with the default heap limits seems to have been a red herring. |
@bmacnaughton I assume this issue can be closed. If so can you just confirm if the problem was related to the change in the default heap limits or something else? |
The issue I was looking for help on was the large jump in memory usage which was caused by the change in default heap limits. It can be closed - thanks. If I find something else I'll enter a new issue that isn't conflated with new heap allocations. |
@bmacnaughton thanks for the confirmation. Closing |
@mhdawson - i know you have a million things going on but, for closure (and the wild hope that you'll say "ah, i know what it is"), here's my follow-on posting about the memory leak. |
@bmacnaughton I had taken a quick read of that one. Sorry I don't have an "aha". |
Our application uses the node-addon-api to interface to and abstract functions in an external .so. I am looking for some hints as to how to track down memory leaks, or what additional information I could provide, in order to isolate why, for node 12 only, our application uses significantly more memory and leaks memory as well.
Here's a graph to get a rough idea of what's going on. The total X-axis represents 6 hours.
The left hand section is running node 8, the middle section node 10, and the right hand section node 12. The orange line at the bottom is the application running without our agent (so no .so file, wrappers, or agent code). The application is a test-bed todo application using express (derived from the todomvc-mongodb package).

I know this isn't enough information to solve the problem - I'm just looking for some pointers and steps to take.
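For reference, a minimal sketch of the general shape of such a binding - a node-addon-api module exposing a function from an external shared library. The library function and names here are hypothetical placeholders, not the actual appoptics-bindings code:

```cpp
#include <napi.h>

// Hypothetical function exported by the external .so; in the real addon this
// declaration would come from the library's header and be resolved at link time.
extern "C" int external_lib_get_status(void);

// Thin wrapper exposing the native function to JavaScript.
Napi::Value GetStatus(const Napi::CallbackInfo& info) {
  Napi::Env env = info.Env();
  return Napi::Number::New(env, external_lib_get_status());
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set("getStatus", Napi::Function::New(env, GetStatus));
  return exports;
}

NODE_API_MODULE(bindings_example, Init)
```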