Heap autosize #82
Conversation
Hey CAFxX! Thanks for submitting this pull request! I'm here to inform the recipients of the pull request that you and the commit authors have already signed the CLA.
We have created an issue in Pivotal Tracker to manage this: https://www.pivotaltracker.com/story/show/143873083 The labels on this github issue will be updated when the story is started.
Also expose $NPM_CLI_OPTIONS for customizing npm
As a side note: the added integration tests are passing but, at least on my machine, some unrelated tests are failing.
We had two different teams run into problems because of the lack of autotuning in the last month alone (and apparently we aren't the only ones, see e.g. nodejs/node#2738). Also, the current behavior is inconsistent with what the java buildpack does out-of-the-box.
Pinging @adrai, @afeld and @flavorjones because they were discussing something similar in #34.
Nice.... exactly what I wanted since 2015
@flavorjones you seemed reluctant to make this change in the past. I'm not too worried about it, especially given Heroku's suggestions. Anyone opposed to a quick timeboxed investigation, and then setting
Looks good to me!
Prioritized an investigative story here: https://www.pivotaltracker.com/story/show/144219147
@CAFxX Having played with these settings for a while, I am convinced by `--gc_interval=100 --optimize_for_size --always_compact` for low-memory settings (although I feel that low memory is app specific, and thus a good candidate for a pro tip on push rather than a default). In a similar vein, a pro tip for high-memory settings telling the user to set `--max_old_space_size` seems good.

It is less clear to me why setting `--max_old_space_size` for low-memory settings is helpful; in my experimentation the only effect was to change how the app crashed, at which point adding a node setting to crash seems less useful than just letting the CF values force a crash. (Admittedly the node crash output had more data in it, but it was not immediately more useful than a simple "out of memory" error.)

Is your experience different than the above?
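For anyone who wants to try these settings today, a minimal sketch of a `Procfile` applying the low-memory flags discussed above; the file name `server.js` and the 96 MB cap (for a 128 MB container) are illustrative values, not part of this PR:

```
web: node --max_old_space_size=96 --gc_interval=100 --optimize_for_size --always_compact server.js
```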
Two reasons:

- I'm not excluding that we can additionally make use of [1]
- as mentioned previously, two different teams got bitten by the lack of autotuning; one of them had a very small "live memory" set but their app still managed to OOM-crash due to node's lazy GC
fyi
@dgodd Could we add an environment variable that enables these optimizations (aggressively, but with conservative space sizes) based on the container limits? Then if we find most people set the environment variable and don't have issues, we can enable it by default.
I'm happy with that suggestion
Set it to 75% of the container's memory limit; this should improve GC behavior in low-memory environments. See:
- cloudfoundry/nodejs-buildpack#82
- nodejs/node#2738
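For context, the 75% calculation itself is simple; a minimal sketch in shell, assuming the container limit is exposed as `$MEMORY_LIMIT` in a form like `512m` (as Cloud Foundry containers typically do):

```sh
# Sketch only: derive a heap cap as 75% of the container limit.
# Assumes MEMORY_LIMIT looks like "512m"; adjust the parsing if your platform differs.
limit_mb="${MEMORY_LIMIT%[mM]}"          # "512m" -> "512"
heap_mb=$(( limit_mb * 75 / 100 ))       # 75% of the container quota
echo "--max_old_space_size=${heap_mb}"   # e.g. --max_old_space_size=384
```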
Prioritized adding the env var as described.
Note that we went with:

Don't hesitate to open an issue if you have feedback about these settings. We didn't introduce
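Since the comment above doesn't quote the variable name that shipped, the one below is hypothetical; check the buildpack documentation for the actual name. Enabling the opt-in would look roughly like this:

```sh
# Hypothetical variable name, for illustration only.
cf set-env my-app OPTIMIZE_MEMORY true
cf restage my-app   # restage so the buildpack picks up the new setting
```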
@sclevine @nikitarathi23 @dgodd something's funny in that commit: fd5c9ee#diff-00ca2312048696e35dce358a85e315b9R86 (it's just a test description but...) Also, the commit doesn't seem to include any documentation change. How are users supposed to discover this?
Hi @CAFxX, I rejected the story with a request for both of those changes yesterday, see: https://www.pivotaltracker.com/n/projects/1042066/stories/143873083 The functionality is present and appears to work, so I resolved this issue without waiting for them.
Proposed change
As previously discussed in #34 (and tangentially in #29) it would be nice to have the buildpack do something "smart" in the default case (à la the java buildpack) by setting the `--max-old-space-size` v8 option to something conservative (75% of the container memory quota).

The autotuned default can be overridden in two ways (a sketch of both follows below):

- if `--max-old-space-size` is already set in the `Procfile`, the heap autotuning is disabled (this means that apps currently setting `--max-old-space-size` manually are not affected by this change)
- without modifying the `Procfile`, it is possible to provide additional arguments to npm by setting the `NPM_CLI_OPTIONS` environment variable; the value of this variable is appended to the npm invocation, thereby allowing the autotuned value to be overridden

The rationale for choosing 75% is twofold:
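A minimal sketch of the two override mechanisms above; the app name, flag value, and `server.js` are illustrative:

```sh
# 1. Opt out of autotuning by setting the flag yourself in the Procfile:
#      web: node --max-old-space-size=512 server.js
#
# 2. Or leave the Procfile untouched and append extra arguments to the npm
#    invocation via NPM_CLI_OPTIONS, which then overrides the autotuned value:
cf set-env my-app NPM_CLI_OPTIONS "--max-old-space-size=512"
cf restage my-app
```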
Use cases
By default node runs with a max "old space" size of 1.4GB. This means that in practice warm apps use 1.6~1.7GB (as reported by `cf app`).

For containers smaller than 1.7GB, this means that applications are likely to crash due to OOM. For containers larger than 1.7GB, it means that the extra memory won't be used.
- I have viewed, signed, and submitted the Contributor License Agreement
- I have made this pull request to the `develop` branch
- I have added an integration test (two actually)