Kibana shuts down over time: out of memory? #17
@jchannon OK, just started a container. I'm monitoring the (idle) container using New Relic, tracking memory usage and watching for the next victim.
I think it took mine 24-36 hours.
Goodness me! Right, I'd better restart the container with much less memory available to begin with!
Still running…
Elasticsearch's and Logstash's memory usage is fairly constant (and even somewhat decreasing), which leaves one candidate process.
So at this point, Kibana's cyclic and increasing sawtooth memory usage trend suggests that it will ultimately make the container run out of memory and get killed. Will leave the container running for the day and run more tests this evening.
I've just started it up and it's sitting at 459MB of 1.023GB, will keep an eye on it. What tool did you use to get those graphs? 😄
I'm using New Relic to get the graphs: it's SaaS, so there's no server to set up on my side, just a client-side agent to install.
Thanks. I saw ours rise to 565MB of 1023MB, but no crash yet.
After about 10 hours, memory usage kind of looks better than it did during the previous test.
Kibana's behaviour seems reasonable (currently peaking at 240MB), but there is an upward trend in the memory usage, so let's see how this goes during the next few hours. Again, Kibana is the top candidate.
Yup, my Kibana fell over during the night, although I don't have anything… Thanks.
Brill. What do you think the issue is?
According to elastic/kibana#5170, the most likely explanation is that Kibana's underlying Node.js process is failing to collect garbage properly, which might be due to Node.js getting confused by Docker and not being able to figure out how much memory is actually available. Solved by forcing garbage collection when the heap reaches 250MB.
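For context, "forcing garbage collection when the heap reaches 250MB" amounts to capping V8's old-generation heap on the Node process that runs Kibana. A minimal sketch of the idea (the path and invocation below are illustrative, not a copy of the actual commit):

```sh
# V8 normally sizes its heap from the host's total memory, which inside a
# Docker container can be far more than the container is actually allowed
# to use. Capping the old generation forces full garbage collections around
# 250 MB instead of letting the heap grow until the container is OOM-killed.
node --max-old-space-size=250 /opt/kibana/src/cli
```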
Ah, interesting. Thanks for the help. Will pull it down and try.
Have pulled it and deployed. One thing I noticed when I ran docker stats elk…
Tried it, same here, but nothing too dramatic, and it eventually dropped to the initial level when left alone (garbage collection kicking in, perhaps?).
Kibana seems to be at 22% memory usage of the container, and its RSS is… docker stats elk reports 642MB, which I can't seem to get to drop, although once it's at around 670MB it doesn't seem to go higher and the RSS seems stable. I'll leave it overnight and see what state it's in.
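For anyone wanting to reproduce these checks, a couple of hedged one-liners along the lines of what's being described (the container name elk matches the command above; the in-container ps call assumes procps is available in the image):

```sh
# One-shot snapshot of the container's memory usage as reported by Docker.
docker stats --no-stream elk

# Per-process view inside the container, to single out the RSS (in kB) of
# Kibana's Node process.
docker exec elk ps -o pid,rss,args -C node
```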
Just checked, and the values aren't really any higher than yesterday, so fingers crossed. I'll keep it running over the weekend and check Monday morning to see if it's still up. Thanks for the awesome help 👍
All righty! Cheers!
Still up, so it's looking good 😄
Cool! Same over here. I'll leave this issue open for a few more days, and if everything continues playing nicely I'll close it.
Brill, thanks for the support.
It's still up, so I think we're good! 👍 😄
😃 Thanks so much for your feedback, same behaviour here, so… closing the issue!
Hey guys, I'm experiencing the same behavior. Could you point out a way to define when garbage collection should be triggered on my server or Node instance? What should I configure for this to happen?
I don't think you need to do anything if you have pulled the latest image.
@jalagrange Are you experiencing this behaviour with the latest version of the image? This should have been solved by aaa09d3, which I published a few weeks ago (by the way, if you take a look at that specific commit you'll see how I configured garbage collection; you'll also want to have a look at elastic/kibana#5170 for background information on this issue). Also, how much memory are you dedicating to the container?
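(For reference, a memory limit can be set explicitly when starting the container, which makes the "how much memory" question unambiguous; the image name, ports and 2g figure below are only an example:)

```sh
# Start the ELK container with an explicit memory limit so the amount of
# memory dedicated to it is clear. Image name, ports and the 2g limit are
# illustrative, not a prescription.
docker run -d --name elk -m 2g \
  -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  sebp/elk
```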
Wow, thanks guys, that was quick... I am actually running Kibana 4.3.1 directly on a CentOS server that connects to a remote Elasticsearch cluster, not using Docker. But take a look at my Node memory usage over 3 hours without any type of usage: it sounds very similar to what you guys are describing. I am currently running this on an AWS micro instance, so 1GB of memory.
Ah yes, looks familiar. elastic/kibana#5170 is what you want to have a look at for the non-Docker version of the issue (long story short: setting NODE_OPTIONS to cap Node's heap size does the trick).
Thanks a lot @spujadas! I did just that and took a look at the issue you mentioned. I'm pretty confident it will work, but I'll post back in case it doesn't. Just to expand on your reply (in case someone else stumbles onto this): NODE_OPTIONS="--max-old-space-size=250" must be set at the beginning of the bin/kibana script that is being executed, 250 being the number of MB you wish to cap the process at.
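Concretely, the top of the edited bin/kibana would look something like the sketch below (Kibana 4.x shell wrapper assumed; only the added line and the relevant tail end of the stock script are shown):

```sh
#!/bin/sh
# bin/kibana (Kibana 4.x wrapper) -- sketch, not the full upstream script.

# Added line: cap V8's old-generation heap at 250 MB so garbage collection
# kicks in long before the process exhausts the instance's 1 GB of RAM.
NODE_OPTIONS="--max-old-space-size=250"

# ... rest of the stock script (locating the bundled Node binary, etc.) ...

# The stock wrapper already passes $NODE_OPTIONS through to Node, roughly:
exec "${NODE}" ${NODE_OPTIONS} "${DIR}/src/cli" "${@}"
```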
Yup, our Kibana on Ubuntu is constantly crashing because of this as well.
See #16 for background.