Memory leak in server? #5170
I am seeing the same. Memory consumption keeps increasing. I had to restart it at around 1.2 GB.
What exactly occurs? Does the oom_killer slay the process? While it's possible there could be a memory leak, the server doesn't do all that much. It's possible this is just cache.
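(If the oom_killer is involved, the kernel log will say so; a quick check on a typical Linux box, not specific to this thread:)

```sh
# Check whether the kernel's OOM killer has terminated a process
dmesg | grep -i 'out of memory'
# On Debian-style systems the same messages usually also land in syslog
grep -i 'killed process' /var/log/syslog
```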
I have one instance running for 4 days and 18 hours:
top shows:
proc memory statistics:
According to the documentation:
In my case, it didn't get to a point where oom_killer had to step in, as it is running in a 4 GB VM.
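For reference, the per-process numbers mentioned above can be read straight from /proc; a minimal check (the `<pid>` placeholder and the pgrep pattern are illustrative, not from this thread):

```sh
# Resident, virtual, and swapped memory for the kibana node process;
# <pid> is a placeholder (e.g. found with `pgrep -f src/cli`)
grep -E 'VmRSS|VmSize|VmSwap' /proc/<pid>/status
```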
@rashidkpc: We can also confirm the memory leak. We started the dev server locally with esvm also running. No extra config, no custom dashboards, clean slate from a freshly cloned kibana repo. We monitored the memory usage of the process for about half an hour (we will let it run some more) and the memory usage is slowly creeping up even without running the kibana UI. It leaks a lot faster if I keep refreshing the UI. So far the amount of memory used by the process has doubled since starting it. I'm not sure what other details we could share. We have seen the memory leak happening on OSX and Linux, nodejs 0.12.7.
@rashidkpc I can also confirm that freeing up memory doesn't happen. The app container has 384 MB of memory assigned, which slowly keeps filling up until either the Cloud Foundry runtime kills the container and starts a new one, or, and this happens more often, the kibana server just silently stops serving requests, breaking the UI. I will try running the app with significantly more memory and see what happens.
@cromega did 4.1.x behave differently? 4.2 switched to node 0.12, which is slightly more relaxed about GCing. I'm not familiar with how Cloud Foundry works, unfortunately.
@rashidkpc: I am running 4.2-beta2, before that I used a 4.2-snapshot. I have no experience with 4.1. Thanks for the info, I will play around with different node versions.
@rashidkpc: The memory footprint of 4.1.2 (103 MB) is smaller than 4.2.0-beta2 (473 MB). top for 4.1.2 (uptime 1-03:33:45):
top for 4.2.0-beta2 (uptime 6-00:28:29):
This last one is the same instance as my previous comment, so (resident) memory consumption has indeed decreased as you suggested above, from 1.1 GB to 473 MB.
I'm not running Kibana as a Cloud Foundry app but am seeing the same issue with 4.2.0 (final release). Kibana is using 646 MB of memory. Surely Kibana shouldn't use that much memory?
Update: 8 hours later, it has grown to this:
Running on Debian 7.9.
As a test, I restarted Kibana and left it running on the server overnight without logging into it at all on the browser. Memory usage still grows nonetheless.
I'm having the same problem running in a Docker container with 4.2.0 and Elasticsearch 2.0.0. No problem with 4.1.2. It gets OOM-killed at 1 GiB after approximately 14 hours.
Hello, same problem here. After taking a closer look at the top command, I've noticed that the node process was taking about 40% of the available RAM, while ES and Logstash remained stable at ~40% and 7.5% (see below). On the previous version of kibana (4.1.2), with ES 1.7.2 and Logstash 1.5.4, I had no problem; the processes ran without issues for 1 month until the upgrade. ES was configured with the recommended settings: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html I'm going to leave it running for the rest of the day to check on the memory usage for each process; I'll update the thread later. Thanks for the help
Hello, after leaving the services running all night, this morning I have this scenario: the node memory usage climbed to 40% of the server RAM. Best regards
Hello, the first part of the graph shows the behaviour with Kibana 4.1. The top green line is the memory threshold of the container. Each time Kibana reaches the memory limit, it gets OOM-killed. As we can see on the graph, we have increased the memory limit a few times, up to 1.5 GB, with no change in behaviour.
@rashidkpc - You've labeled this "not reproducible". What development / test environment do you guys use? Perhaps those of us on this ticket could put together something that replicates the error in your environment.
@mrdavidlaing we primarily use the Chrome dev tools for memory profiling. I'm not sure how Rashid was collecting his snapshots (perhaps by signalling the node process) but I was doing the same by running the server with iron-node:
npm install -g iron-node
cd ~/dev/kibana
iron-node ./src/cli
With iron-node you can use the memory profiling tools to collect snapshots and see the items that are not being garbage collected, or track memory over time and force garbage collection. These kinds of metrics are a lot more meaningful than the total process memory level, since v8 will only do light garbage collection until it detects serious memory pressure.
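For a rougher view of growth over time without a profiler, a plain shell loop over ps works too; a small sketch (the `<pid>` placeholder is the kibana node process id, not something taken from this thread):

```sh
# Sample the kibana node process's resident set size (KB) every 5 seconds
while true; do
  echo "$(date +%T) $(ps -o rss= -p <pid>) KB"
  sleep 5
done
```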
Curiosity got the better of me so I left the node process growing just to see what would happen. It started consuming just about all free memory on the instance (not cache memory but Active memory). IO Wait hit 100% as the OS tried to deal with the low memory situation. I had to restart node before the OOM killer kicked in.
Just for the record, we're running Kibana 4.2.0 on Ubuntu 12.04 and we see the same behaviour. At startup, the kibana node process takes ~100 MB (resident size). Our usage pattern is very light; I'm the only user of the kibana installation at the moment and I don't have any dashboard open. I've enabled verbose logging, which logs the memory usage every 5 seconds; I'll post it later.
I'm running it on Debian 7.9 (64-bit), in an EC2 instance.
Having the same issue reported by other people here with Kibana 4.2.0. Memory usage on the server is going up with a linear pattern.
Ubuntu 12.04 on EC2.
I have an OOM killer log (the VM had 2 GB RAM):
[9827667.988558] Out of memory in UB 5517: OOM killed process 8709 (node) score 0 vm:5478276kB, rss:1712056kB, swap:1977324kB
Now the server has 6 GB RAM, and kibana 4.2.0 with the sense and marvel apps goes up to 6 GB RAM after 24 hours.
Using --max-old-space-size=150 seems to fix the problem when running in Docker.
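For anyone wondering where that flag goes: it is a standard node/V8 option, so one way to apply it is to pass it to node when launching the server. A sketch using the repo entry point shown earlier in this thread (packaged installs that start via bin/kibana are wired up differently):

```sh
# Cap V8's old-generation heap at ~150 MB so the node process stays
# well below the container's memory limit
node --max-old-space-size=150 ./src/cli
```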
Looks like --max-old-space-size is working nicely, thanks @mrdavidlaing
I'm also happy to report that the PR from @mrdavidlaing also fixes the memory leak for us. 👍
Can we get this fix into a release please? I've noticed that bin/kibana in the repo still doesn't limit memory usage.
We've also recently run into a similar issue with Kibana 4.3. Edit: Will try the max-old-space-size fix.
Same issue here: Kibana 4.3 in Docker 1.6.2 on AWS EC2.
Same here; in my case I had a t2.micro instance running both Kibana 4.3 and Elasticsearch 2.1. With this particular setup Elasticsearch ended up crashing (presumably killed by the OOM killer), but it seems that the issue was with the nodejs application. @mrdavidlaing saved the day for me :) Any plans for a 4.3.1 with this patch anytime soon? 👍
--max-old-space-size=250 appears to fix it for me; process size seems to hover around 300 MB, give or take, and no more OOM. Using Kibana 4.2.1.
I'm closing this because #5451 addresses this issue by allowing you to explicitly set node's GC settings. FWIW, in a low memory environment you should be able to prevent OOM errors by starting the server with a low --max-old-space-size. Adjusting that limit makes v8 garbage-collect before memory pressure gets out of hand.
Adding for posterity: nodejs/node#2683
Given this issue was reproducible, perhaps the "not reproducible" tag should be removed?
See latest comment in #5451, the systemd service is broken due to this. Who thought that was a good idea?
@ageis, not us. Make sure you're using our APT repository.
I'm running Kibana 4.2-beta2 as a Cloud Foundry app with 384 MB of memory assigned to each app instance container.
I'm querying against a fairly small elasticsearch cluster, at a rate of about 20 requests per minute - basically very little load.
There appears to be some form of memory leak that causes the server-side nodejs portion of Kibana to gradually consume more and more memory until it exhausts the 384 MB allocated to it in the container.
A couple of follow-on questions:
Thanks!