
[Configuration] Limit the memory usage of influxdb #38

Closed
FGRibreau opened this issue Nov 12, 2013 · 46 comments
@FGRibreau
Hi guys!

I don't see any configuration option to limit the memory usage of influxdb, how can I achieve that?

If it's currently not configurable, it's definitely my first concern right now (in order to use it as a shadow-slave in production).

@pauldix (Member) commented Nov 12, 2013

It's not configurable right now, but we should definitely support that.
It'll be a little tricky because there's the LevelDB cache and then there's
stuff in Go. At the very least we can make the LevelDB cache limited, which
is probably where the vast majority of the memory usage will be. We can
also add setrlimit, but that might cause things to crash once it goes over.
More on this soon.
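The setrlimit approach Paul mentions can be sketched as follows. This is a minimal illustration, not InfluxDB code (InfluxDB itself is Go; Python is used here for brevity), and it shows the drawback he notes: once the address-space cap is hit, allocations fail and the process crashes rather than degrading gracefully.

```python
import resource

def cap_address_space(max_bytes):
    """Lower the soft address-space limit (RLIMIT_AS), as setrlimit would.

    Allocations beyond the cap fail, which in practice crashes the
    process rather than gracefully degrading -- the drawback noted above.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    if soft == resource.RLIM_INFINITY or soft > max_bytes:
        # Only lower the soft limit; the hard limit is left untouched.
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
    return resource.getrlimit(resource.RLIMIT_AS)

# Illustrative 8 GiB cap; a real cap for influxd would be chosen per deployment.
soft, hard = cap_address_space(8 << 30)
print("address-space soft limit:", soft)
```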


@FGRibreau (Author)

Thanks! Keep me posted!

@predictivematch

Hi, Any update on this, is this still on the roadmap?

@pauldix (Member) commented Jan 27, 2014

It is, I just assigned it to v0.6.0 which we're targeting for release in mid to late February.

@schmurfy (Contributor)

For me 0.5.0 seems to reduce the issue a bit: influxdb now uses a lot less memory on startup and managed to stabilize around 580MB. It stayed that way the whole night, but as soon as I queried the data back this morning to check how it went, it started growing again.

The memory growth rate seems slower than with 0.4.0. I will keep a close eye on it to see how it goes, but if there is no real way to control how much memory it can use, it will be hard to use it in production.

Are there any config options besides limiting the number of open leveldb databases which could help? (The number of open leveldb databases does not seem to have a great impact.)

Edit: Here are the results for last night and this morning:
[graph: InfluxDB 0.5.0 memory usage, last night and this morning]

@pauldix (Member) commented Feb 27, 2014

The leveldb settings are all that exist at the moment. What queries are you running? Does the memory only spike while a query runs, or does it stay high?

Specific limits are one of the priority items for 0.6.0, but I think having reasonable memory usage is a blocker for the 0.5.0 general release.
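For reference, the LevelDB knobs in the 0.5.x-era sample config looked roughly like this. The key names are from memory of the 0.x `config.sample.toml` and may differ by release; note they bound only the LevelDB cache, not the whole process:

```toml
[leveldb]
# Maximum LevelDB file handles kept open across all shard databases.
max-open-files = 40

# Size of the shared LRU block cache; the main lever on resident memory.
lru-cache-size = "200m"
```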


@schmurfy (Contributor)

Since I reduced the number of fields in my records, memory usage is lower, but my concern is that all I see is memory usage increasing, never decreasing.

@schmurfy (Contributor)

What does influxdb use all this memory for, by the way?
From what I understand from the different discussions, leveldb itself only uses a small amount of memory as cache for each opened database, which would mean that most if not all of the memory is used by influxdb itself, right?

My influxdb instances (both the production and test servers) are currently sitting around 1GB, which is quite a lot of memory.

As a comparison, our production mysql server uses 126MB and currently stores a lot more data than influxdb.

@pauldix (Member) commented Feb 28, 2014

The Go part of the process shouldn't be using much. The only stuff it keeps in memory when not running queries is the cluster metadata. When you run queries, things get pulled into memory. We're going to be optimizing some of that in the coming weeks.

We can probably reduce the profile by tweaking how we're working with LevelDB. We'll add some settings for this in rc.3 and see if we can get this to a more reasonable number. This is high priority for us :)

@jvshahid (Contributor)

@schmurfy can you explain what metric you are monitoring: is it virtual memory or resident memory? Also, do you know, or can you guess, what happened around 9:00 that triggered the memory increase?

@FGRibreau (Author)

> We can probably reduce the profile by tweaking how we're working with LevelDB. We'll add some settings for this in rc.3 and see if we can get this to a more reasonable number. This is high priority for us :)

Great news! 👍

@schmurfy (Contributor)

What caused the memory to increase is me loading the dashboard (which issues select queries to gather the data). The memory reported here is measured by libsigar, and from testing this is the "physical" memory used (mac os x calls it real memory, which is the best name I have seen). This value is also the one shown by top in the RES column.

My experience so far with 0.5.0 is similar to 0.4.0, except that the latter used more memory from the start and its usage increased dramatically when the data were queried. 0.5.0 still shows similar behavior, but all the numbers are lower; I have not run it for long enough to really be sure, though, so the weekend should be a good test :)

One of the things I am particularly interested in is: does this memory usage ever go down?
I will see how it goes over the weekend.
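The RES column mentioned here corresponds to the VmRSS field in /proc/&lt;pid&gt;/status on Linux. A small sketch of reading it (`resident_kib` is a hypothetical helper, shown against a captured sample so it runs anywhere):

```python
def resident_kib(status_text):
    """Parse VmRSS (resident set size, in KiB) out of /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # the kernel reports the value in kB
    return None

# A captured /proc/<pid>/status excerpt; on a live Linux box you would
# read open("/proc/%d/status" % pid).read() instead.
sample = """\
Name:\tinfluxd
VmPeak:\t 9593000 kB
VmRSS:\t  594000 kB
"""
print(resident_kib(sample) // 1024, "MiB resident")  # → 580 MiB resident
```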

@schmurfy (Contributor) commented Mar 3, 2014

Here is another one for the last three days:
[graph: memory usage over the last three days]

These are downscaled values, but it appears memory consumption does decrease, though really slowly.
And like before, as soon as I query the data the memory steps up:

[graph: memory stepping up after a query]

@pauldix pauldix modified the milestones: 0.5.0, 0.6.0 Mar 3, 2014
@jvshahid (Contributor) commented Mar 3, 2014

Commit 1565e55 fixed one memory leak. We'll continue testing and close this issue once the memory leak is gone and #286 and #287 are closed.

@jvshahid (Contributor) commented Mar 3, 2014

Issues #286 and #287 are closed. We're investigating a possible memory leak so we won't close this issue yet. However, we're going to release rc3 very soon, please give it a try and send us your feedback.

@schmurfy (Contributor) commented Mar 4, 2014

Nice, I will deploy rc3 right away!

@schmurfy (Contributor) commented Mar 5, 2014

The news is really good :)
Since I deployed rc3, memory started by slowly rising to around 240MB and mostly stayed there. I ran a lot of queries this morning; memory usage rose to 295MB and later dropped slightly.

It looks like some memory is still leaked, but nothing like before. Going from 2GB to ~300MB is a really nice change.

@pauldix (Member) commented Mar 11, 2014

You mentioned on the list that the memory usage is fairly flat at this point. So closing this one for now. Let me know if you think there's something else that's actionable that we can do.

@pauldix pauldix closed this as completed Mar 11, 2014
@schmurfy (Contributor)

Yeah, the memory usage is now fairly stable: the memory used when running queries seems to be freed afterwards, and usage is now nearly a straight line on my servers.

jvshahid pushed a commit that referenced this issue Aug 12, 2014
change log ouput format to Lmicroseconds
@guruprasadar

I am using the latest influxdb (0.10). The influxdb data size is around 1.5 GB, but it is consuming more than 6GB of memory. Is there an issue with this? My db size will grow continuously, so I am worried I may run out of memory.

Below is the top output for influxdb:
11222 influxdb 20 0 9593m 6.8g 343m S 0.7 44.0 24:59.95 influxd

du -sh influxdb
1.5G influxdb

@DavidSoong128

What is the resolution of this question now? If someone knows, please tell me. Thank you.

@pistonsky commented Oct 12, 2016

I had the same issue with memory overload.

I was using influxdb in a docker container from tutum/influxdb; the influxdb version was 0.9.6.1. Whenever I ran a query with GROUP BY time(1h) over a period of 7 days (about 10K points to process), my influxdb container consumed all available memory (~4GB) and got killed by the linux kernel.

Then I switched to influxdb container (not tutum/influxdb), which had the version 1.0.1, and the problem was gone.

Just use the official docker image influxdb!
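The pattern described above is an aggregation over many raw points, along the lines of the following InfluxQL (measurement and field names are hypothetical):

```sql
-- ~7 days of data rolled up into 1-hour buckets; the engine must scan
-- and group every raw point in the window, which is where memory spikes.
SELECT MEAN("value")
FROM "cpu_load"
WHERE time > now() - 7d
GROUP BY time(1h)
```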

@pistonsky

Unfortunately, I have to update this.

The new influxdb 1.0.1 also fails. After working more or less stably for a couple of hours, sometimes even dropping memory to ~400MB, a bigger query with a larger timeframe in the GROUP BY clause made it fail again, killed by the kernel just like 0.9.6.1.

Please suggest a solution!

@psinghsp commented May 4, 2017

Same problem here with the latest 1.2. I can only use this product if it can restrain itself within the memory limits I allocate to it. I should be able to trade off time for memory consumption, instead of influxd just getting killed by the Linux kernel.

Maybe I am not configuring influxd properly. Can someone please comment?

@tau0 commented May 26, 2017

[screenshot: memory usage climbing until the crash]

It consumed all available memory and then crashed. :(

PS: 1.2.2, 1.2.4 and current nightly build.

@helwodlrol commented Jun 12, 2017

how did you solve this problem? @tau0

@lukaszgryglicki

What is the status of this?
I have the influxd process consuming 29G of my 32G of RAM, and it recently failed to copy one database to another due to lack of memory...
Is there a way to limit influxd RAM usage?

@kevinpattison

I'll add a 2018 entry to this serious bug report from 2013. Please can this be addressed? InfluxDB crashed during the night; I woke up and restarted it, but I've lost 8 hours of data forever.

It's ridiculous that a process this mature cannot manage its own memory usage or recover from a query going OOM.

@kevinpattison

Just to clarify, the original query is still valid. This was closed by pauldix when a memory leak was resolved, but that was not the original request.

@Alexadra commented Jun 6, 2019

What is the status of this issue? We still experience the same behavior with the latest influxdb version.

@guruprasadar commented Jun 6, 2019 via email

@SniperCZE

Hello, we're facing the same problem: influx consumes all available memory and the OOM killer shoots it down. Please reopen this and implement some maximum limit.

@xiandong79

facing the problem~

@anandkp92

Still facing the issue.
Is there no configuration that can be set to limit the RAM use?

@iBug commented Oct 21, 2019

My influxd is constantly dying of OOM. I worked around this issue by modifying the systemd config for InfluxDB using the following command:

systemctl edit influxdb.service

Write the following content into the editor that shows up, save and quit.

[Service]
MemoryMax=128M

This way the InfluxDB daemon has its memory limited to 128 MiB and OOM should kick in when it exceeds the limit.

Still only a workaround, not a solution, but the above trick does at least prevent other important services from being killed by killing influxd early.

Note that you should set the value according to your own configuration. My influxd typically uses less than 40 MB of RAM so setting a death limit to 128 MiB seems reasonable.
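For the container deployments mentioned earlier in this thread, the analogous workaround is the container runtime's own memory limit. A Compose-file sketch with illustrative values (`mem_limit` is the Compose v2-style key; Swarm-style files use `deploy.resources.limits.memory` instead):

```yaml
# The kernel OOM-kills influxd when it exceeds the limit,
# instead of starving everything else on the host.
services:
  influxdb:
    image: influxdb:1.8
    mem_limit: 1g
```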

@Doc999tor

Any updates with this issue?

@joshuaboniface

Still an issue with 1.7.10. High volume queries from e.g. Grafana frontends cause the Influx memory usage to balloon out of control then OOM. RAM is not unlimited - please implement some sort of configurable memory limits in Influx so that queries cannot OOM the database!

@DanS123 commented Aug 18, 2020

Having the same issue here, with small cardinality on the database in question.
We query via Grafana, and any response that returns a lot of values quickly ramps RAM up to 100% and then crashes the container.
Worse still, when InfluxDB crashes it doesn't cause the container to exit, so our restart=always does not trigger a reboot of the container. Whenever this happens, a simple restart of the container gets everything back to normal.

@amiryeshurun

I am facing this issue as well.
Using influxdb for monitoring (with Prometheus) is very powerful, but limiting the memory is necessary. Monitoring the monitoring infrastructure is ridiculous. Please help.

@Fantaztig

InfluxDB OSS 2.0 seems to include memory limits for containers.
My guess is we have to wait for the release as the value of fixing it for the current version seems kinda low.

@dimisjim

@Fantaztig

@dimisjim yes, I only glanced over the config options and took it for a total memory limit
So the problem remains open I guess 😅

@Jongy commented Sep 23, 2020

Adding myself to the party... Would also be happy to see a solution for this.

@fkusei commented Oct 7, 2020

InfluxDB memory usage is becoming a major concern here. It's currently using about 44GB of memory for a database of around 650GB.

@oliviertm commented Nov 17, 2020

I have the same problem, but after some tests (export, stop, start, version update from 1.7.6 to 1.8.3, database drop and import), I came to the conclusion that this doesn't look like a memory leak, but rather a consequence of the database structure.
This is quite well explained in this post: https://help.aiven.io/en/articles/1640634-influxdb-series-cardinality and also in this documentation: https://docs.influxdata.com/influxdb/v1.4/guides/hardware_sizing/#when-do-i-need-more-ram
In the 1.8 version of that page, it is stated that a cardinality of more than 10 million may lead to OOM errors. The cardinality of the database can be estimated by Influx with this command:
show series cardinality
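A sketch of chasing down where the cardinality comes from, using 1.x InfluxQL statements as I recall them (exact availability varies by version; "mydb" and "host" are hypothetical names):

```sql
-- Total distinct series; past roughly 10 million, 1.x setups are prone to OOM.
SHOW SERIES CARDINALITY ON "mydb"

-- Break it down per tag key to find a runaway key,
-- e.g. a unique request ID mistakenly stored as a tag.
SHOW TAG VALUES CARDINALITY ON "mydb" WITH KEY = "host"
```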

@JackThird

The original question was never resolved and the ticket was closed; can I ask why?
Since 2013, there has been no way to limit influxdb memory, right?

mgattozzi pushed a commit that referenced this issue Sep 5, 2024