
Geth is unreasonably reducing the cache size ignoring the command line #17791

Closed
stricq opened this issue Sep 30, 2018 · 9 comments

@stricq

stricq commented Sep 30, 2018

System information

Geth version: 1.8.15-stable
OS & Version: Windows 10, 16 GB RAM

Expected behaviour

Geth should respect the amount of cache provided on the command line.

Actual behaviour

Geth is unreasonably reducing the cache size even though there is more than enough RAM on the PC.

WARN [09-30|00:36:17.286] Sanitizing cache to Go's GC limits provided=8192 updated=5461

Steps to reproduce the behaviour

Set a cache size of at least 8192 on the command line. Geth reduces the size regardless of the free RAM available on the machine, as shown below.
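For example, on a 16 GB machine (a sketch of the reported setup; `--cache` is geth's standard flag, and the warning line is the one quoted above; other startup output omitted):

```
$ geth --cache 8192
...
WARN [09-30|00:36:17.286] Sanitizing cache to Go's GC limits provided=8192 updated=5461
```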

Also, in previous versions I saw Geth running perfectly fine with a cache as large as 32768 (on a different PC); now it won't go any larger than 5461.

With the larger cache size, Geth was always able to sync and keep up with the current blockchain. Now that it won't go any larger than 5461, it cannot keep up and falls farther and farther behind: the blockchain grows faster than Geth can sync.

@FreekPaans

This seems to be related to #16800.

However, I think geth is a bit too conservative now, which results in memory going to waste on my machines. It would be nice if we could tune this a little bit.

@xakepp35

xakepp35 commented Jan 14, 2019

Confirming.

I have 48 GB of RAM and a geth 1.9.0 testing build.
I tried to allocate 32 GB and got this warning:

Sanitizing cache to Go's GC limits provided=32768 updated=16381

The blockchain is more than 150 GB as of today (164,641,936,292 bytes).
So anything below that size should be considered sane and should not require additional "sanitizing", especially when I have paid for enough RAM to back the blockchain and a fat fibre link!
Rewrite your old code in modern, fast, limitless C++. It's 2019, guys! (I'm actually thinking about how to get rid of Go/Java/JS/.NET in favor of speed, energy and memory efficiency. This is a backend, and a backend should be written and tuned like assembly, not in slow, inefficient, QBasic-like interpreted stuff! :) :)
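For what it's worth, both reports are consistent with the cache being capped at roughly one third of total RAM: 16 GB gives 16 * 1024 / 3 ≈ 5461 MB and 48 GB gives 48 * 1024 / 3 ≈ 16381 MB. Below is a minimal sketch of that kind of sanitization, assuming the limit really is one third of total memory and using gopsutil to query it; geth's actual implementation may differ.

```go
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/mem"
)

// sanitizeCache caps the requested cache (in MiB) at one third of total RAM.
// The formula mirrors the numbers reported above (16 GB -> 5461, 48 GB -> 16381);
// this is an illustration, not geth's actual code.
func sanitizeCache(requestedMiB uint64) uint64 {
	vm, err := mem.VirtualMemory()
	if err != nil {
		// If memory info cannot be read, leave the requested value alone.
		return requestedMiB
	}
	limit := vm.Total / 1024 / 1024 / 3 // total RAM in MiB, divided by three
	if requestedMiB > limit {
		fmt.Printf("Sanitizing cache to Go's GC limits provided=%d updated=%d\n",
			requestedMiB, limit)
		return limit
	}
	return requestedMiB
}

func main() {
	fmt.Println(sanitizeCache(32768))
}
```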

@holiman
Contributor

holiman commented Dec 5, 2019

Setting the cache allowance near half your available memory causes problems, due to how the golang memory allocation and garbage collection works.
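For context: with the default GOGC=100, the Go runtime lets the heap grow to roughly twice the live data before a collection completes, so a cache that pins close to half of physical RAM can push the whole process toward the machine's full memory. A small, self-contained illustration (not geth code) of how the GC target tracks the live heap:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Retain ~512 MiB of live data in 16 MiB chunks and watch the GC's
	// target heap size (NextGC): with the default GOGC=100 it stays at
	// roughly 2x the live heap, which is why a cache near half of RAM
	// risks pushing the process into an OOM.
	var live [][]byte
	for i := 0; i < 32; i++ {
		live = append(live, make([]byte, 16<<20))
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("live heap ~ %4d MiB, GC target ~ %4d MiB\n",
			m.HeapAlloc>>20, m.NextGC>>20)
	}
	runtime.KeepAlive(live)
}
```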

@holiman
Contributor

holiman commented Dec 5, 2019

So, in other words, it's not "unreasonably reducing the cache size"; the reason is that otherwise nodes get hit by OOM.

@holiman holiman closed this as completed Dec 5, 2019
@FreekPaans

> Setting the cache allowance near half your available memory causes problems, due to how the golang memory allocation and garbage collection works.

Maybe this can be documented a bit, so people don't run into this or have questions about it. I personally don't know "how the golang memory allocation and garbage collection works"; all I see is unused memory on my nodes.

@stricq
Author

stricq commented Dec 5, 2019

In my case I wasn't even setting it to 1/4 of the available RAM and it still reduced it. Your "answer" does not address the question. It's a moot point anyway; I stopped running geth when this issue appeared, and I don't plan on using it ever again, because of this issue.

@nuliknol

> In my case I wasn't even setting it to 1/4 of the available RAM and it still reduced it. Your "answer" does not address the question. It's a moot point anyway; I stopped running geth when this issue appeared, and I don't plan on using it ever again, because of this issue.

Hmmm, yeah. An unbelievable waste of resources, despite living in the 21st century. I ordered 128 GB of RAM for my development archival node, and now I am going to waste 64 GB of it because Go's garbage collector doubles memory usage!!! The only solution that comes to mind is to use a RAM disk to cache the data on disk. I have no words. If a gas station threw away half of the gas while dispensing it to cars it would go out of business immediately, but here in the IT industry we can throw away hardware left and right like nothing happened.

@holiman
Contributor

holiman commented Jan 16, 2021

> going to waste 64 GB of it because Go's garbage collector doubles memory usage!!! The only solution that comes to mind is to use a RAM disk to cache the data on disk

That's not how it works. Just because geth doesn't immediately allocate / lay claim to the entire memory bank, it doesn't mean that the rest sits idle. What happens is that the remainder is left for the OS to use wherever it is needed.

Since geth is a very IO-heavy process, a lot of that extra memory ends up being used for file system caches, which means geth rarely has to touch the disk at all, for either reads or writes. It has even been argued that app-layer caches should be avoided, in preference to OS-level file system caching.
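On Linux this is easy to see: memory the kernel uses for file system caching shows up under the buff/cache column of `free -h` rather than sitting idle. Hypothetical output for a large node; the numbers are purely illustrative:

```
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           125G         18G        2.1G        1.0M        105G        106G
Swap:            0B          0B          0B
```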

> An unbelievable waste of resources, despite living in the 21st century.

Aye, but back in the 16th century, they didn't have the finer points of mark-and-sweep GC figured out yet, so cut them some slack :)

@nuliknol

nuliknol commented Jan 16, 2021

@holiman

> That's not how it works. Just because geth doesn't immediately allocate / lay claim to the entire memory bank, it doesn't mean that the rest sits idle. What happens is that the remainder is left for the OS to use wherever it is needed.

Hmm, that's interesting! So, do you think anything will break if I just comment that code line out in cmd/geth/main.go? Something tells me I have to try it. I mean, if it starts swapping memory in and out to disk I will see it immediately in iostat, so no worries; I will just restart with a lower limit.

> in preference to OS-level file system caching.

But we don't need that; we need trie caching, not disk block caching. A disk block is 4 KB and a state object should be under 256 bytes, so it makes a big difference to memory usage whether you cache 4 KB blocks or 256-byte objects.
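(For the numbers given: 4096 / 256 = 16, so if the 256-byte objects are scattered across the database, page-level caching can end up holding roughly 16 times more memory than object-level caching would need. This is an illustrative worst case, assuming each cached object lands on its own 4 KB page.)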
