High memory usage? #96

Open
jackyaz opened this issue Sep 17, 2024 · 2 comments
Labels
bug Something isn't working

Comments


jackyaz commented Sep 17, 2024

Bedrockifier seems to use 1GB of memory, is that expected?

jackyaz added the bug label Sep 17, 2024
Kaiede (Owner) commented Sep 17, 2024

A couple of questions:

  1. How are you measuring? VSZ can be somewhat misleading depending on what's actually going on: it represents the virtual memory address space, not necessarily real memory. (A quick sketch for checking both is below.)
  2. How do you have things configured?
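For reference, here's a quick sketch of what I mean by checking both, assuming a Linux host (VmSize is VSZ, VmRSS is resident memory, and both live in /proc/<pid>/status):

```swift
import Foundation

// Quick sketch, assuming Linux: print VmSize (VSZ) and VmRSS (resident
// memory) for a process from /proc/<pid>/status. Pass a PID as the first
// argument, or omit it to inspect the current process via "self".
let pid = CommandLine.arguments.count > 1 ? CommandLine.arguments[1] : "self"
let path = "/proc/\(pid)/status"

guard let status = try? String(contentsOfFile: path, encoding: .utf8) else {
    fatalError("Could not read \(path)")
}

for line in status.split(separator: "\n")
where line.hasPrefix("VmSize:") || line.hasPrefix("VmRSS:") {
    print(line)  // e.g. "VmSize:  1048576 kB" vs "VmRSS:  46080 kB"
}
```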

I took a quick look at my install for reference, which backs up two different servers to a NAS for me, using SSH rather than Docker to communicate with the servers. RSS (resident memory) looks normal at about 45MB. I do also see 1GB for VSZ, but digging in further, about half of this is for pages that have zero actual impact on the system other than the virtual address mapping existing. They aren't resident in RAM, have no dirty bytes, and aren't even listed as impacting swap. They only exist as an entry in the process' memory table.

Digging in further, the remaining gap seems to be pages that aren't dirty: the allocator makes a rather sizable memory mapping (~64MB), but only a fraction of it is ever used. So while the vast majority of the mapping is empty, the full 64MB entry still counts against VSZ.
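For anyone wanting to reproduce that breakdown, here's a rough sketch, assuming Linux, of the kind of walk over /proc/<pid>/smaps involved. The 16MB and 10% thresholds are arbitrary cutoffs for illustration:

```swift
import Foundation

// Rough sketch, assuming Linux: walk /proc/<pid>/smaps and flag mappings
// whose virtual size dwarfs their resident size. These are the "empty"
// entries that inflate VSZ while barely touching real memory.
let pid = CommandLine.arguments.count > 1 ? CommandLine.arguments[1] : "self"
guard let smaps = try? String(contentsOfFile: "/proc/\(pid)/smaps", encoding: .utf8) else {
    fatalError("Could not read /proc/\(pid)/smaps")
}

// Field lines look like "Size:   65536 kB"; pull out the number.
func kB(_ line: Substring) -> Int {
    let parts = line.split(separator: " ")
    return parts.count > 1 ? Int(parts[1]) ?? 0 : 0
}

var header = ""
var size = 0
var rss = 0

for line in smaps.split(separator: "\n") {
    if line.hasPrefix("Size:") {
        size = kB(line)
    } else if line.hasPrefix("Rss:") {
        rss = kB(line)
    } else if line.hasPrefix("VmFlags:") {
        // VmFlags is the last field of each mapping: evaluate, then reset.
        // Thresholds (>= 16MB mapped, < 10% resident) are arbitrary cutoffs.
        if size >= 16_384, rss * 10 < size {
            print("\(header)\n  \(size) kB mapped, only \(rss) kB resident")
        }
        size = 0
        rss = 0
    } else if let range = line.split(separator: " ", maxSplits: 1).first,
              range.contains("-") {
        // A new mapping starts with its address range, e.g. "7f12...-7f34...".
        header = String(line)
    }
}
```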

So three things I think can be taken away from this:

  1. RSS/USS measures the actual impact on memory, and steady state looks reasonable, with no clear leaks. That said, I'd need to dig in further at some point to see exactly how the usage breaks down and whether it can be brought down a bit.
  2. Something about the underlying allocator (glibc) and Swift is causing the higher VSZ reporting. Unfortunately, it also seems to lead to Linux marking quite a bit of the swap file (400MB in my case) as 'used' when there isn't anything there worth keeping in swap. It's 'clean'.
  3. The underlying allocator is also leading to heavily fragmented virtual memory space. I am not sure if this is because allocations are getting padded for security or something else; this is deep in how Swift itself calls into the allocator, and not something I've had to investigate deeply on Linux before.

I can see if one of the other allocators available does better in this scenario.

jackyaz (Author) commented Sep 17, 2024

I observed the 1GB usage in the Stats tab of Portainer for the container, so however that measures usage. I noticed the container had been up for about a month, so I restarted it, and usage dropped to about 64MB. I don't know if there's some kind of leak or if Portainer is doing dodgy reporting!
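Worth noting on that measurement: Portainer's Stats tab is fed by Docker's stats API, which reports the container's cgroup memory counter, and that counter generally includes reclaimable page cache on top of what the process actually has resident. A minimal sketch, assuming a cgroup v2 host, for splitting that number apart from inside the container:

```swift
import Foundation

// Minimal sketch, assuming a cgroup v2 host: split a container's reported
// memory usage into anonymous memory vs. reclaimable file cache. Inside a
// container, /sys/fs/cgroup reflects the container's own cgroup.
let cgroup = "/sys/fs/cgroup"

func read(_ file: String) -> String? {
    try? String(contentsOfFile: "\(cgroup)/\(file)", encoding: .utf8)
        .trimmingCharacters(in: .whitespacesAndNewlines)
}

if let current = read("memory.current") {
    // The raw counter most dashboards surface; includes page cache.
    print("memory.current: \(current) bytes")
}

if let stat = read("memory.stat") {
    for line in stat.split(separator: "\n")
    where line.hasPrefix("anon ") || line.hasPrefix("file ") {
        print(line)  // "anon" is heap/stack; "file" is reclaimable page cache
    }
}
```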
