
Much higher memory usage after upgrading from Debian 9 (19.03.15) to Debian 11 (20.10.17) #1429

bmaehr opened this issue Aug 15, 2022 · 9 comments


bmaehr commented Aug 15, 2022

I have a single Java application running in a Docker container on Debian. After upgrading from Debian 9 to Debian 10 or Debian 11, the whole system froze after a few minutes of runtime. Even logging in to the VM that Docker was running on was not possible (https://stackoverflow.com/questions/61748292/debian-10-freezes-on-gcp/73364101#73364101).

After a long search I found out that the amount of available memory was the problem. With the older version, out of the VM's 3.75 GB, about 600 MB was still free and 1.8 GB was used as cache. After the upgrade, all of that was gone. I understand that new features use more memory, but this increase is extreme.

I have seen parameters to limit the memory usage of containers, but didn't find any settings to limit the memory reserved for dockerd, containerd, and docker-proxy.
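(A possible workaround for capping the daemons themselves, independent of Docker: systemd resource controls. A minimal sketch, assuming docker.service is managed by systemd on cgroup v2; the 512M value is illustrative:)

$ sudo systemctl set-property docker.service MemoryMax=512M
# docker-proxy processes are children of dockerd, so they fall under
# this cap too; the containers themselves are NOT covered, as their
# shims live in separate cgroups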

Debian 9:

top - 20:20:19 up 19 days,  5:26,  1 user,  load average: 0.00, 0.02, 0.00
Tasks: 581 total,   2 running, 579 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.7 us,  2.0 sy,  0.0 ni, 96.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3792900 total,   606432 free,  1324184 used,  1862284 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  2149072 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
18610 pl        20   0 4311004 299216  24976 S  1.7  7.9   1:38.15 java
12040 root      20   0  521776  98516  49468 S  0.0  2.6  20:52.04 dockerd
 4912 root      20   0  577028  50124  27588 S  0.3  1.3  47:04.94 containerd
11048 root      20   0  784808  34884   8644 S  0.0  0.9 112:56.70 stackdriver-col
  635 root      20   0  120976  21484  14664 S  0.0  0.6   2:52.31 google_osconfig
  636 root      20   0  114304  15664   9768 S  0.0  0.4   3:04.19 google_guest_ag
  414 root      20   0   63788   8000   7480 S  0.0  0.2   0:15.13 systemd-journal
    1 root      20   0   57032   6936   5424 S  0.0  0.2   3:29.70 systemd
18765 root      20   0   99316   6900   5924 S  0.0  0.2   0:00.01 sshd
18593 root      20   0  110224   6276   5508 S  0.3  0.2   0:00.97 containerd-shim
 2939 root      20   0   69960   6228   5456 S  0.0  0.2   0:02.95 sshd
18771 bm        20   0   99316   4668   3616 S  0.0  0.1   0:01.49 sshd
19690 root      20   0   43380   4284   3096 R  1.0  0.1   0:23.49 top           
13895 root      20   0  216868   4152   2768 S  0.0  0.1   0:00.00 docker-proxy
14687 root      20   0  216868   4152   2768 S  0.0  0.1   0:00.00 docker-proxy
13290 root      20   0  216868   4148   2764 S  0.0  0.1   0:00.00 docker-proxy
16271 root      20   0  216868   4148   2764 S  0.0  0.1   0:00.00 docker-proxy
13843 root      20   0  216868   4144   2764 S  0.0  0.1   0:00.00 docker-proxy
14519 root      20   0  216868   4144   2764 S  0.0  0.1   0:00.00 docker-proxy
14771 root      20   0  216868   4140   2756 S  0.0  0.1   0:00.00 docker-proxy
15155 root      20   0  216868   4140   2756 S  0.0  0.1   0:00.00 docker-proxy
16319 root      20   0  216868   4140   2756 S  0.0  0.1   0:00.00 docker-proxy

Debian 11, with the same application and even some daemons removed from the startup script:

top - 20:01:37 up 11 min,  1 user,  load average: 12.30, 9.47, 5.02
Tasks: 1092 total,   1 running, 1091 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.8 us,  5.9 sy,  0.0 ni,  0.0 id, 90.0 wa,  0.0 hi,  0.3 si,  0.0 st
MiB Mem :   3678.8 total,    102.9 free,   3548.2 used,     27.7 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.      0.6 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  15710 bm        20   0 4259996 259932      0 S   0.8   6.9   0:49.71 java
   3403 root      20   0 1391996  45940      0 S   0.5   1.2   0:08.92 dockerd
   3331 root      20   0 1208136  24892      0 S   0.4   0.7   0:02.37 containerd
    419 root      20   0  719664  15192      0 S   0.4   0.4   0:02.18 google_osconfig
    415 root      20   0  715128  12384      0 S   0.4   0.3   0:02.13 google_guest_ag
   8130 root      20   0 1148420   8788      0 S   0.0   0.2   0:00.00 docker-proxy
    426 root      20   0   26536   7132      0 S   0.2   0.2   0:00.63 unattended-upgr
   6665 root      20   0 1148420   6744      0 S   0.0   0.2   0:00.00 docker-proxy
   7125 root      20   0 1148420   6744      0 S   0.0   0.2   0:00.00 docker-proxy
   7910 root      20   0 1148420   6744      0 S   0.0   0.2   0:00.00 docker-proxy
   8145 root      20   0 1148420   6744      0 S   0.0   0.2   0:00.00 docker-proxy
   8825 root      20   0 1148420   6744      0 S   0.0   0.2   0:00.00 docker-proxy
   9210 root      20   0 1148420   6744      0 S   0.0   0.2   0:00.00 docker-proxy
@cpuguy83 (Collaborator)

@bmaehr Why do you think Docker is using more memory?
Looking at your results, it looks to be using less memory.
"VIRT" is not actual memory usage, and the increase is most likely related to the Go runtime: golang/go#43699


bmaehr commented Jun 22, 2023

> @bmaehr Why do you think docker is using more memory? Looking at your results it looks to be using less memory.

Because I needed to give the VM much more memory to make it run at all. I don't know why the memory usage of the whole system increased so much, but it now needs twice as much memory.
Maybe it is not dockerd eating up the memory but all the docker-proxy instances (not all of them are visible in the output above).

@cpuguy83 (Collaborator)

The memory usage of docker-proxy does seem to have gone up for some reason (looks like ~2 MB per process?).

@neersighted (Member)

How many docker-proxy instances are you running (i.e., how many ports are you mapping)? @akerouanton is working on spawning fewer docker-proxy instances so we can have lower overhead.

Nonetheless, I'm with @cpuguy83 on this likely being unrelated; the only way I can see dockerd/docker-proxy being at fault is if you have enough instances that the increase in RSS is eating all your memory.
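(A quick shell sketch to quantify this on the affected VM, assuming procps ps and awk are available:)

$ pgrep -c docker-proxy                  # number of proxy processes
$ ps -C docker-proxy -o rss= | awk '{s+=$1} END {printf "%.1f MiB total RSS\n", s/1024}'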


bmaehr commented Jun 30, 2023

500 ports (we are running an FTP server and need them for passive mode)
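(For reference, such a mapping looks roughly like the following; the image name and exact port range are placeholders:)

$ docker run -d --name ftp \
    -p 21:21 -p 50000-50499:50000-50499 \
    some-ftpd-image
# with the userland proxy enabled, dockerd spawns one docker-proxy
# process per published host port, i.e. ~501 processes here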

@neersighted (Member)

Ah, that is a pretty high number of ports. With an RSS increase of ~2 MB per process, 500 instances add roughly 1 GB of resident memory, which certainly accounts for some of your memory growth.

I don't think we've looked at what may have changed in the docker-proxy to increase memory usage "recently" (read: in the past 4 years), but I think the best solution will be a single copy of the proxy per container.

@cpuguy83 (Collaborator)

My recommendation would be to set --userland-proxy=false on the daemon so that it uses purely iptables for port forwarding.
It does look like RSS went up on your system.
I assume some of the increase is due to SCTP support.
Interestingly, SHR is down to 0 for you, which is likely also part of the cause of the higher RSS.
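(For a persistent setup, the equivalent daemon.json key is userland-proxy; a sketch:)

$ cat /etc/docker/daemon.json
{
  "userland-proxy": false
}
$ sudo systemctl restart docker
# published ports keep working via iptables DNAT; the docker-proxy
# processes disappear once the daemon and containers are restarted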

@neersighted (Member)

SHR being 0 and RSS increasing by a similar amount makes me suspect dynamic vs. static linkage of the binary as a potential cause of the higher memory usage.
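(This is easy to verify on both systems; the binary path varies by packaging, so adjust as needed:)

$ file $(command -v docker-proxy || echo /usr/libexec/docker/docker-proxy)
# the output says "dynamically linked" or "statically linked";
# ldd on a static binary would print "not a dynamic executable"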

@neersighted (Member)

Related docker-proxy issue: moby/moby#11185
