memory consumption after upgrade to 3.4.9 #812

Closed

jgantenberg opened this issue Apr 26, 2022 · 7 comments

@jgantenberg

jgantenberg commented Apr 26, 2022

Checklist

  • I checked other issues already, but found no answer/solution
  • I checked the documentation and wiki, but found no answer/solution
  • I am running the latest version and the issue still occurs
  • I am sure that this issue is about SteVe (and not about the charging station software or something unrelated to SteVe)

Specifications

SteVe Version : 3.4.9
Operating system : Ubuntu 20.04
JDK : Zulu11.54+25-CA
Database : MariaDB 10


After the upgrade (and fixing some local configuration errors), SteVe is running stably with my wallbox. But the Java process has a really huge memory footprint.
After 15 minutes of runtime the process constantly allocates around 1.5 GB of memory, varying slightly depending on the background activity. With version 3.4.8 it was constantly around 500 MB. What causes the additional memory consumption? Is it only my setup?


@goekay
Member

goekay commented Apr 26, 2022

can you please describe your setup and context? how many charging stations and users? how busy are the stations and users? are the stations soap or ws/json stations? if they are ws/json, do they disconnect and reconnect a lot?

@jgantenberg
Author

It's my test environment at home with two test users, some RFID cards and one station. The station is ws/json 1.6 and hardly disconnects at all, maybe once or twice a day. The station is idle at the moment, so the only traffic is frequent StatusNotification messages, a Heartbeat every 300 seconds and meter values every 15 minutes. No idea what's happening in the background. No real work...
Today the underlying Java process remained at 1 GB for several hours and all of a sudden rose to nearly 2 GB, leading to swapping and one disconnect. After the reconnect the memory went down to 1.7 GB and is again at about 2 GB. No obvious reason for that...

goekay added a commit that referenced this issue Apr 27, 2022
@goekay
Member

goekay commented Apr 27, 2022

i did some load testing and monitored the memory usage using the tests in the referenced commit.

environment

  • 2021 mac book pro with apple m1 pro and 32 gb memory
  • java 11.0.14 (Zulu11.54+23-CA)
  • mariadb via docker (using our docker-compose config)
  • HEAD of the repo: mvn clean package and then java -jar target/steve.jar (obviously HEAD differs from the 3.4.9 release; there were some dependency updates, but i assume these are irrelevant for this study)
  • monitoring heap using VisualVM

prerequisite

insert a chargeBoxId issue812 and an ocppTagId user1 into the db.

test setup

one charging station, one connection. 3 different actions/messages to randomly choose from: BootNotification, StatusNotification, Authorize.

test 1: Issue812_50Mins

send one message, wait 1 minute, send another... repeat 50 times (or for approx. 50 minutes).
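for illustration, a minimal sketch of what such a loop could look like against a local SteVe instance, using the plain JDK 11 websocket client. this is not the code from the referenced commit; the endpoint path, port and payloads are assumptions based on SteVe's default ws/json setup and the prerequisite above (chargeBoxId issue812, idTag user1), so adjust them to your installation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;

public class Issue812LoadSketch {
    public static void main(String[] args) throws Exception {
        // assumed default SteVe ocpp-j endpoint; adjust host/port/path to your setup
        URI uri = URI.create("ws://localhost:8080/steve/websocket/CentralSystemService/issue812");

        WebSocket ws = HttpClient.newHttpClient().newWebSocketBuilder()
                .subprotocols("ocpp1.6")
                .buildAsync(uri, new WebSocket.Listener() {})   // default listener just consumes the responses
                .join();

        // ocpp-j call frames: [2, uniqueId, action, payload]
        List<String> frames = List.of(
                "[2,\"%d\",\"BootNotification\",{\"chargePointVendor\":\"Test\",\"chargePointModel\":\"Sketch\"}]",
                "[2,\"%d\",\"StatusNotification\",{\"connectorId\":0,\"errorCode\":\"NoError\",\"status\":\"Available\"}]",
                "[2,\"%d\",\"Authorize\",{\"idTag\":\"user1\"}]");

        Random rnd = new Random();
        for (int i = 0; i < 50; i++) {
            // pick a random action, send it, wait one minute (the test 1 rhythm)
            ws.sendText(String.format(frames.get(rnd.nextInt(frames.size())), i), true).join();
            TimeUnit.MINUTES.sleep(1);
        }
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}
```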

the used heap size indeed increases because we generate lots of garbage, but from time to time the GC kicks in to clean up. after a while, it settles into the common zig-zag pattern of java applications. nothing out of the ordinary if you ask me.

[screenshot: VisualVM heap usage graph, 2022-04-27 20:35]

test 2: Issue812_ConnectDisconnect

connect, wait 1 second, disconnect. repeat 200 times. nothing scary.
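with the same endpoint assumptions as in the sketch above (and reusing its imports), the loop for this test would look roughly like this:

```java
// test 2 sketch: connect, wait 1 second, disconnect, repeat 200 times
HttpClient client = HttpClient.newHttpClient();
URI uri = URI.create("ws://localhost:8080/steve/websocket/CentralSystemService/issue812");
for (int i = 0; i < 200; i++) {
    WebSocket ws = client.newWebSocketBuilder()
            .subprotocols("ocpp1.6")
            .buildAsync(uri, new WebSocket.Listener() {})
            .join();
    TimeUnit.SECONDS.sleep(1);
    ws.sendClose(WebSocket.NORMAL_CLOSURE, "bye").join();
}
```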

[screenshot: VisualVM heap usage graph, 2022-04-27 20:42]

test 3: Issue812_Crazy

send one random message after another 20,000 times without any waiting in-between. this graph is a lot busier and the heap size grows due to the frequent activity. but... apparently the GC does lots of clean-up, since there is no constant climb that would have led to an OOM and a kill.

i manually triggered a GC before this run to have a clean slate. after waiting for a while once the test finished, i triggered another GC. the used heap returned to almost the same value as before the run... from which i derive that there is no memory leak and no dangling references.
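the same before/after comparison can also be done programmatically instead of via VisualVM's "Perform GC" button; a rough sketch (System.gc() is only a hint to the JVM, so treat the numbers as approximate):

```java
// measure used heap before and after the test run, forcing a GC at both points
Runtime rt = Runtime.getRuntime();
System.gc();
long usedBefore = rt.totalMemory() - rt.freeMemory();

// ... run the 20,000-message loop here ...

System.gc();
long usedAfter = rt.totalMemory() - rt.freeMemory();
System.out.printf("used heap: %d MB before, %d MB after%n", usedBefore >> 20, usedAfter >> 20);
```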

[screenshot: VisualVM heap usage graph, 2022-04-27 20:47]

@jgantenberg
Author

jgantenberg commented Apr 29, 2022

I also did some testing with an interesting result. I deployed SteVe to another system, an OrangePi running Armbian buster (Debian 10). On that machine I have an OpenJDK Java installation. The DB connection is the same as in the previous setup. After 24 h of runtime I can confirm the normal behaviour seen by @goekay, the typical sawtooth pattern with a maximum below 700 MB. So it works stably as a long-running process.
So in my opinion there are two possible causes - the Java VM (Zulu vs OpenJDK) or the architecture (x64 vs ARM).
Since I have another service application (openHAB) running in a Zulu VM without problems, it is not a general problem of the Zulu VM. If I find the time, I will run further tests.

@jgantenberg
Author

So far I have not done any further research, but SteVe running on my second platform (OrangePi with Armbian buster, OpenJDK) is stable as a long-running process. Memory consumption is highly volatile but stable with respect to its maximum value (750 MB).
I will close the issue for the moment.

@rdc-Green

rdc-Green commented Jun 7, 2022

I have a similar issue. With 3.4.9, memory consumption grows to about 12 GB over 6-7 days and then the process crashes.

It is on my server:
Linux 5.13.0-1025-azure on x86_64
Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz, 4 cores
Virtualmin version 7.1-1
Real Memory 15GB

java -version
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1)
OpenJDK 64-Bit Server VM (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1, mixed mode, sharing)

I have about 20 charge points running, mostly Alfen with one Phihong.

@rdc-Green

rdc-Green commented Jun 7, 2022

I have downgraded to an earlier version of SteVe (3.4.6) and it is now working properly (average memory 650 MB).
