I believe the RAM usage depends on the number of connections to your node. I am building a new full node, and since I am the only one using it, it's at 9 GB. My public node is at 74 GB with 500-900 connections.
The Dallas public node has been running for about a year and now uses 107 GB of RAM; the LA node has been online for about 3 months and uses 19 GB. Each one has been averaging about the same number of connections, and both have the same full-node config.
None of my nodes, dedicated or VPS, ever went over 100 connections until today, when one reached nearly 125. That one is a dedicated server with 64 GB of RAM, and it generally hovers around 20-30 GB of RAM use.
This node has all plugins enabled and runs with partial-operations=1, max-ops-per-account=300, max-order-his-records-per-market=300, history-per-size=300, no track-accounts specified, and the default list of buckets. I do not use the Elasticsearch features.
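For reference, here is a sketch of how those settings would look in a witness_node config.ini. The option names and values are taken from the comment above; the commented-out lines (track-account, bucket-size) show assumed defaults, so verify them against the config your own node generates:

```ini
# History/memory-related options as described above (a sketch, not a full config)
partial-operations = 1                    # keep only recent ops, prune the rest
max-ops-per-account = 300                 # cap stored operation history per account
max-order-his-records-per-market = 300    # cap stored filled-order records per market
history-per-size = 300

# No track-account entries, so history is kept for all accounts:
# track-account = "1.2.0"

# Default market-history bucket list in seconds (assumed default, verify locally):
# bucket-size = [60,300,900,1800,3600,14400,86400]
```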
"Top" utility says RAM use for the witness_node process is only 13% of RAM.
800-900 connections is insane! I have no restrictions in place that would limit the number of connections. I'm not sure how you get so many unless it comes from sitting right on a major Internet artery, and I know Dallas is one place where that is quite possible.
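For anyone who wants to track their own node's connection count, here is a minimal Python sketch against the witness_node websocket RPC. It assumes the RPC endpoint is at ws://127.0.0.1:8090 and that your api-access settings expose the network_node API; both are assumptions to adapt to your setup:

```python
# Minimal sketch: count connected peers over the witness_node websocket RPC.
# Assumes ws://127.0.0.1:8090 and api-access that permits network_node calls.
import json
from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://127.0.0.1:8090")
_next_id = 0

def call(api, method, params):
    """Send one graphene-style RPC call and return its result."""
    global _next_id
    _next_id += 1
    ws.send(json.dumps({"id": _next_id, "method": "call",
                        "params": [api, method, params]}))
    return json.loads(ws.recv())["result"]

call(1, "login", ["", ""])              # empty credentials if API access is open
net_api = call(1, "network_node", [])   # resolve the network_node API id
peers = call(net_api, "get_connected_peers", [])
print(f"{len(peers)} connections")
```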
At this very moment the number of connections on that node is near 60, and RAM use is under 4% according to top, which also shows more than 35 GB of RAM available.
I run both the hertz and btwty price feeds on that server as well, and both require some history, market as well as account history. It's highly likely that if one were to "max out" the options so as to retain all history, RAM use would go through the roof and exceed the 64 GB available.
From a discussion in the Telegram channel: