Possible Memory Leak in LFS push #13318
Comments
@lunny Yes, but in fact app memory consumption is still high. Can it be released back to the OS?
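As an aside on this question: in Go, idle heap can be handed back to the OS explicitly. Below is a minimal sketch (plain Go, not Gitea code) using the standard `runtime` and `runtime/debug` packages to show the difference between heap the runtime retains and heap it has returned.

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// report prints how much heap the runtime holds from the OS versus how
// much of the idle part it has already returned. HeapIdle-HeapReleased
// is memory Go keeps around in case it is needed again - it shows up in
// the process RSS even though nothing in the program is using it.
func report(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%-22s HeapSys=%4d MiB  HeapIdle=%4d MiB  HeapReleased=%4d MiB\n",
		label, m.HeapSys>>20, m.HeapIdle>>20, m.HeapReleased>>20)
}

func main() {
	// Simulate a temporary spike, e.g. buffering a large upload.
	buf := make([]byte, 256<<20) // 256 MiB
	for i := range buf {
		buf[i] = byte(i)
	}
	buf = nil
	_ = buf
	report("after spike")

	// Force a GC and hand idle pages back to the OS immediately,
	// instead of waiting for the runtime's background scavenger.
	debug.FreeOSMemory()
	report("after FreeOSMemory")
}
```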
As @lunny says, I don't think your graphs show a memory leak at all. However, the spike in memory consumption concerns me: it looks as though the LFS stream is somehow being read entirely into memory (temporarily) before being released. That's weird, because we definitely try not to do that - so the question is where that is occurring.

Now, I see that you're using 1.12.1. It would be helpful if you could update to at least 1.12.5 - there was a problem in that version with repo stats reading entire files into memory - and then try again.
@zeripath OK, I will update the Gitea version and try again.
@zeripath I updated Gitea to 1.12.5.

Before LFS push: (screenshot)

After LFS push of files with a total size of 40 MB (as in the original case): (screenshot)

The situation is much better now. However, Heap Memory Obtained still increases (1.5x). Is it possible to release Heap Memory Idle? And there is still quite a high spike in memory consumption relative to the uploaded file size (40 MB).
As regards releasing heap memory, that's something you'd have to look at in the Go tuning parameters. Garbage-collected languages always have a delay between releasing memory and releasing heap; Go won't be using all that memory, it's just there in case it's needed again.

Now the real question is the cause of that spike - it worries me that we're loading a lot of data directly into memory, which is bad even if it's only temporary. The question is how to figure out what's doing that, where, and how to stop it. We do have a pprof option; the problem is that I genuinely don't know how to use it or where to look to learn how. But if you do - it's there for this reason.
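For anyone picking this up, here is one way to use that pprof option. This is a sketch that assumes Gitea's `ENABLE_PPROF` switch in the `[server]` section and the default pprof listener on `127.0.0.1:6060`; check the docs for your version.

```ini
; app.ini - assuming the ENABLE_PPROF switch in the [server] section
; (pprof then listens on 127.0.0.1:6060 by default)
[server]
ENABLE_PPROF = true
```

With that enabled, the standard Go tooling can grab a heap profile while the LFS push is being reproduced:

```sh
# show the top heap allocators while reproducing the LFS push
go tool pprof -top http://localhost:6060/debug/pprof/heap

# or save the profile and inspect it interactively later
curl -s -o heap.pprof http://localhost:6060/debug/pprof/heap
go tool pprof heap.pprof
```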
I think this is likely related to the go-git memory leak problems and should now be fixed after the no-go-git PR. @v-byte-cpu would it be possible to recheck on master?
Yes, sure. Do you have pre-built Docker images for the master branch?
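For reference (not confirmed in this thread): Gitea publishes rolling Docker images built from master, conventionally under the `gitea/gitea:dev` tag. A sketch of running one, with paths and ports as placeholders:

```sh
# assuming the gitea/gitea:dev tag tracks the master branch
docker pull gitea/gitea:dev
docker run -d --name gitea-dev -p 3000:3000 -p 2222:22 \
  -v /srv/gitea-dev:/data gitea/gitea:dev
```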
Not sure if this is a separate issue or not, but I'm seeing massive memory spikes and a consistent server crash when pushing a fairly large repo (2 GB), seemingly at the end of the LFS upload. This is the latest version of Gitea (1.13.2) in a Docker container on Ubuntu 18.04. Our Gitea LFS is set to an external bucket (Backblaze B2). Our instance has 8 GB memory and 8 GB storage, but running […]
What version? Have you tried changing your password hashing algorithm from argon2 to a different algorithm?
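For reference, switching the hash algorithm is an `app.ini` change. A sketch, assuming the `PASSWORD_HASH_ALGO` key in the `[security]` section:

```ini
; app.ini - assuming the PASSWORD_HASH_ALGO key in the [security] section
[security]
; argon2 is deliberately memory-hard; pbkdf2, scrypt and bcrypt are
; alternatives with much smaller per-hash memory footprints
PASSWORD_HASH_ALGO = pbkdf2
```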
I mentioned it at the end: 1.13.2. I'll try that and get back to you, to see if it makes a difference.
OK, I think we can probably close this - as @lunny says, the likely issue here is the memory use of argon2 password hashing if you are pushing over HTTP(S). I do expect that the no-go-git PR will also have helped. Therefore I'm going to close this.

(Of note: we will need to think carefully about argon2 and other hashing limits in future - perhaps with some maximal hashing pool or the like.)
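To illustrate why argon2 can cause spikes like this: each argon2 invocation allocates its entire memory parameter up front, so concurrent authenticated HTTP requests multiply it. A minimal sketch with `golang.org/x/crypto/argon2`, using illustrative parameters that are not necessarily Gitea's defaults:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/argon2"
)

func main() {
	// Illustrative parameters only - not necessarily Gitea's defaults.
	// The memory parameter is in KiB, so 64*1024 KiB = 64 MiB is
	// allocated for every single hash invocation.
	const (
		timeCost uint32 = 2
		memKiB   uint32 = 64 * 1024
		threads  uint8  = 4
		keyLen   uint32 = 32
	)
	salt := []byte("not-a-real-salt!") // placeholder; use a random salt

	key := argon2.IDKey([]byte("password"), salt, timeCost, memKiB, threads, keyLen)

	// Every basic-auth HTTP request that triggers a hash holds this much
	// memory at once, so N parallel LFS uploads can pin N*64 MiB.
	fmt.Printf("derived %d-byte key; each call allocated ~%d MiB\n",
		len(key), memKiB/1024)
}
```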
Description

I fetched all LFS files from one repo (not in Gitea) with `git lfs fetch --all` and pushed them to a private Gitea instance with `git lfs push --all`. At first I allocated 500 MB to Gitea in the container orchestrator; that had always been enough, as Gitea consumed only 128 MB. I noticed that Gitea was killed with OOM right after the LFS push. I then increased RAM to 1 GB; `lfs push` succeeded, but memory consumption increased significantly, to 780 MB.
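For anyone trying to reproduce this, the command sequence described above, as a sketch (the remote name and URL are placeholders):

```sh
# mirror every LFS object from the source repo, then push it all to the
# Gitea instance; remote name "gitea" and the URL are placeholders
git lfs fetch --all
git remote add gitea https://gitea.example.com/org/repo.git
git lfs push --all gitea
```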
Screenshots

stats from orchestrator: (screenshots not included)