net/http: Transport memory leak #43966
Comments
Just to be sure: does the problem go away when you use a new transport every time? Could you provide a full working example?
Thanks! I'll try using a new transport every time! But the comment in transport.go (line 68) says:
I use a new transport for every request, and there is no memory leak.
A minimal example for reproducing would still help, preferably on the Go Playground.
I would suggest closing this due to missing feedback.
I confirm this issue. I am getting very high memory usage on a reverse proxy using the http library:
The solution for me was to create a single
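For context, a minimal sketch of that kind of setup, assuming the fix was building one reverse proxy (and therefore one shared Transport) at startup rather than creating one per request; the backend address is illustrative:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://localhost:8081") // illustrative backend
	if err != nil {
		log.Fatal(err)
	}

	// One proxy (and therefore one Transport) for the whole process,
	// instead of constructing a new one inside every handler call.
	proxy := httputil.NewSingleHostReverseProxy(backend)
	proxy.Transport = http.DefaultTransport

	log.Fatal(http.ListenAndServe(":8080", proxy))
}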
I just tracked down a similar leak. The root cause was forgetting to
I have a repro for my case attached.
Had the same problem; middleware like this helped:

func CloseBody(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if r == nil || r.Body == nil {
				return
			}
			if err := r.Body.Close(); err != nil {
				fmt.Printf("closing body error: %+v\n", err)
			}
		}()
		h.ServeHTTP(w, r)
	})
}

Maybe it will be helpful for people who came here with the same problem from Google. So, it seems there is nothing wrong with
I also encountered the same problem. There was a memory leak when using a global http.Transport (even though I closed the resp body every time). Here is the pprof: http://img.aladdinding.cn/202304030947153.png
@munding thank you for the update!
I thought the server closes the response body by itself, and this is only needed for client code. Or am I wrong? I wonder why this should fix the problem.
It is - https://cs.opensource.google/go/go/+/refs/tags/go1.20.5:src/net/http/request.go;l=179-180
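That is, the server closes the incoming request body for you; it is the client's responsibility to close (and ideally drain) the response body so the Transport can return the connection to its idle pool. A minimal client-side sketch (the URL and helper name are illustrative):

package main

import (
	"io"
	"log"
	"net/http"
)

// drainAndClose reads the remainder of the body and closes it so the
// Transport can put the keep-alive connection back into its idle pool.
func drainAndClose(body io.ReadCloser) {
	io.Copy(io.Discard, body)
	body.Close()
}

func main() {
	resp, err := http.Get("https://example.com") // illustrative URL
	if err != nil {
		log.Fatal(err)
	}
	defer drainAndClose(resp.Body)

	// ... use resp here ...
}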
Transport getConn creates wantConn w and tries to obtain an idle connection for it based on w.key. When there is no idle connection, it puts the wantConn into the idleConnWait wantConnQueue. Then getConn dials a connection for w in a goroutine and blocks. After the dial succeeds, getConn unblocks and returns the connection to the caller. At this point w is stored in idleConnWait and will not be evicted until another wantConn with the same w.key is requested or an alive connection is returned into the idle pool, which may not happen, e.g. if the server closes the connection.

The problem is that even after tryDeliver succeeds, w references the persistConn wrapper, which allocates a bufio.Reader and bufio.Writer and prevents them from being garbage collected.

To fix the problem, this change removes the persistConn and error references from wantConn and delivers them via a channel to getConn. This way wantConn can be kept in wantConnQueues arbitrarily long.

Fixes golang#43966
Fixes golang#50798
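As a heavily simplified illustration of the idea in that change (the types below are made up for the sketch and are not the real net/http internals): before the fix, the waiting struct itself held the delivered connection, so a waiter lingering in a queue kept the connection's buffers reachable; after the fix, the connection is handed over through a channel and the waiter retains nothing.

package main

import "fmt"

type conn struct {
	buf [64 * 1024]byte // stands in for persistConn's bufio.Reader/Writer
}

type result struct {
	pc  *conn
	err error
}

type waiter struct {
	// Before the fix (conceptually), fields like "pc *conn" and "err error"
	// lived here, pinning the connection while the waiter sat in a queue.
	ready chan result // after the fix: the value is delivered, not retained
}

func (w *waiter) tryDeliver(pc *conn, err error) bool {
	select {
	case w.ready <- result{pc: pc, err: err}:
		return true
	default:
		return false // already delivered
	}
}

func main() {
	w := &waiter{ready: make(chan result, 1)}
	w.tryDeliver(&conn{}, nil)
	r := <-w.ready
	fmt.Println(r.pc != nil, r.err)
	// w can now stay in a queue indefinitely without keeping r.pc alive.
}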
Change https://go.dev/cl/522095 mentions this issue:
What version of Go are you using (go version)?

1.15.7

Does this issue reproduce with the latest release?

What operating system and processor architecture are you using (go env)?

go env Output

What did you do?
I used a shared http.Transport object for every http.Client HTTP request. After running for a long time, memory usage kept getting higher and higher. I used pprof for debugging, as shown below:
My code looks like this:
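The original snippet is not preserved here, but a minimal sketch of the pattern described (one shared http.Transport used for every request, with the body closed each time) might look like this; the names, endpoint, timeouts, and loop are illustrative:

package main

import (
	"io"
	"log"
	"net/http"
	"time"
)

// One Transport shared by every request, as described in the report.
var transport = &http.Transport{
	MaxIdleConnsPerHost: 10,
	IdleConnTimeout:     90 * time.Second,
}

func main() {
	client := &http.Client{Transport: transport, Timeout: 5 * time.Second}
	for {
		time.Sleep(time.Second)
		resp, err := client.Get("http://localhost:8080/ping") // illustrative endpoint
		if err != nil {
			log.Println(err)
			continue
		}
		// Drain and close the body after every request.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}
}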
What did you expect to see?
What did you see instead?