crypto/tls: 500% increase in allocations from (*tls.Conn).Read in go 1.17 #50657
Comments
As mentioned in the Gophers Slack discussion, it seems to me we could simply move the early-exit checks above the context manipulation functions to remove these allocations in the case where the handshake has already completed.
One thing I was trying to figure out when glancing over the code: can we just check
I would expect that we'd need to do both, so that if there was an error during handshaking, that error is returned first.
Huh, but in that case, isn't handshakeStatus still 0? I'm reading this for the first time, so I might be missing something, but it looked like the intent of the atomic status was to avoid needing to acquire the mutex to check whether the handshake had completed.
I'm not intimately familiar with that specific logic, but judging by the existing flow, we have to check the error before returning?
Change https://golang.org/cl/379034 mentions this issue:
CC @golang/security (sorry if this is not the right way to get the attention of someone more experienced with the code).
I did a quick run comparing
By looking at 1.16 vs 90d6bbb -- the last commit to touch
Is this enough of a regression to justify a backport?
Is there any update on this issue?
Thanks @dt and @FiloSottile for getting this in ❤️
@dt @FiloSottile what about 1.17 and 1.18 backports?
I'm unclear as to whether the performance regression seen here qualifies for a backport under the backport policy's definition of a serious issue with no workaround:
I'll go ahead and open backport issues to continue the discussion there, I guess? @gopherbot please consider this for backport to 1.17, as it is a regression compared to 1.16. @gopherbot please consider this for backport to 1.18, as it is a regression compared to 1.16.
Backport issue(s) opened: #52790 (for 1.17), #52791 (for 1.18). Remember to create the cherry-pick CL(s) as soon as the patch is submitted to master, according to https://go.dev/wiki/MinorReleases.
Change https://go.dev/cl/405544 mentions this issue: |
What version of Go are you using (go version)?
Does this issue reproduce with the latest release?
I believe so, but have not yet confirmed.
What operating system and processor architecture are you using (go env)?
go env output
What did you do?
I'm running a service that has many http2 clients.
What did you expect to see?
Fewer allocations when reading data from established connections.
What did you see instead?
Around 20-25% of our service's total allocations at times are coming from context.WithCancel calls made by (*tls.Conn).handshakeContext. It looks like every (*tls.Conn).Read call made by http2's ReadFrame calls c.Handshake, which calls through to c.handshakeContext, which begins with a call to context.WithCancel, which allocates.