We should write some tests with multiple connections to a server, each carrying multiple streams. This will help us be sure that this and future such optimizations don't put the code in a bad state.
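Something along these lines, as a rough sketch: it assumes a generated testpb package with an Echo service exposing a bidirectional BidiEcho method and a trivial echoServer implementation — all of those names are illustrative, not from the grpc-go tree.

```go
package transport_test

import (
	"context"
	"net"
	"sync"
	"testing"

	"google.golang.org/grpc"

	testpb "example.com/echo/testpb" // hypothetical generated stubs
)

func TestManyConnsManyStreams(t *testing.T) {
	lis, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		t.Fatalf("listen: %v", err)
	}
	s := grpc.NewServer()
	testpb.RegisterEchoServer(s, &echoServer{}) // hypothetical echo implementation
	go s.Serve(lis)
	defer s.Stop()

	const conns, streams, msgs = 4, 8, 16
	var wg sync.WaitGroup
	for c := 0; c < conns; c++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cc, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure())
			if err != nil {
				t.Errorf("dial: %v", err)
				return
			}
			defer cc.Close()
			client := testpb.NewEchoClient(cc)
			var sg sync.WaitGroup
			for i := 0; i < streams; i++ {
				sg.Add(1)
				go func() {
					defer sg.Done()
					// Each stream ping-pongs several messages so frames from
					// many streams interleave on the same connection.
					stream, err := client.BidiEcho(context.Background())
					if err != nil {
						t.Errorf("open stream: %v", err)
						return
					}
					for m := 0; m < msgs; m++ {
						if err := stream.Send(&testpb.EchoRequest{Msg: "ping"}); err != nil {
							t.Errorf("send: %v", err)
							return
						}
						if _, err := stream.Recv(); err != nil {
							t.Errorf("recv: %v", err)
							return
						}
					}
					stream.CloseSend()
				}()
			}
			sg.Wait()
		}()
	}
	wg.Wait()
}
```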
Related: I noticed today that the gRPC flushing behavior when responding to a unary RPC creates 3 packets where only 1 seems necessary. These 3 packets come from 3 separate flushes in the grpc.Server.processUnaryRPC path (sketched below):

1. http2Server.Write calls http2Server.writeHeaders, which always forces a flush after writing the header.
2. http2Server.Write specifies forceFlush=true on the last call to writeData, and also directly calls flushWrite if it is the last writer.
3. Server.processUnaryRPC calls http2Server.WriteStatus, which calls writeHeaders and forces a flush.
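To make the pattern concrete, here is a toy stand-in for that call path — the types and method bodies are simplified mocks, not the actual grpc-go transport code. Running it prints three flushes for a single unary response:

```go
package main

import "fmt"

// Minimal stand-in for the transport pieces named above.
type http2Server struct{ flushes int }

func (t *http2Server) flush() { t.flushes++; fmt.Println("flush ->", t.flushes) }

func (t *http2Server) writeHeaders(frame string) {
	fmt.Println("write headers:", frame)
	t.flush() // flush #1 (and #3): writeHeaders always flushes
}

func (t *http2Server) writeData(data string, forceFlush bool) {
	fmt.Println("write data:", data)
	if forceFlush {
		t.flush() // flush #2: forceFlush=true on the last data frame
	}
}

func (t *http2Server) WriteStatus(status string) {
	t.writeHeaders("trailers: " + status) // flushes yet again
}

func main() {
	t := &http2Server{}
	t.writeHeaders(":status 200") // Write -> writeHeaders
	t.writeData("response", true) // Write -> writeData(forceFlush=true)
	t.WriteStatus("OK")           // processUnaryRPC -> WriteStatus
	// Output shows 3 flushes for one unary response.
}
```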
Given that a unary RPC always calls WriteStatus, can we eliminate the forced flushes in http2Server.Write? I hacked something up to do this, and in a simple test harness it reduced 99th-percentile latency from 1.9ms to 1.1ms and 95th-percentile latency from 1.3ms to 0.9ms.
This reduces the number of packets sent when responding to a unary RPC from 3 to 1. The reduction in packets improves latency, presumably due to lower syscall and packet overhead.
See grpc#1256
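For what it's worth, the shape of the fix could look like this — paraphrased, not the actual patch behind grpc#1256, and mirroring the toy types from the sketch above. Write and writeHeaders only buffer; WriteStatus, which every unary RPC reaches, does the single flush:

```go
package main

import "fmt"

type http2Server struct{ flushes int }

func (t *http2Server) flush() { t.flushes++; fmt.Println("flush ->", t.flushes) }

// No flushes here anymore: frames accumulate in the framer's buffer.
func (t *http2Server) writeHeaders(frame string) { fmt.Println("buffer headers:", frame) }
func (t *http2Server) writeData(data string)     { fmt.Println("buffer data:", data) }

func (t *http2Server) WriteStatus(status string) {
	t.writeHeaders("trailers: " + status)
	t.flush() // the single flush: headers, data, and trailers leave together
}

func main() {
	t := &http2Server{}
	t.writeHeaders(":status 200")
	t.writeData("response")
	t.WriteStatus("OK") // prints exactly one flush for the whole response
}
```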
The Write() method in both http2_client and http2_server writes data out in 16K chunks but force-flushes after every chunk. This seems to be an oversight; removing it will save us a lot of flush system calls.
https://github.com/grpc/grpc-go/blob/master/transport/http2_client.go#L785
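Roughly, the loop looks like this — a simplified sketch, not the actual transport code, with illustrative frame and buffer sizes — once the per-chunk flush is moved outside the loop:

```go
package main

import (
	"bufio"
	"bytes"

	"golang.org/x/net/http2"
)

const maxFrameLen = 16 * 1024 // matches the transport's 16K chunking

// writeChunks splits data into 16K DATA frames and flushes once at the end,
// instead of force-flushing on every iteration as the current code does.
func writeChunks(bw *bufio.Writer, fr *http2.Framer, streamID uint32, data []byte) error {
	for len(data) > 0 {
		n := len(data)
		if n > maxFrameLen {
			n = maxFrameLen
		}
		last := n == len(data) // simplification: end the stream on the final chunk
		if err := fr.WriteData(streamID, last, data[:n]); err != nil {
			return err
		}
		data = data[n:]
		// The oversight: a force flush used to happen here on every chunk.
	}
	return bw.Flush() // single flush pushes all buffered frames out together
}

func main() {
	var buf bytes.Buffer
	bw := bufio.NewWriterSize(&buf, 64*1024) // large enough to hold the frames
	fr := http2.NewFramer(bw, nil)
	_ = writeChunks(bw, fr, 1, make([]byte, 40*1024)) // 3 DATA frames, 1 flush
}
```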