http2: delay H2 frame serialization if the network connection's output buffer high-watermark is triggered. #14714
Changes from all commits
9226ccd
e9de29e
6442136
52cff61
84897e3
d17f568
228b2d1
@@ -258,10 +258,22 @@ RequestDecoder& ConnectionManagerImpl::newStream(ResponseEncoder& response_encod
   new_stream->response_encoder_ = &response_encoder;
   new_stream->response_encoder_->getStream().addCallbacks(*new_stream);
   new_stream->response_encoder_->getStream().setFlushTimeout(new_stream->idle_timeout_ms_);
-  // If the network connection is backed up, the stream should be made aware of it on creation.
-  // Both HTTP/1.x and HTTP/2 codecs handle this in StreamCallbackHelper::addCallbacksHelper.
-  ASSERT(read_callbacks_->connection().aboveHighWatermark() == false ||
-         new_stream->filter_manager_.aboveHighWatermark());
+  // If the network connection is backed up, the HTTP/1.x stream should be made aware of it on
+  // creation. In the case of HTTP/2 the stream should be allowed to read up to the configured
+  // stream limit even when the network connection is backed up, so the readDisable status is not
+  // propagated from the network connection if the
+  // envoy.reloadable_features.enable_h2_watermark_improvements runtime feature is enabled. Both
+  // HTTP/1.x and HTTP/2 codecs handle this in StreamCallbackHelper::addCallbacksHelper.
+  // TODO(antoniovicente) For full consistency we need to use the enable_h2_watermark_improvements
+  // latched by the H2 connection when it was created. Accessing the current value of the runtime
+  // feature may trigger spurious ASSERT failures.
+  ASSERT((Runtime::runtimeFeatureEnabled(
+              "envoy.reloadable_features.enable_h2_watermark_improvements") &&
+          codec_->protocol() >= Protocol::Http2)
+             ? !new_stream->filter_manager_.aboveHighWatermark()
+             : read_callbacks_->connection().aboveHighWatermark() == false ||
+                   new_stream->filter_manager_.aboveHighWatermark());
   LinkedList::moveIntoList(std::move(new_stream), streams_);
   return **streams_.begin();
 }

[Review thread on the replaced ASSERT]

Comment: I may get unconfused by the time I get to the bottom of this PR, but what this used to do was inform new streams to bound their reads if the upstream connection was backed up. I'm not sure how delaying serialization should affect this; I'd think we'd still want to inform the downstream codec to stop sending window updates if upstream is at capacity.

Reply: Yes, there's a difference in buffering behavior. I think flow control ends up operating based on the contents of the stream buffer and depends on readDisable when that buffer is above the watermark to suppress window updates. I think we could end up buffering more bytes total per connection, but we'd end up with stream buffers that are 2x the configured high watermark instead of connection buffers with >100x high-watermark bytes in them. There are some issues I haven't managed to work out yet, like the configured buffer sizes for the downstream stream buffer affecting how much we download from the upstream over H1 or H2. The fact that H2 stream buffers now play a role when the other end of the HTTP handler is using H1 could cause problems related to default values in the config: a downstream H2 connection without explicitly configured defaults or buffer limits would buffer 1MB in the connection's output buffer when proxying data from an H1 upstream. After this change, the buffering would be done based on the H2 stream window, so we'd buffer 1MB in the connection's output buffer and another 256MB in the stream's buffer, since that's the default. This behavior is illustrated by Http2BufferWatermarksTest.DataFlowControlHttp1Upstream. We may want to consider keeping the runtime features associated with this change disabled by default and only consider enabling them after doing several follow-up fixes.

Comment: One thing that might help this PR is updating flow_control.md to explain what's going on? Maybe that'd help make sure I grokked what you were aiming for and can match it up with what the PR does.

Reply: Sorry for not replying until now. It's been hard for me to find time to continue doing the doc changes required for this. One thing we may want to answer is whether or not the runtime feature knob added in this PR should default to true or false. There are some undesirable behaviors related to buffering when one side of the pipeline is H1 and the other is H2, but the extra buffering does reduce chances of starvation. I need to write a proposal for changes to how H2 pushes the pipeline, which would reduce buffering in several interesting cases and address several other issues. If we decide to hold back the runtime feature, should we also hold back documentation? I think we do want new documentation to be ready when we merge functionality with the flag off, commit to getting the remaining issues sorted out in the short term, and switch over to delayed serialization soon.

[Review thread on the TODO comment]

Comment: Don't you already do it with the

Reply: h2_watermark_improvements_ is not used here. Accessing it here seems challenging.

Comment: Oh yes, makes sense.
@@ -28,6 +28,22 @@ namespace Envoy {
 namespace Http {
 namespace Http2 {

+namespace {
+
+class Nghttp2Session : public Nghttp2SessionInterface {
+public:
+  explicit Nghttp2Session(nghttp2_session& session) : session_(session) {}
+
+  size_t getOutboundControlFrameQueueSize() const override {
+    return nghttp2_session_get_outbound_queue_size(&session_);
+  }
+
+private:
+  nghttp2_session& session_;
+};
+
+} // namespace
+
 // Changes or additions to details should be reflected in
 // docs/root/configuration/http/http_conn_man/response_code_details_details.rst
 class Http2ResponseCodeDetailValues {

[Review thread on getOutboundControlFrameQueueSize]

Comment: I think the DATA frame that caused NGHTTP2_ERR_WOULDBLOCK would be added to this queue as well. Probably not a big deal from the frame-counting perspective.

Reply: I think the data frame that would have blocked is not queued internally. Fairly sure the data portion of the frame is not added to the queue, given the way data frame serialization works in ConnectionImpl::StreamImpl::onDataSourceSend; the contents of the data frame are moved directly from the stream buffer to the connection's output buffer. nghttp2 is not involved in that part of the serialization.

Comment: Ok, either way it is not a big deal.
@@ -116,13 +132,22 @@ template <typename T> static T* removeConst(const void* object) {
 }

 ConnectionImpl::StreamImpl::StreamImpl(ConnectionImpl& parent, uint32_t buffer_limit)
-    : parent_(parent), local_end_stream_sent_(false), remote_end_stream_(false),
-      data_deferred_(false), received_noninformational_headers_(false),
+    : parent_(parent),
+      pending_recv_data_(parent_.connection_.dispatcher().getWatermarkFactory().create(
+          [this]() -> void { this->pendingRecvBufferLowWatermark(); },
+          [this]() -> void { this->pendingRecvBufferHighWatermark(); },
+          []() -> void { /* TODO(adisuissa): Handle overflow watermark */ })),
+      pending_send_data_(parent_.connection_.dispatcher().getWatermarkFactory().create(
+          [this]() -> void { this->pendingSendBufferLowWatermark(); },
+          [this]() -> void { this->pendingSendBufferHighWatermark(); },
+          []() -> void { /* TODO(adisuissa): Handle overflow watermark */ })),
+      local_end_stream_sent_(false), remote_end_stream_(false), data_deferred_(false),
+      received_noninformational_headers_(false),
       pending_receive_buffer_high_watermark_called_(false),
       pending_send_buffer_high_watermark_called_(false), reset_due_to_messaging_error_(false) {
   parent_.stats_.streams_active_.inc();
   if (buffer_limit > 0) {
-    setWriteBufferWatermarks(buffer_limit / 2, buffer_limit);
+    setWriteBufferWatermarks(buffer_limit);
   }
 }
@@ -131,7 +156,7 @@ ConnectionImpl::StreamImpl::~StreamImpl() { ASSERT(stream_idle_timer_ == nullptr
 void ConnectionImpl::StreamImpl::destroy() {
   disarmStreamIdleTimer();
   parent_.stats_.streams_active_.dec();
-  parent_.stats_.pending_send_bytes_.sub(pending_send_data_.length());
+  parent_.stats_.pending_send_bytes_.sub(pending_send_data_->length());
 }

 static void insertHeader(std::vector<nghttp2_nv>& headers, const HeaderEntry& header) {
@@ -239,7 +264,7 @@ void ConnectionImpl::ServerStreamImpl::encodeHeaders(const ResponseHeaderMap& he
 void ConnectionImpl::StreamImpl::encodeTrailersBase(const HeaderMap& trailers) {
   ASSERT(!local_end_stream_);
   local_end_stream_ = true;
-  if (pending_send_data_.length() > 0) {
+  if (pending_send_data_->length() > 0) {
     // In this case we want trailers to come after we release all pending body data that is
     // waiting on window updates. We need to save the trailers so that we can emit them later.
     // However, for empty trailers, we don't need to save the trailers.
@@ -395,13 +420,13 @@ void ConnectionImpl::StreamImpl::submitMetadata(uint8_t flags) {
 }

 ssize_t ConnectionImpl::StreamImpl::onDataSourceRead(uint64_t length, uint32_t* data_flags) {
-  if (pending_send_data_.length() == 0 && !local_end_stream_) {
+  if (pending_send_data_->length() == 0 && !local_end_stream_) {
     ASSERT(!data_deferred_);
     data_deferred_ = true;
     return NGHTTP2_ERR_DEFERRED;
   } else {
     *data_flags |= NGHTTP2_DATA_FLAG_NO_COPY;
-    if (local_end_stream_ && pending_send_data_.length() <= length) {
+    if (local_end_stream_ && pending_send_data_->length() <= length) {
       *data_flags |= NGHTTP2_DATA_FLAG_EOF;
       if (pending_trailers_to_encode_) {
         // We need to tell the library to not set end stream so that we can emit the trailers.
@@ -411,29 +436,38 @@ ssize_t ConnectionImpl::StreamImpl::onDataSourceRead(uint64_t length, uint32_t*
       }
     }

-    return std::min(length, pending_send_data_.length());
+    return std::min(length, pending_send_data_->length());
   }
 }

-void ConnectionImpl::StreamImpl::onDataSourceSend(const uint8_t* framehd, size_t length) {
+ssize_t ConnectionImpl::StreamImpl::onDataSourceSend(const uint8_t* framehd, size_t length) {
   // In this callback we are writing out a raw DATA frame without copying. nghttp2 assumes that we
   // "just know" that the frame header is 9 bytes.
   // https://nghttp2.org/documentation/types.html#c.nghttp2_send_data_callback
   static const uint64_t FRAME_HEADER_SIZE = 9;

+  if (parent_.h2_watermark_improvements_ && length > 0 &&
+      parent_.connection_.aboveHighWatermark()) {
+    // The network connection's output buffer is full. Delay generation of the data frame until the
+    // buffer's size drops to its low-watermark.
+    return NGHTTP2_ERR_WOULDBLOCK;
+  }
+
   parent_.protocol_constraints_.incrementOutboundDataFrameCount();

   Buffer::OwnedImpl output;
   parent_.addOutboundFrameFragment(output, framehd, FRAME_HEADER_SIZE);
-  if (!parent_.protocol_constraints_.checkOutboundFrameLimits().ok()) {
+  if (!parent_.protocol_constraints_.checkOutboundFrameLimits(Nghttp2Session(*parent_.session_))
+           .ok()) {
     ENVOY_CONN_LOG(debug, "error sending data frame: Too many frames in the outbound queue",
                    parent_.connection_);
     setDetails(Http2ResponseCodeDetails::get().outbound_frame_flood);
   }

   parent_.stats_.pending_send_bytes_.sub(length);
-  output.move(pending_send_data_, length);
+  output.move(*pending_send_data_, length);
   parent_.connection_.write(output, false);
+  return 0;
 }

 void ConnectionImpl::ClientStreamImpl::submitHeaders(const std::vector<nghttp2_nv>& final_headers,

[Review thread on returning NGHTTP2_ERR_WOULDBLOCK]

Comment: This will cause control frames to be accumulated inside nghttp2. I could not see a stat counter that tracks how many bytes or frames are pending in nghttp2.

Reply: Do we have stats counters that track the number of frames pending in the current codec implementation?

Comment: Were we looking for stats, or just to make sure our current frame flood logic was unified, so that network connection frames and nghttp2 frames both count towards the flood limits?

Reply: I was agreeing with Yan that no such stats counters exist.

Comment: I guess my question is whether we're losing some of our flooding protection then?

Reply: We don't enforce a byte limit on response headers. What we enforce is a count of the number of outstanding control frames. The limit on outstanding control frames is still enforced correctly; we sum the number of frames that are serialized and the number of frames internally queued by nghttp2. So I think we're not losing any flood protections. Http2BufferWatermarksTest.PingFloodAfterHighWatermark provides test coverage. We could add a similar test with header frames.

Reply: I do not think we lose any flood protection. The frames that are accumulated by nghttp2 are accounted for using the
@@ -488,7 +522,7 @@ void ConnectionImpl::StreamImpl::encodeDataHelper(Buffer::Instance& data, bool e

   local_end_stream_ = end_stream;
   parent_.stats_.pending_send_bytes_.add(data.length());
-  pending_send_data_.move(data);
+  pending_send_data_->move(data);
   if (data_deferred_) {
     int rc = nghttp2_session_resume_data(parent_.session_, stream_id_);
     ASSERT(rc == 0);
@@ -500,7 +534,7 @@ void ConnectionImpl::StreamImpl::encodeDataHelper(Buffer::Instance& data, bool e
     // Intended to check through coverage that this error case is tested
     return;
   }
-  if (local_end_stream_ && pending_send_data_.length() > 0) {
+  if (local_end_stream_ && pending_send_data_->length() > 0) {
     createPendingFlushTimer();
   }
 }
@@ -568,7 +602,11 @@ ConnectionImpl::ConnectionImpl(Network::Connection& connection, CodecStats& stat
       protocol_constraints_(stats, http2_options),
       skip_encoding_empty_trailers_(Runtime::runtimeFeatureEnabled(
           "envoy.reloadable_features.http2_skip_encoding_empty_trailers")),
+      h2_watermark_improvements_(Runtime::runtimeFeatureEnabled(
+          "envoy.reloadable_features.enable_h2_watermark_improvements")),
       dispatching_(false), raised_goaway_(false), pending_deferred_reset_(false),
+      send_pending_frames_cb_(connection_.dispatcher().createSchedulableCallback(
+          [this]() -> void { sendPendingFramesAndHandleError(); })),
       random_(random_generator) {
   if (http2_options.has_connection_keepalive()) {
     keepalive_interval_ = std::chrono::milliseconds(
@@ -681,9 +719,11 @@ ConnectionImpl::StreamImpl* ConnectionImpl::getStream(int32_t stream_id) {

 int ConnectionImpl::onData(int32_t stream_id, const uint8_t* data, size_t len) {
   StreamImpl* stream = getStream(stream_id);
+  // onData callback only triggers if the stream is still alive when the data arrives.
+  ASSERT(stream != nullptr);
   // If this results in buffering too much data, the watermark buffer will call
   // pendingRecvBufferHighWatermark, resulting in ++read_disable_count_
-  stream->pending_recv_data_.add(data, len);
+  stream->pending_recv_data_->add(data, len);
   // Update the window to the peer unless some consumer of this stream's data has hit a flow control
   // limit and disabled reads on this stream
   if (!stream->buffersOverrun()) {
@@ -838,10 +878,10 @@ Status ConnectionImpl::onFrameReceived(const nghttp2_frame* frame) {
       // It's possible that we are waiting to send a deferred reset, so only raise data if local
       // is not complete.
       if (!stream->deferred_reset_) {
-        stream->decoder().decodeData(stream->pending_recv_data_, stream->remote_end_stream_);
+        stream->decoder().decodeData(*stream->pending_recv_data_, stream->remote_end_stream_);
       }

-      stream->pending_recv_data_.drain(stream->pending_recv_data_.length());
+      stream->pending_recv_data_->drain(stream->pending_recv_data_->length());
       break;
     }
     case NGHTTP2_RST_STREAM: {
@@ -887,7 +927,14 @@ int ConnectionImpl::onFrameSend(const nghttp2_frame* frame) {
   case NGHTTP2_HEADERS:
   case NGHTTP2_DATA: {
     StreamImpl* stream = getStream(frame->hd.stream_id);
-    stream->local_end_stream_sent_ = frame->hd.flags & NGHTTP2_FLAG_END_STREAM;
+    // Check if stream is still active before recording flag state. The onFrameSend callback runs
+    // once the frame is serialized to the output buffer; queued frames can be sent after the
+    // associated stream has been terminated.
+    // TODO(antoniovicente) Is this delay in recording local_end_stream_sent_ when writes return
+    // NGHTTP2_ERR_WOULDBLOCK acceptable?
+    if (stream != nullptr) {
+      stream->local_end_stream_sent_ = frame->hd.flags & NGHTTP2_FLAG_END_STREAM;
+    }
     break;
   }
   }
@@ -953,6 +1000,12 @@ void ConnectionImpl::addOutboundFrameFragment(Buffer::OwnedImpl& output, const u

 ssize_t ConnectionImpl::onSend(const uint8_t* data, size_t length) {
   ENVOY_CONN_LOG(trace, "send data: bytes={}", connection_, length);
+  if (h2_watermark_improvements_ && connection_.aboveHighWatermark()) {
+    // The network connection's output buffer is full. Return NGHTTP2_ERR_WOULDBLOCK so nghttp2
+    // queues the control frame until the buffer's size drops to its low-watermark.
+    return NGHTTP2_ERR_WOULDBLOCK;
+  }
+
   Buffer::OwnedImpl buffer;
   addOutboundFrameFragment(buffer, data, length);
@@ -1119,7 +1172,7 @@ Status ConnectionImpl::sendPendingFrames() {

   // After all pending frames have been written into the outbound buffer, check if any of the
   // protocol constraints have been violated.
-  Status status = protocol_constraints_.checkOutboundFrameLimits();
+  Status status = protocol_constraints_.checkOutboundFrameLimits(Nghttp2Session(*session_));
   if (!status.ok()) {
     ENVOY_CONN_LOG(debug, "error sending frames: Too many frames in the outbound queue.",
                    connection_);
@@ -1232,8 +1285,7 @@ ConnectionImpl::Http2Callbacks::Http2Callbacks() {
       [](nghttp2_session*, nghttp2_frame* frame, const uint8_t* framehd, size_t length,
          nghttp2_data_source* source, void*) -> int {
         ASSERT(frame->data.padlen == 0);
-        static_cast<StreamImpl*>(source->ptr)->onDataSourceSend(framehd, length);
-        return 0;
+        return static_cast<StreamImpl*>(source->ptr)->onDataSourceSend(framehd, length);
       });

   nghttp2_session_callbacks_set_on_begin_headers_callback(
@@ -1364,13 +1416,9 @@ ConnectionImpl::Http2Options::Http2Options(
     ENVOY_LOG(trace, "Codec does not have Metadata frame support.");
   }

-  // nghttp2 v1.39.2 lowered the internal flood protection limit from 10K to 1K of ACK frames.
-  // This new limit may cause the internal nghttp2 mitigation to trigger more often (as it
-  // requires just 9K of incoming bytes for smallest 9 byte SETTINGS frame), bypassing the same
-  // mitigation and its associated behavior in the envoy HTTP/2 codec. Since envoy does not rely
-  // on this mitigation, set back to the old 10K number to avoid any changes in the HTTP/2 codec
-  // behavior.
-  nghttp2_option_set_max_outbound_ack(options_, 10000);
+  // Envoy implements flood protection on its own. Effectively disable nghttp2 outbound queue flood
+  // protections by setting the limit to a large number.
+  nghttp2_option_set_max_outbound_ack(options_, std::numeric_limits<size_t>::max());
 }

 ConnectionImpl::Http2Options::~Http2Options() { nghttp2_option_del(options_); }
@@ -1407,7 +1455,7 @@ RequestEncoder& ClientConnectionImpl::newStream(ResponseDecoder& decoder) {
   // If the connection is currently above the high watermark, make sure to inform the new stream.
   // The connection can not pass this on automatically as it has no awareness that a new stream is
   // created.
-  if (connection_.aboveHighWatermark()) {
+  if (!h2_watermark_improvements_ && connection_.aboveHighWatermark()) {
     stream->runHighWatermarkCallbacks();
   }
   ClientStreamImpl& stream_ref = *stream;
@@ -1424,6 +1472,8 @@ Status ClientConnectionImpl::onBeginHeaders(const nghttp2_frame* frame) {
   RETURN_IF_ERROR(trackInboundFrames(&frame->hd, frame->headers.padlen));
   if (frame->headers.cat == NGHTTP2_HCAT_HEADERS) {
     StreamImpl* stream = getStream(frame->hd.stream_id);
+    // Header frames are discarded if the stream is no longer around when the frame arrives.
+    ASSERT(stream != nullptr);
     stream->allocTrailers();
   }
@@ -1500,12 +1550,14 @@ Status ServerConnectionImpl::onBeginHeaders(const nghttp2_frame* frame) {
     ASSERT(frame->headers.cat == NGHTTP2_HCAT_HEADERS);

     StreamImpl* stream = getStream(frame->hd.stream_id);
+    // Header frames are discarded if the stream is no longer around when the frame arrives.
+    ASSERT(stream != nullptr);
     stream->allocTrailers();
     return okStatus();
   }

   ServerStreamImplPtr stream(new ServerStreamImpl(*this, per_stream_buffer_limit_));
-  if (connection_.aboveHighWatermark()) {
+  if (!h2_watermark_improvements_ && connection_.aboveHighWatermark()) {
     stream->runHighWatermarkCallbacks();
   }
   stream->request_decoder_ = &callbacks_.newStream(*stream);
@@ -1560,7 +1612,7 @@ Http::Status ServerConnectionImpl::dispatch(Buffer::Instance& data) {

 Http::Status ServerConnectionImpl::innerDispatch(Buffer::Instance& data) {
   // Make sure downstream outbound queue was not flooded by the upstream frames.
-  RETURN_IF_ERROR(protocol_constraints_.checkOutboundFrameLimits());
+  RETURN_IF_ERROR(protocol_constraints_.checkOutboundFrameLimits(Nghttp2Session(*session_)));
   return ConnectionImpl::innerDispatch(data);
 }
[Review thread on the codec_->protocol() >= Protocol::Http2 check]

Comment: "HTTP/2 and above"

Comment: I don't know how this code would work for HTTP/3.

Reply: I'm not changing that implementation. It's possible that the ASSERT below will trigger for HTTP/3; we don't seem to have tests that create streams while watermarks are triggered.

Comment: Your code is "use new watermark logic if codec_->protocol() >= Protocol::Http2". If you don't know the impact on HTTP/3, maybe you should have codec_->protocol() == Protocol::Http2?