
http2: delay H2 frame serialization if the network connection's output buffer high-watermark is triggered. #14714

Closed
wants to merge 7 commits
6 changes: 6 additions & 0 deletions docs/root/version_history/current.rst
@@ -9,6 +9,12 @@ Minor Behavior Changes
----------------------
*Changes that may cause incompatibilities for some users, but should not for most*

* http2: delay serialization of control and data frames if the network connection's output buffer
has triggered its fullness high-watermark. By avoiding eager serialization the H2 codec
implementation is able to prioritize more important frames so they are sent sooner and provide
stricter bounds on connection and stream buffers. This change can be disabled by setting the
``envoy.reloadable_features.enable_h2_watermark_improvements`` feature flag to false. If setting
this flag to false is required in a deployment, please open an issue against the project.
* tcp: setting NODELAY in the base connection class. This should have no effect for TCP or HTTP proxying, but may improve throughput in other areas. This behavior can be temporarily reverted by setting ``envoy.reloadable_features.always_nodelay`` to false.
* upstream: host weight changes now cause a full load balancer rebuild as opposed to happening
atomically inline. This change has been made to support load balancer pre-computation of data
2 changes: 1 addition & 1 deletion source/common/buffer/watermark_buffer.h
@@ -49,7 +49,7 @@ class WatermarkBuffer : public OwnedImpl {

protected:
virtual void checkHighAndOverflowWatermarks();
void checkLowWatermark();
virtual void checkLowWatermark();

private:
std::function<void()> below_low_watermark_;
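Making checkLowWatermark() virtual lets subclasses customize drain-side behavior. For readers unfamiliar with the pattern, here is a minimal self-contained sketch of a high/low watermark buffer — an illustration of the idea, not Envoy's implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

// Sketch: a byte-counting buffer that fires one callback when its size first
// exceeds the high watermark and another when it drains back to the low
// watermark. Envoy's WatermarkBuffer follows this shape but wraps real buffer
// slices and also tracks an overflow watermark.
class SketchWatermarkBuffer {
public:
  SketchWatermarkBuffer(std::function<void()> below_low, std::function<void()> above_high,
                        size_t low, size_t high)
      : below_low_(below_low), above_high_(above_high), low_(low), high_(high) {}

  void add(size_t bytes) {
    size_ += bytes;
    checkHighWatermark();
  }
  void drain(size_t bytes) {
    size_ -= bytes;
    checkLowWatermark();
  }
  bool aboveHighWatermark() const { return above_high_called_; }

private:
  void checkHighWatermark() {
    if (!above_high_called_ && size_ > high_) {
      above_high_called_ = true;
      above_high_();
    }
  }
  void checkLowWatermark() {
    if (above_high_called_ && size_ <= low_) {
      above_high_called_ = false;
      below_low_();
    }
  }
  std::function<void()> below_low_;
  std::function<void()> above_high_;
  size_t low_;
  size_t high_;
  size_t size_{0};
  bool above_high_called_{false};
};
```

The hysteresis (fire high once, re-arm only after draining to the low mark) is what prevents callback storms when the size oscillates around a single threshold.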
3 changes: 2 additions & 1 deletion source/common/event/dispatcher_impl.cc
@@ -191,7 +191,8 @@ TimerPtr DispatcherImpl::createTimer(TimerCb cb) {
}

Event::SchedulableCallbackPtr DispatcherImpl::createSchedulableCallback(std::function<void()> cb) {
ASSERT(isThreadSafe());
// TODO(antoniovicente) Merge in https://github.com/envoyproxy/envoy/pull/14526 and restore ASSERT
// ASSERT(isThreadSafe());
return base_scheduler_.createSchedulableCallback([this, cb]() {
touchWatchdog();
cb();
20 changes: 16 additions & 4 deletions source/common/http/conn_manager_impl.cc
@@ -257,10 +257,22 @@ RequestDecoder& ConnectionManagerImpl::newStream(ResponseEncoder& response_encod
new_stream->response_encoder_ = &response_encoder;
new_stream->response_encoder_->getStream().addCallbacks(*new_stream);
new_stream->response_encoder_->getStream().setFlushTimeout(new_stream->idle_timeout_ms_);
// If the network connection is backed up, the stream should be made aware of it on creation.
// Both HTTP/1.x and HTTP/2 codecs handle this in StreamCallbackHelper::addCallbacksHelper.
ASSERT(read_callbacks_->connection().aboveHighWatermark() == false ||
new_stream->filter_manager_.aboveHighWatermark());

// If the network connection is backed up, the HTTP/1.x stream should be made aware of it on
// creation. In the case of HTTP/2 the stream should be allowed to read up to the configured
Contributor:

"HTTP/2 and above"

Contributor (Author):

I don't know how this code would work for HTTP/3.

I'm not changing that implementation; it's possible that the ASSERT below will trigger for HTTP/3. We don't seem to have tests that create streams while watermarks are triggered.

Contributor:

Your code is "use new watermark logic if codec_->protocol() >= Protocol::Http2". If you don't know the impact on HTTP/3, maybe you should use codec_->protocol() == Protocol::Http2?

// stream limit even when the network connection is backed up, so the readDisable status is not
// propagated from the network connection if the
Contributor:

I may get unconfused by the time I get to the bottom of this PR but

What this used to do was inform new streams to bound their reads if the upstream connection was backed up.
Read disabling for HTTP/2 didn't stop it from reading a window's worth of data; it immediately stopped it from sending window updates to ensure that only one window's worth of data would be read.

I'm not sure how delaying serialization should affect this - I'd think we'd still want to inform the downstream codec to stop sending window updates if upstream is at capacity.

Contributor (Author):

Yes, there's a difference in buffering behavior. I think flow control ends up operating based on the contents of the stream buffer and depends on readDisable, when that buffer is above watermark, to suppress window updates. I think that we could end up buffering more bytes total per connection, but we'd end up with stream buffers which are 2x the configured high watermark instead of connection buffers with >100x high-watermark bytes in them.

There are some issues that I haven't managed to work out yet, like the configured buffer sizes for the downstream stream buffer affecting how much we download from the upstream over H1 or H2. The issue of H2 stream buffers now playing a role in cases where the other end of the HTTP handler is using H1 could cause some problems related to default values in the config: a downstream H2 connection without explicitly configured defaults or buffer limits would buffer 1MB in the connection's output buffer when proxying data from an H1 upstream. After this change, the buffering would be done based on the H2 stream window, so we'd buffer 1MB in the connection's output buffer and another 256MB in the stream's buffer, since that's the default. This behavior is illustrated by Http2BufferWatermarksTest.DataFlowControlHttp1Upstream.

We may want to consider keeping the runtime features associated with this change disabled by default and only consider enabling them after doing several followup fixes.

Contributor:

one thing that might help this PR is updating flow_control.md to explain what's going on? Maybe that'd help make sure I grokked what you were aiming for and can match it up with what the PR does?

Contributor (Author):

Sorry for not replying until now. It's been hard for me to find time to continue doing the doc changes required for this.

One thing that we may want to answer is whether or not the runtime feature knob added in this PR should default to true or false. There are some undesirable behaviors related to buffering when one side of the pipeline is H1 and the other is H2, but the extra buffering does reduce the chances of starvation. I need to write a proposal for changes to how H2 pushes the pipeline which would reduce buffering in several interesting cases and address several other issues. If we decide to hold back the runtime feature, should we also hold back documentation? I think we do want new documentation to be ready when we merge the functionality with the flag off, commit to getting the remaining issues sorted out in the short term, and switch over to delayed serialization soon.

// envoy.reloadable_features.enable_h2_watermark_improvements runtime feature is enabled. Both
// HTTP/1.x and HTTP/2 codecs handle this in StreamCallbackHelper::addCallbacksHelper.
// TODO(antoniovicente) For full consistency we need to use the enable_h2_watermark_improvements
Contributor:

Don't you already do it with the ConnectionImpl::h2_watermark_improvements_ ?

Contributor (Author) @antoniovicente, Jan 19, 2021:

h2_watermark_improvements_ is not used here. Accessing it here seems challenging.

Contributor:

Oh yes, makes sense

// latched by the H2 connection when it was created. Accessing the current value of the runtime
// feature may trigger spurious ASSERT failures.
ASSERT((Runtime::runtimeFeatureEnabled(
"envoy.reloadable_features.enable_h2_watermark_improvements") &&
codec_->protocol() >= Protocol::Http2)
? !new_stream->filter_manager_.aboveHighWatermark()
: read_callbacks_->connection().aboveHighWatermark() == false ||
new_stream->filter_manager_.aboveHighWatermark());
LinkedList::moveIntoList(std::move(new_stream), streams_);
return **streams_.begin();
}
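The branchy ASSERT above can be restated as a small predicate. This sketch (a hypothetical helper, not code in the PR) encodes the same invariant: with the improvements enabled on an HTTP/2-or-above codec, a freshly created stream must not start above its watermark; otherwise, a backed-up connection implies the stream was informed at creation.

```cpp
#include <cassert>

// Sketch of the invariant checked by the ASSERT in
// ConnectionManagerImpl::newStream. All names are illustrative.
bool newStreamWatermarkInvariant(bool improvements_enabled, bool is_http2_or_above,
                                 bool connection_above_high, bool stream_above_high) {
  if (improvements_enabled && is_http2_or_above) {
    // New watermark logic: the stream must not begin life above watermark,
    // because readDisable state is no longer propagated from the connection.
    return !stream_above_high;
  }
  // Legacy logic: if the connection is backed up, the stream must already
  // reflect that (via StreamCallbackHelper::addCallbacksHelper).
  return !connection_above_high || stream_above_high;
}
```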
102 changes: 79 additions & 23 deletions source/common/http/http2/codec_impl.cc
@@ -28,6 +28,22 @@ namespace Envoy {
namespace Http {
namespace Http2 {

namespace {

class Nghttp2Session : public Nghttp2SessionInterface {
public:
explicit Nghttp2Session(nghttp2_session& session) : session_(session) {}

size_t getOutboundControlFrameQueueSize() const override {
return nghttp2_session_get_outbound_queue_size(&session_);
Contributor:

I think the DATA frame that caused NGHTTP2_ERR_WOULDBLOCK would be added to this queue as well. Probably not a big deal from the frame-counting perspective.

Contributor (Author):

I think that the data frame that would have blocked is not queued internally. I'm fairly sure that the data portion of the frame is not added to the queue, given the way data frame serialization works in ConnectionImpl::StreamImpl::onDataSourceSend; the contents of the data frame are moved directly from the stream buffer to the connection's output buffer. nghttp2 is not involved in that part of the serialization.

Contributor:

Ok, either way it is not a big deal.

}

private:
nghttp2_session& session_;
};

} // namespace
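The anonymous-namespace adapter above exists so the flood-limit check can also count control frames still queued inside nghttp2, alongside frames already serialized into the output buffer. A minimal sketch of that pattern — names are illustrative, not Envoy's actual ProtocolConstraints API:

```cpp
#include <cassert>
#include <cstddef>

// Abstract source for "frames still inside the codec library", mirroring the
// role of Nghttp2SessionInterface in the diff.
class OutboundQueueSource {
public:
  virtual ~OutboundQueueSource() = default;
  virtual size_t getOutboundControlFrameQueueSize() const = 0;
};

// Test double standing in for the nghttp2-backed adapter, which would call
// nghttp2_session_get_outbound_queue_size().
class FixedQueueSource : public OutboundQueueSource {
public:
  explicit FixedQueueSource(size_t n) : n_(n) {}
  size_t getOutboundControlFrameQueueSize() const override { return n_; }

private:
  size_t n_;
};

// Returns true while the combined count (already serialized + internally
// queued) stays within the configured flood limit.
bool checkOutboundFrameLimitsSketch(size_t serialized_control_frames,
                                    const OutboundQueueSource& session, size_t limit) {
  return serialized_control_frames + session.getOutboundControlFrameQueueSize() <= limit;
}
```

Taking the session through an interface rather than nghttp2 types directly is what lets the flood check be unit-tested without a live nghttp2 session.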

// Changes or additions to details should be reflected in
// docs/root/configuration/http/http_conn_man/response_code_details_details.rst
class Http2ResponseCodeDetailValues {
@@ -116,13 +132,22 @@ template <typename T> static T* removeConst(const void* object) {
}

ConnectionImpl::StreamImpl::StreamImpl(ConnectionImpl& parent, uint32_t buffer_limit)
: parent_(parent), local_end_stream_sent_(false), remote_end_stream_(false),
data_deferred_(false), received_noninformational_headers_(false),
: parent_(parent),
pending_recv_data_(parent_.connection_.dispatcher().getWatermarkFactory().create(
[this]() -> void { this->pendingRecvBufferLowWatermark(); },
[this]() -> void { this->pendingRecvBufferHighWatermark(); },
[]() -> void { /* TODO(adisuissa): Handle overflow watermark */ })),
pending_send_data_(parent_.connection_.dispatcher().getWatermarkFactory().create(
[this]() -> void { this->pendingSendBufferLowWatermark(); },
[this]() -> void { this->pendingSendBufferHighWatermark(); },
[]() -> void { /* TODO(adisuissa): Handle overflow watermark */ })),
local_end_stream_sent_(false), remote_end_stream_(false), data_deferred_(false),
received_noninformational_headers_(false),
pending_receive_buffer_high_watermark_called_(false),
pending_send_buffer_high_watermark_called_(false), reset_due_to_messaging_error_(false) {
parent_.stats_.streams_active_.inc();
if (buffer_limit > 0) {
setWriteBufferWatermarks(buffer_limit / 2, buffer_limit);
setWriteBufferWatermarks(buffer_limit);
}
}

@@ -131,7 +156,7 @@ ConnectionImpl::StreamImpl::~StreamImpl() { ASSERT(stream_idle_timer_ == nullptr
void ConnectionImpl::StreamImpl::destroy() {
disarmStreamIdleTimer();
parent_.stats_.streams_active_.dec();
parent_.stats_.pending_send_bytes_.sub(pending_send_data_.length());
parent_.stats_.pending_send_bytes_.sub(pending_send_data_->length());
}

static void insertHeader(std::vector<nghttp2_nv>& headers, const HeaderEntry& header) {
@@ -239,7 +264,7 @@ void ConnectionImpl::ServerStreamImpl::encodeHeaders(const ResponseHeaderMap& he
void ConnectionImpl::StreamImpl::encodeTrailersBase(const HeaderMap& trailers) {
ASSERT(!local_end_stream_);
local_end_stream_ = true;
if (pending_send_data_.length() > 0) {
if (pending_send_data_->length() > 0) {
// In this case we want trailers to come after we release all pending body data that is
// waiting on window updates. We need to save the trailers so that we can emit them later.
// However, for empty trailers, we don't need to to save the trailers.
@@ -395,13 +420,13 @@ void ConnectionImpl::StreamImpl::submitMetadata(uint8_t flags) {
}

ssize_t ConnectionImpl::StreamImpl::onDataSourceRead(uint64_t length, uint32_t* data_flags) {
if (pending_send_data_.length() == 0 && !local_end_stream_) {
if (pending_send_data_->length() == 0 && !local_end_stream_) {
ASSERT(!data_deferred_);
data_deferred_ = true;
return NGHTTP2_ERR_DEFERRED;
} else {
*data_flags |= NGHTTP2_DATA_FLAG_NO_COPY;
if (local_end_stream_ && pending_send_data_.length() <= length) {
if (local_end_stream_ && pending_send_data_->length() <= length) {
*data_flags |= NGHTTP2_DATA_FLAG_EOF;
if (pending_trailers_to_encode_) {
// We need to tell the library to not set end stream so that we can emit the trailers.
@@ -411,29 +436,38 @@ ssize_t ConnectionImpl::StreamImpl::onDataSourceRead(uint64_t length, uint32_t*
}
}

return std::min(length, pending_send_data_.length());
return std::min(length, pending_send_data_->length());
}
}

void ConnectionImpl::StreamImpl::onDataSourceSend(const uint8_t* framehd, size_t length) {
ssize_t ConnectionImpl::StreamImpl::onDataSourceSend(const uint8_t* framehd, size_t length) {
// In this callback we are writing out a raw DATA frame without copying. nghttp2 assumes that we
// "just know" that the frame header is 9 bytes.
// https://nghttp2.org/documentation/types.html#c.nghttp2_send_data_callback
static const uint64_t FRAME_HEADER_SIZE = 9;

if (parent_.h2_watermark_improvements_ && length > 0 &&
parent_.connection_.aboveHighWatermark()) {
// The network connection's output buffer is full. Delay generation of the data frame until the
// buffer's size drops to its low-watermark.
return NGHTTP2_ERR_WOULDBLOCK;
Contributor:

This will cause control frames to be accumulated inside nghttp2. I could not see a stat counter that tracks how many bytes or frames are pending in nghttp2.

Contributor (Author):

Do we have stats counters that track number of frames pending in the current codec implementation?

Contributor:

Were we looking for stats, or just to make sure our current frame flood logic was unified so network connection frames and nghttp2 frames both count towards the flood limits?

Contributor (Author):

I was agreeing with Yan that no such stats counters exist.

Contributor:

I guess my question is if we're losing some of our flooding protection then?
We were using the insta-serialization into data frames to avoid a reflection-time OOM attack where someone sent a bunch of requests which would 400 faster than the network drained, queuing outbound header frames. If I understand this PR correctly, those header frames will be buffered by nghttp2 and we won't account for them. Doesn't that reopen that attack vector?

Contributor (Author):

We don't enforce a byte limit on response headers. What we enforce is a count of the number of outstanding control frames. The limit on outstanding control frames is still enforced correctly; we sum the number of frames that are serialized and the number of frames internally queued by nghttp2.

So I think we're not losing any flood protections. Http2BufferWatermarksTest.PingFloodAfterHighWatermark provides test coverage. We could add a similar test with header frames.

Contributor:

I do not think we lose any flood protection. The frames that are accumulated by nghttp2 are accounted for using the Nghttp2Session::getOutboundControlFrameQueueSize() method when making flood checks.
I was concerned about the visibility into the size of the queue maintained by nghttp2. We do have counters for buffer sizes (not number of frames), and now we will have some number of bytes sitting in the nghttp2 queue without them being reflected in any stats.

}

parent_.protocol_constraints_.incrementOutboundDataFrameCount();

Buffer::OwnedImpl output;
parent_.addOutboundFrameFragment(output, framehd, FRAME_HEADER_SIZE);
if (!parent_.protocol_constraints_.checkOutboundFrameLimits().ok()) {
if (!parent_.protocol_constraints_.checkOutboundFrameLimits(Nghttp2Session(*parent_.session_))
.ok()) {
ENVOY_CONN_LOG(debug, "error sending data frame: Too many frames in the outbound queue",
parent_.connection_);
setDetails(Http2ResponseCodeDetails::get().outbound_frame_flood);
}

parent_.stats_.pending_send_bytes_.sub(length);
output.move(pending_send_data_, length);
output.move(*pending_send_data_, length);
parent_.connection_.write(output, false);
return 0;
}
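As the discussion above notes, the DATA payload bypasses nghttp2's internal queue: only the 9-byte frame header comes from the library, and the payload is moved from the per-stream send buffer straight into the connection's output buffer. A simplified sketch of that move, using std::string as a stand-in for Envoy's Buffer types (names and shapes are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// nghttp2 hands the send-data callback a raw 9-byte frame header; see the
// FRAME_HEADER_SIZE comment in onDataSourceSend above.
constexpr size_t kFrameHeaderSize = 9;

// Appends one DATA frame to `output`: the header is copied, and `length`
// payload bytes are drained out of the stream's pending send buffer, so the
// payload is serialized exactly once and never passes through nghttp2.
void sendDataFrameSketch(std::string& pending_send, const char (&framehd)[kFrameHeaderSize],
                         size_t length, std::string& output) {
  output.append(framehd, kFrameHeaderSize);
  output.append(pending_send, 0, length);
  pending_send.erase(0, length);
}
```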

void ConnectionImpl::ClientStreamImpl::submitHeaders(const std::vector<nghttp2_nv>& final_headers,
@@ -488,7 +522,7 @@ void ConnectionImpl::StreamImpl::encodeDataHelper(Buffer::Instance& data, bool e

local_end_stream_ = end_stream;
parent_.stats_.pending_send_bytes_.add(data.length());
pending_send_data_.move(data);
pending_send_data_->move(data);
if (data_deferred_) {
int rc = nghttp2_session_resume_data(parent_.session_, stream_id_);
ASSERT(rc == 0);
@@ -500,7 +534,7 @@ void ConnectionImpl::StreamImpl::encodeDataHelper(Buffer::Instance& data, bool e
// Intended to check through coverage that this error case is tested
return;
}
if (local_end_stream_ && pending_send_data_.length() > 0) {
if (local_end_stream_ && pending_send_data_->length() > 0) {
createPendingFlushTimer();
}
}
@@ -568,7 +602,11 @@ ConnectionImpl::ConnectionImpl(Network::Connection& connection, CodecStats& stat
protocol_constraints_(stats, http2_options),
skip_encoding_empty_trailers_(Runtime::runtimeFeatureEnabled(
"envoy.reloadable_features.http2_skip_encoding_empty_trailers")),
h2_watermark_improvements_(Runtime::runtimeFeatureEnabled(
"envoy.reloadable_features.enable_h2_watermark_improvements")),
dispatching_(false), raised_goaway_(false), pending_deferred_reset_(false),
send_pending_frames_cb_(connection_.dispatcher().createSchedulableCallback(
[this]() -> void { sendPendingFramesAndHandleError(); })),
random_(random_generator) {
if (http2_options.has_connection_keepalive()) {
keepalive_interval_ = std::chrono::milliseconds(
@@ -681,9 +719,11 @@ ConnectionImpl::StreamImpl* ConnectionImpl::getStream(int32_t stream_id) {

int ConnectionImpl::onData(int32_t stream_id, const uint8_t* data, size_t len) {
StreamImpl* stream = getStream(stream_id);
// onData callback only triggers if the stream is still alive when the data arrives.
ASSERT(stream != nullptr);
// If this results in buffering too much data, the watermark buffer will call
// pendingRecvBufferHighWatermark, resulting in ++read_disable_count_
stream->pending_recv_data_.add(data, len);
stream->pending_recv_data_->add(data, len);
// Update the window to the peer unless some consumer of this stream's data has hit a flow control
// limit and disabled reads on this stream
if (!stream->buffersOverrun()) {
@@ -838,10 +878,10 @@ Status ConnectionImpl::onFrameReceived(const nghttp2_frame* frame) {
// It's possible that we are waiting to send a deferred reset, so only raise data if local
// is not complete.
if (!stream->deferred_reset_) {
stream->decoder().decodeData(stream->pending_recv_data_, stream->remote_end_stream_);
stream->decoder().decodeData(*stream->pending_recv_data_, stream->remote_end_stream_);
}

stream->pending_recv_data_.drain(stream->pending_recv_data_.length());
stream->pending_recv_data_->drain(stream->pending_recv_data_->length());
break;
}
case NGHTTP2_RST_STREAM: {
@@ -887,7 +927,14 @@ int ConnectionImpl::onFrameSend(const nghttp2_frame* frame) {
case NGHTTP2_HEADERS:
case NGHTTP2_DATA: {
StreamImpl* stream = getStream(frame->hd.stream_id);
stream->local_end_stream_sent_ = frame->hd.flags & NGHTTP2_FLAG_END_STREAM;
// Check if the stream is still active before recording flag state. The onFrameSend callback
// runs once the frame is serialized to the output buffer; queued frames can be sent after the
// associated stream has been terminated.
// TODO(antoniovicente) Is this delay in recording local_end_stream_sent_ when writes return
// NGHTTP2_ERR_WOULDBLOCK acceptable?
if (stream != nullptr) {
stream->local_end_stream_sent_ = frame->hd.flags & NGHTTP2_FLAG_END_STREAM;
}
break;
}
}
@@ -953,6 +1000,12 @@ void ConnectionImpl::addOutboundFrameFragment(Buffer::OwnedImpl& output, const u

ssize_t ConnectionImpl::onSend(const uint8_t* data, size_t length) {
ENVOY_CONN_LOG(trace, "send data: bytes={}", connection_, length);
if (h2_watermark_improvements_ && connection_.aboveHighWatermark()) {
// The network connection's output buffer is full. Return NGHTTP2_ERR_WOULDBLOCK so nghttp2
// queues the control frame until the buffer's size drops to its low-watermark.
return NGHTTP2_ERR_WOULDBLOCK;
}

Buffer::OwnedImpl buffer;
addOutboundFrameFragment(buffer, data, length);

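The early return in onSend() relies on nghttp2's send-callback contract: returning NGHTTP2_ERR_WOULDBLOCK keeps the frame queued inside the session, and the same bytes are offered again on the next send attempt. A toy model of that contract (the -504 value mirrors nghttp2's constant, but the rest is illustrative, not the real API):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Stand-in for NGHTTP2_ERR_WOULDBLOCK; in real nghttp2 this is a negative
// error code the send callback returns to pause serialization.
constexpr long kErrWouldBlock = -504;

struct OutputBuffer {
  std::string data;
  size_t high_watermark;
  bool aboveHighWatermark() const { return data.size() > high_watermark; }
};

// Sketch of the send callback: refuse to serialize while the sink is backed
// up, so the frame stays queued in the codec instead of growing the buffer.
long onSendSketch(OutputBuffer& out, const char* frame, size_t length) {
  if (out.aboveHighWatermark()) {
    return kErrWouldBlock; // nghttp2 retains the frame and retries later
  }
  out.data.append(frame, length);
  return static_cast<long>(length);
}
```

This is why the PR pairs the early return with a schedulable callback (send_pending_frames_cb_): something must re-drive serialization once the buffer drains below its low watermark.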
@@ -1119,7 +1172,7 @@ Status ConnectionImpl::sendPendingFrames() {

// After all pending frames have been written into the outbound buffer check if any of
// protocol constraints had been violated.
Status status = protocol_constraints_.checkOutboundFrameLimits();
Status status = protocol_constraints_.checkOutboundFrameLimits(Nghttp2Session(*session_));
if (!status.ok()) {
ENVOY_CONN_LOG(debug, "error sending frames: Too many frames in the outbound queue.",
connection_);
@@ -1232,8 +1285,7 @@ ConnectionImpl::Http2Callbacks::Http2Callbacks() {
[](nghttp2_session*, nghttp2_frame* frame, const uint8_t* framehd, size_t length,
nghttp2_data_source* source, void*) -> int {
ASSERT(frame->data.padlen == 0);
static_cast<StreamImpl*>(source->ptr)->onDataSourceSend(framehd, length);
return 0;
return static_cast<StreamImpl*>(source->ptr)->onDataSourceSend(framehd, length);
});

nghttp2_session_callbacks_set_on_begin_headers_callback(
@@ -1407,7 +1459,7 @@ RequestEncoder& ClientConnectionImpl::newStream(ResponseDecoder& decoder) {
// If the connection is currently above the high watermark, make sure to inform the new stream.
// The connection can not pass this on automatically as it has no awareness that a new stream is
// created.
if (connection_.aboveHighWatermark()) {
if (!h2_watermark_improvements_ && connection_.aboveHighWatermark()) {
stream->runHighWatermarkCallbacks();
}
ClientStreamImpl& stream_ref = *stream;
@@ -1424,6 +1476,8 @@ Status ClientConnectionImpl::onBeginHeaders(const nghttp2_frame* frame) {
RETURN_IF_ERROR(trackInboundFrames(&frame->hd, frame->headers.padlen));
if (frame->headers.cat == NGHTTP2_HCAT_HEADERS) {
StreamImpl* stream = getStream(frame->hd.stream_id);
// Header frames are discarded if the stream is no longer around when the frame arrives.
ASSERT(stream != nullptr);
stream->allocTrailers();
}

@@ -1500,12 +1554,14 @@ Status ServerConnectionImpl::onBeginHeaders(const nghttp2_frame* frame) {
ASSERT(frame->headers.cat == NGHTTP2_HCAT_HEADERS);

StreamImpl* stream = getStream(frame->hd.stream_id);
// Header frames are discarded if the stream is no longer around when the frame arrives.
ASSERT(stream != nullptr);
stream->allocTrailers();
return okStatus();
}

ServerStreamImplPtr stream(new ServerStreamImpl(*this, per_stream_buffer_limit_));
if (connection_.aboveHighWatermark()) {
if (!h2_watermark_improvements_ && connection_.aboveHighWatermark()) {
stream->runHighWatermarkCallbacks();
}
stream->request_decoder_ = &callbacks_.newStream(*stream);
@@ -1560,7 +1616,7 @@ Http::Status ServerConnectionImpl::dispatch(Buffer::Instance& data) {

Http::Status ServerConnectionImpl::innerDispatch(Buffer::Instance& data) {
// Make sure downstream outbound queue was not flooded by the upstream frames.
RETURN_IF_ERROR(protocol_constraints_.checkOutboundFrameLimits());
RETURN_IF_ERROR(protocol_constraints_.checkOutboundFrameLimits(Nghttp2Session(*session_)));
return ConnectionImpl::innerDispatch(data);
}
