
[gzip]: allow gzip to work w/ http backend w/o content-lenght #14584

Merged
merged 24 commits into from
Mar 11, 2021

Conversation

tehnerd
Contributor

@tehnerd tehnerd commented Jan 6, 2021

Signed-off-by: Nikita V. Shirokov tehnerd@tehnerd.com

For an explanation of how to fill out the fields, please see the relevant section
in PULL_REQUESTS.md

Commit Message:
This is an attempt to address the issue in #14121.
tl;dr: some HTTP/2 backends do not send a "content-length" header in replies, since the HTTP/2 spec carries the same information as part of the frame (unfortunately there seems to be no way to pass this value from the frame to the filter), and transfer-encoding=chunked (before this diff, having that header/encoding was a prerequisite when content-length was not defined) was removed from the HTTP/2 spec.
As discussed in the issue itself, if there is no content-length header we now try to gzip by default.
This new behavior is controlled by the runtime-guarded feature envoy.reloadable_features.enable_compression_without_chunked_header
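For reference, a runtime guard like this can typically be flipped without recompiling via the bootstrap runtime configuration; a minimal sketch (the layer name here is illustrative, not from this PR) that reverts to the old behavior might look like:

```yaml
layered_runtime:
  layers:
  - name: disable_new_compression_behavior  # illustrative layer name
    static_layer:
      envoy.reloadable_features.enable_compression_without_chunked_header: false
```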

Additional Description:
This is a rework of PR #14405. Compared to the original PR, this one properly handles WebSocket (or any other) upgrade requests, and includes:

  1. additional is(H2)Upgrade checks in encode/decodeHeaders functions in compressor filter
  2. websockets integration tests now contains compression filter config

Risk Level:
Low
Testing:
Same tests as in the original PR #14405, plus the WebSocket integration tests now contain the compression filter and pass (without the isUpgrade fix and with the compression filter, these integration tests were failing with:

 Actual: 0x7fff5ae54678 (of type Envoy::Http::TestResponseHeaderMapImpl*), 
==============================Expected header map:==============================
':status', '101'
'connection', 'upgrade'
'upgrade', 'websocket'
=======================is not equal to actual header map:=======================
':status', '101'
'connection', 'upgrade'
'upgrade', 'websocket'
'vary', 'Accept-Encoding'
================================================================================

Docs Changes:
Release Notes:
Platform Specific Features:

…ransfer encoding

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@tehnerd
Contributor Author

tehnerd commented Jan 6, 2021

cc @rgs1 @alyssawilk

name: test
typed_config:
"@type": type.googleapis.com/envoy.extensions.compression.gzip.compressor.v3.Gzip
)EOF");
Member

Would it make sense to add a similar change to the http2 integration test? To make sure compression actually works for h2?

Contributor

While my intuition is that it shouldn't be done, I don't see anything in either spec about the legality of compressing HTTP/1 CONNECT payloads, which makes me wonder if it's legal, and if it is, whether we should support it by default.

https://tools.ietf.org/html/rfc7540#section-8.3
https://tools.ietf.org/html/rfc8441

@PiotrSikora @lizan thoughts?

Contributor

actually @rgs1 since you reported the issue in the first place, do you know if it broke WS on HTTP/1, HTTP/2 or both? If it's not supported globally it'd be a useful data point.

Member

on HTTP/1 (downstream & upstream)

Contributor

Fair. Either way I agree that integration tests for HTTP/2 would be good to have.

Contributor

Transfer-Encoding is not allowed on CONNECT responses. RFC 7230 s3.3.1 states:

A server MUST NOT send a Transfer-Encoding header field in any 2xx (Successful) response to a CONNECT request

Contributor Author

Sorry, I'm a little bit lost here: are we talking about an H/2 integration test for WS, or something else? If the former, I think the initializer (where this new compression config is added) is protocol agnostic and most likely used in all of the integration tests (and I assume we do have H/2 WS).

Contributor

Yeah, if it runs for HTTP/1 and HTTP/2 you're fine here.
Unfortunately I don't think including extensions in core tests is legal, so I think you're going to have to add a similar test to the compressor lib.

Contributor

@alyssawilk alyssawilk left a comment

Thanks for jumping on this fix so fast!

@@ -64,6 +64,7 @@ Removed Config or Runtime

New Features
------------
* compression: added new runtime guard `envoy.reloadable_features.enable_compression_without_chunked_header` (enabled by default) to control compression when no transfer-encoding=chunked header exists (Envoy now compresses in such cases), to account for HTTP/2 connections, where that transfer-encoding was removed.
Contributor

compression: extended compression support to allow compressing when the content length header is not present. This behavior may be temporarily reverted by setting envoy.reloadable_features.enable_compression_without_chunked_header to false.

@@ -495,6 +497,10 @@ bool CompressorFilter::isEtagAllowed(Http::ResponseHeaderMap& headers) const {

bool CompressorFilterConfig::DirectionConfig::isMinimumContentLength(
const Http::RequestOrResponseHeaderMap& headers) const {
if (StringUtil::caseFindToken(headers.getTransferEncodingValue(), ",",
Contributor

Can this ever be true? I think the transfer-encoding headers are removed before the filter ever sees them. It would also result in HTTP/1.1 and HTTP/2 responses without content length being treated differently. I think this was buggy prior to your change, but I'd be inclined to fix and test while we're in here.

Contributor Author

Why would h1 and h2 be different? (In this diff we don't distinguish h1 vs h2 in the compressor; the runtime feature just either permits or denies compression if there is no content-length header.) And yes, it seems you are right: at least manual testing shows there is no compression if only transfer-encoding=chunked is present. Do you propose somehow passing transfer-encoding into the filter, or just removing this condition? I think the original idea was that if it is chunked, we assume the response is big enough to be worth compressing (the whole idea behind isMinimumContentLength, from my understanding, is a performance optimization: don't compress something that is really small), so from my point of view we can just omit this logic (unless @rojkov has other thoughts on this).

Contributor

So HTTP/1 on the wire has a transfer-encoding: chunked header for (most) responses without a content length. The alternative is framing by connection close.
HTTP/2 disallows the transfer-encoding header, though data is implicitly chunked up by HTTP/2 data frames.

In Envoy, the HCM removes any transfer-encoding header before this code is called. If you add an integration test with T-E:chunked, and an assert here, I believe it can never trigger.
I believe the code needs to simply have one behavior if content length is present, and one if content length is absent, and not rely on the transfer-encoding header.
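The rule described above (branch only on the presence of content-length, never on transfer-encoding) can be sketched as follows. This is an illustrative standalone function, not Envoy's actual isMinimumContentLength signature; the compress_without_content_length flag stands in for the runtime guard:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Illustrative sketch: decide compressibility from content-length alone.
// An absent content-length (HTTP/2 responses, HTTP/1.1 chunked bodies)
// is compressible only when the runtime-guarded behavior is enabled.
bool isMinimumContentLength(std::optional<uint64_t> content_length,
                            uint64_t min_length,
                            bool compress_without_content_length) {
  if (!content_length.has_value()) {
    return compress_without_content_length;
  }
  return *content_length >= min_length;
}
```

With the guard disabled, a response without content-length is never considered big enough, which matches the pre-PR behavior once the chunked check is gone.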

Member

This code is quite old. I guess the original thinking was indeed to assume the payload is big enough if transfer-encoding: chunked is present.

In Envoy, the HCM removes any transfer encoding header before this code is called.

Is it about the chunked part only, or is the header fully removed? If any t-e header is removed in the HCM then CompressorFilter::isTransferEncodingAllowed() also becomes useless.

Contributor

Hm, I thought it was removed early but looks like I may be incorrect:

https://github.com/envoyproxy/envoy/blob/master/source/common/http/conn_manager_utility.cc#L398
response_headers.removeTransferEncoding();

called here:
https://github.com/envoyproxy/envoy/blob/master/source/common/http/conn_manager_impl.cc#L1385

which looks like it's right before it's sent to the codec. Either way a test would be good to make sure we're consistent between HTTP/1 and HTTP/2 without content length, SG?

Contributor Author

Which tests are we talking about? The WS/upgrade ones (to make sure that h1 and h2 are the same) or something else?

!hasCacheControlNoTransform(headers) && isEtagAllowed(headers) &&
!headers.getInline(response_content_encoding_handle.handle());
const bool isCompressible =
isEnabledAndContentLengthBigEnough && !Http::Utility::isUpgrade(headers) &&
Contributor

shouldn't the new upgrade also be runtime guarded? maybe have a temporary bool for checking this?

Contributor Author

Sure, will add a runtime check here as well.

name: test
typed_config:
"@type": type.googleapis.com/envoy.extensions.compression.gzip.compressor.v3.Gzip
)EOF");
Contributor

Yeah, if it runs for HTTP/1 and HTTP/2 you're fine here.
Unfortunately I don't think including extensions in core tests is legal, so I think you're going to have to add a similar test to the compressor lib.

@@ -26,6 +26,9 @@ envoy_cc_extension(
name = "config",
srcs = ["config.cc"],
hdrs = ["config.h"],
extra_visibility = [
Contributor

Unfortunately the reason this wasn't visible is that we don't allow Envoy core code to depend on extensions.

Contributor Author

Just for reference (mostly for myself, so it would be easier to find in the future): this should be reworked because of #9953.

Contributor

Yeah, it's a relatively new policy. Before we occasionally broke people's builds if they opted out of certain extensions, then fixed it after they called it out. Better to not break in the first place :-P

If there's a place we could call that out where you would have seen it, please improve the docs in this area. It's totally intuitive, if something fails to build due to visibility, to just fix it. Normally we put such checks in the format scripts, but unfortunately we can't with this one until we clean up all the legacy visibility rules.

Contributor Author

@tehnerd tehnerd Jan 7, 2021

Changed visibility to test/extensions. I hope this is ok (for test/extensions to depend on source/extensions)?

Contributor

why is it needed? I think we also don't want cross-extension dependencies, so explicit lists would be better. If that's onerous we could at least say //test/extensions/filters/http:subpackages ? I'll defer to @lizan if he has thoughts/preferences.

Contributor Author

Yeah, looks like I can still use this target in tests even without explicit "extra_visibility". Will remove.

1. isUpgrade logic is now runtime guarded as well
2. renamed the runtime guard to enable_compression_without_content_length_header, as it now has nothing to do with chunked encoding
3. removed the transfer-encoding=chunked logic (we stop relying on it)
4. copy-pasted the websocket integration tests to the compressor (only the parts related to upgrade)

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@@ -0,0 +1,248 @@
#include "test/extensions/filters/http/common/compressor/websocket_with_compressor_integration_test.h"
Contributor Author

This (and the .h) is copy/pasted from test/integration/websocket_integration_test (I've just removed a few things that are not related to upgrade, renamed classes/structs by adding WithCompressor, and added the compressor configuration itself).

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@tehnerd
Contributor Author

tehnerd commented Jan 14, 2021

@alyssawilk @rojkov I've somehow gotten lost in the discussion. What does it take for this to move forward? There was some discussion about integration tests, but I missed part of it: were we talking about the WS-related tests (which are done), or were there other requirements (e.g. an integration test for transfer-encoding, etc.)?

@alyssawilk
Contributor

alyssawilk commented Jan 14, 2021 via email

@alyssawilk
Contributor

alyssawilk commented Jan 14, 2021 via email

@tehnerd
Contributor Author

tehnerd commented Jan 14, 2021

I think I mainly want an integration test that WS doesn't get zipped for HTTP/1 or HTTP/2, and we do a manual check that the prior PR would fail that test, to make sure we're regression testing the old bug. If we have that test, I think you can just do a master merge and make sure CI is happy, and I'm up for doing another pass!

This one is done. As I mentioned in the "Testing" part of the PR, without the new checks the WS integration test fails, since the compression filter starts adding a "vary: Accept-Encoding" header.

@tehnerd
Contributor Author

tehnerd commented Jan 14, 2021

Oh, sorry, also making sure we integration test that both HTTP/1 and HTTP2 handle compressing when content length is absent (i.e. rely on the absence of the C-L header, rather than the presence of the T-E header) On Thu, Jan 14, 2021 at 2:53 PM Alyssa (Rzeszutek) Wilk alyssar@google.com wrote:

The T-E:chunked logic was removed from the compression filter, so the current logic is:

  1. check if c-l is present; if so, check whether it is bigger than the configured minimum value
  2. if not, compress by default.

plus another check which makes sure that the request is not an "upgrade" (by reusing Utility::isUpgrade).

So the current version of the filter is HTTP-version agnostic (as t-e was removed).
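The upgrade check mentioned above can be approximated like this; a rough standalone sketch in the spirit of Http::Utility::isUpgrade, not Envoy's actual implementation (real header matching is case-insensitive and token-based, and lowercase keys are assumed here):

```cpp
#include <cassert>
#include <map>
#include <string>

// Rough sketch: a response is treated as an upgrade (and must not be
// compressed) when it has an "upgrade" header and its "connection" header
// contains the "upgrade" token, e.g. a WebSocket 101 response.
bool isUpgradeResponse(const std::map<std::string, std::string>& headers) {
  auto upgrade = headers.find("upgrade");
  auto connection = headers.find("connection");
  return upgrade != headers.end() && connection != headers.end() &&
         connection->second.find("upgrade") != std::string::npos;
}
```

A response that passes this check is left untouched, which is exactly what the WebSocket integration tests verify (no added "vary" header on the 101 response).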

Going to do a merge with master and update the diff.

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@@ -34,6 +34,45 @@ Removed Config or Runtime

New Features
------------
* compression: extended compression support to allow compressing when the content length header is not present. This behavior may be temporarily reverted by setting `envoy.reloadable_features.enable_compression_without_content_length_header` to false.
Contributor Author

Hmm, looks like a bad automerge. Will fix in a few.

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
std::make_tuple("transfer-encoding", "test\t, chunked\t", false),
std::make_tuple("x-garbage", "no_value", true)));

TEST_P(IsTransferEncodingAllowedTest, Validate) {
Member

Why is this test removed? I think we are still interested in testing that already compressed payloads are not compressed twice.

Contributor Author

will move back

Contributor Author

Wait, no. These are transfer-encoding tests; already-encoded content would have "content-encoding" headers, not transfer-encoding. That's why I removed it.

Member

The upstream can set either "content-encoding" (encoding as a property of the resource) or "transfer-encoding" (encoding as a property of the message). In both cases the payload would be compressed, but in the latter case "content-encoding" would be missing.
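On the wire, the two cases described above look like this (illustrative responses; both carry a gzip-compressed body, but only the first advertises it via content-encoding):

```
# Encoding as a property of the resource:
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 312

# Encoding as a property of the message (no content-encoding):
HTTP/1.1 200 OK
transfer-encoding: gzip, chunked
```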

Base automatically changed from master to main January 15, 2021 23:02
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Contributor

@alyssawilk alyssawilk left a comment

Cool, looking good!

@@ -26,6 +26,9 @@ envoy_cc_extension(
name = "config",
srcs = ["config.cc"],
hdrs = ["config.h"],
extra_visibility = [
Contributor

why is it needed? I think we also don't want cross-extension dependencies, so explicit lists would be better. If that's onerous we could at least say //test/extensions/filters/http:subpackages ? I'll defer to @lizan if he has thoughts/preferences.

if (!end_stream && request_config.compressionEnabled() &&
// temp variable to preserve old behavior w/ http upgrade. should be removed when
// enable_compression_without_content_length_header runtime variable would be removed
const bool allowUpgrade =
Contributor

-> is_upgrade_allowed here and below?

Http::Headers::get().TransferEncodingValues.Chunked);
if (Runtime::runtimeFeatureEnabled(
"envoy.reloadable_features.enable_compression_without_content_length_header")) {
// returning true to account for HTTP/2 where content-length is optional
Contributor

and HTTP/1.1 chunked bodies, no?

Contributor

was this one overlooked?

Contributor Author

Could you please elaborate on the last comment? With this new diff we would compress even if there is no "transfer-encoding=chunked" header (if the runtime flag is set to true), so there is no point in a separate check for chunked, because we would have logic like:

if true || chunked {
  return true;
}

and this check would always be true. If the runtime flag is set to false, we fall back to the previous behavior, i.e. "return (chunked in headers?)".

Contributor

Yes, true. Sorry, I meant to make the point that the comment implied that the return only applies to HTTP/2 but it applies to HTTP/1.1 without content length too. How about something like

Return true to ignore the minimum length configuration if no content length is specified in the headers?

)EOF");
doRequestCompression({{":method", "post"}}, false);
Http::TestResponseHeaderMapImpl headers{{":status", "200"}};
doResponseNoCompression(headers);
Contributor

your call but I'd be inclined to combine the request and response to
CompressNoContentLength

codec_client_->close();
}

TEST_P(WebsocketWithCompressorIntegrationTest, RouteSpecificUpgrade) {
Contributor

I think we can nix this test.


// Technically not a websocket test, but verifies normal upgrades have parity
// with websocket upgrades
TEST_P(WebsocketWithCompressorIntegrationTest, NonWebsocketUpgrade) {
Contributor

can we also snag

ProxyingConnectIntegrationTest, ProxyConnect

to make sure CONNECT requests and responses are handled correctly too?

1. moving back the TE:chunked logic (to make sure that with the runtime
   flag disabled there is no change)
2. collapsing the UTs for request/response into one
3. removing the route-specific upgrade test from the integration tests
4. removed extra_visibility for gzip

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@tehnerd
Contributor Author

tehnerd commented Jan 21, 2021

Have not yet added the integration tests for CONNECT as requested by @alyssawilk, so probably no need to review right now.

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@tehnerd
Contributor Author

tehnerd commented Jan 27, 2021

OK, I've added an analog of (ProxyingConnectIntegrationTest, ProxyConnect) (the difference is that the compression filter is also added in initialize). I think I've addressed all the comments and this is ready for review.

@repokitteh-read-only

Retrying Azure Pipelines:
Retried failed jobs in: envoy-presubmit

🐱

Caused by: a #14584 (comment) was created by @rojkov.


@tehnerd
Contributor Author

tehnerd commented Feb 4, 2021

hmm. presubmit failing with:

TESTS FAILED:
run_examples@ ci/verify_examples.sh :35 (./websocket)
  > main@ ci/verify_examples.sh :40

However, my diff does not touch anything websocket related (in the core part; it only handles upgrade in the compressor), and it looks like this filter is not even enabled in the examples:

tehnerd@amanda:~/projects/envoy$ grep -R compres ./examples/websocket/
tehnerd@amanda:~/projects/envoy$ grep -R compres ./examples/
tehnerd@amanda:~/projects/envoy$ grep -R gzip ./examples/
tehnerd@amanda:~/projects/envoy$ grep -R compres ./examples/

So no idea why it is failing.

@rojkov
Member

rojkov commented Feb 5, 2021

/retest

@repokitteh-read-only

Retrying Azure Pipelines:
Retried failed jobs in: envoy-presubmit

🐱

Caused by: a #14584 (comment) was created by @rojkov.


@tehnerd
Contributor Author

tehnerd commented Feb 8, 2021

@rojkov @alyssawilk does this require anything else from my side, or are we just waiting for someone to merge this diff in?

@tehnerd
Contributor Author

tehnerd commented Feb 8, 2021

Oh, looks like upstream changed and a new merge is required.

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@tehnerd
Contributor Author

tehnerd commented Feb 8, 2021

presubmit failed with

ERROR: package contains errors: source/extensions/compression/brotli/decompressor
ERROR: error loading package 'source/extensions/compression/brotli/compressor': Package 'source/extensions/compression/brotli/compressor' contains errors

@alyssawilk
Contributor

I signed off, but for most PRs we have two maintainers review. I've tagged @snowp, who was out late last week but should be able to do a pass this week.

@tehnerd
Contributor Author

tehnerd commented Feb 9, 2021

Thanks for the explanation, @alyssawilk. Question: if GitHub marks the PR as having a merge conflict, should I update it before the second maintainer looks? (Asking because it looks like GitHub removes the "reviewed" tag of the first person if the PR is updated.)

@alyssawilk
Contributor

It's always worth resolving merge conflicts - even if it removes the LGTM stamp, generally we operate on a maintainer signing off, not the actual stamp. If there are significant changes between the first and second reviewer signing off, sometimes reviewer #2 will tag reviewer #1 towards the end, but it's not the norm.

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
@alyssawilk
Contributor

Try one more merge to see if it fixes coverage, and ping @snowp.

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Contributor

@snowp snowp left a comment

Thanks this seems reasonable, just a few comments

Comment on lines +160 to +163
const bool is_not_upgrade =
!Http::Utility::isUpgrade(headers) ||
!Runtime::runtimeFeatureEnabled(
"envoy.reloadable_features.enable_compression_without_content_length_header");
Contributor

Seems odd to have the value of is_not_upgrade be dependent on the runtime flag? Can we rename the variable to make a bit more sense?

Contributor Author

The idea was that we would drop the runtime flag eventually and won't need to rename. If you feel strongly about it, I will try to rework this part.

Comment on lines +230 to +233
const bool is_not_upgrade =
!Http::Utility::isUpgrade(headers) ||
!Runtime::runtimeFeatureEnabled(
"envoy.reloadable_features.enable_compression_without_content_length_header");
Contributor

Same here. Alternatively split it into two different variables

@@ -0,0 +1,297 @@
#include "test/extensions/filters/http/common/compressor/compressor_integration_tests.h"
Contributor

I would love a high-level explanation of what we're testing in this file. I think it's that upgrades work through the compressor filter, but it's not clear without reading a lot of code what kind of interactions (if any?) we're trying to validate.

Contributor Author

ok. will add

@snowp snowp added the waiting label Mar 2, 2021
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Contributor

@snowp snowp left a comment

Thanks for iterating on this! Seems like CI is broken

@@ -8,6 +8,10 @@

namespace Envoy {

/*
Integration test to make sure that new logic, which does not care about content-length header
Contributor

This comment reads like a PR comment, not a code comment. Can we phrase this in a way that will age better?

For example, Test verifying that we don't add any additional headers when the upstream doesn't respond with a content-length header or something to that effect

@@ -34,6 +38,10 @@ class WebsocketWithCompressorIntegrationTest : public HttpProtocolIntegrationTes
IntegrationStreamDecoderPtr response_;
};

/*
Integration test to make sure that new logic, which does not care about content-length header
Contributor

Same here, the "new logic" is only new right now

@snowp snowp added the waiting label Mar 9, 2021
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Contributor

@snowp snowp left a comment

Thanks!

@snowp snowp merged commit 099f41e into envoyproxy:main Mar 11, 2021
auni53 pushed a commit to auni53/envoy that referenced this pull request Mar 19, 2021
…xy#14584)

This is an attempt to address the issue in envoyproxy#14121

tl;dr: some HTTP/2 backends do not send a "content-length" header in replies, since the HTTP/2 spec carries the same information as part of the frame (unfortunately there seems to be no way to pass this value from the frame to the filter), and transfer-encoding=chunked (before this diff, having that header/encoding was a prerequisite when content-length was not defined) was removed from the HTTP/2 spec.

As discussed in the issue itself, if there is no content-length header we now try to gzip by default.

This new behavior is controlled by the runtime-guarded feature envoy.reloadable_features.enable_compression_without_chunked_header

Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Auni Ahsan <auni@google.com>