
Adding priority levels to Envoy endpoints (not yet used for LB) #2088

Merged
merged 15 commits into envoyproxy:master
Nov 30, 2017

Conversation

alyssawilk
Contributor

@alyssawilk alyssawilk commented Nov 21, 2017

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>

Description:
Reworking the organization of endpoints from HostSets-with-localities to a tree with PrioritySets at the top and HostSets as nodes.
This allows the failover characteristics requested in #1929, as well as satisfying the defunct #1354.
This will be followed up by a PR (or set of PRs) converting the load balancers over to use the new priority logic.

API Changes:
envoyproxy/data-plane-api#259

Risk Level: High
It's a pretty big refactor, and there were some subtle gotchas (like stats initially updating before the backends).

Testing:
Current tests passing.
Health checker unit tests verify all hosts are health checked across priority groups.
Outlier detection unit tests do the same for outlier detection.
upstream_impl_test.cc has unit tests for basic priority set operations.
The load stats integration test is extended for priorities. It verifies stats reporting insofar as there's no way to direct load yet, but the localities show up as expected.

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
@alyssawilk
Contributor Author

@htuch @mattklein123 can you take a look? If the basic approach is good I'll fix up tests and ping back

@mattklein123
Member

Yes I will look this morning. Sorry for the delay.

Member

@mattklein123 mattklein123 left a comment

@alyssawilk I didn't review in detail but did a quick skim. At an interface level this makes sense to me so I would say let's complete this PR. I think it's good to split the LB changes into another PR if that is possible since this is going to be complicated to do a full review on.

@@ -42,7 +42,7 @@ EdsClusterImpl::EdsClusterImpl(const envoy::api::v2::Cluster& cluster, Runtime::
void EdsClusterImpl::startPreInit() { subscription_->start({cluster_name_}, *this); }

void EdsClusterImpl::onConfigUpdate(const ResourceVector& resources) {
std::vector<HostSharedPtr> new_hosts;
std::vector<std::vector<HostSharedPtr>> new_hosts(4); // Large enough to avoid most resizes.
Contributor Author

Fixed by changing around the memory model (which I needed to do in case there were priority levels with no updates)

priority_set_.addMemberUpdateCb([this](uint32_t priority,
const std::vector<HostSharedPtr>& hosts_added,
const std::vector<HostSharedPtr>& hosts_removed) {
// FIXME(alyssawilk) talk to Matt about how to change stats in prod.
Member

IMO it's probably better to just keep the current stats, and also have per-priority stats? (Perhaps only per-priority if there are > 1 priorities).

Contributor Author

SG. I'll leave these as-is and we can add per-priority when/if folks want 'em

We may be able to get away with just having priority in LB reports, and I've added a TODO for that

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
@alyssawilk alyssawilk changed the title WIP: Adding priority levels to Envoy endpoints (not yet used for LB) Adding priority levels to Envoy endpoints (not yet used for LB) Nov 28, 2017
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Member

@htuch htuch left a comment

Some early feedback.

@@ -119,19 +119,22 @@ class Host : virtual public HostDescription {
typedef std::shared_ptr<const Host> HostConstSharedPtr;

/**
* Base host set interface. This is used both for clusters, as well as per thread/worker host sets
* used during routing/forwarding.
* Base host set interface. This contains all of the endpoints for a given LocalityLbEndpoints
Member

Nit: the war on two spaces continues..

virtual void updateHosts(
std::shared_ptr<const std::vector<HostSharedPtr>> hosts,
std::shared_ptr<const std::vector<HostSharedPtr>> healthy_hosts,
std::shared_ptr<const std::vector<std::vector<HostSharedPtr>>> hosts_per_locality,
Member

I think (?) some typedefs might make this more readable.

*
* @param hosts supplies the (usually new) list of hosts in the host set.
* @param healthy_hosts supplies the subset of hosts which are healthy.
* @param hosts_per_locality supplies the hosts subdivided by locality.
Member

At a high level, without diving into implementation, it seems that it's redundant to track and supply both (healthy) hosts and (healthy) hosts per locality, one of these (presumably the latter) should be sufficient. I know this is an extant thing, just wanted to start a thread to discuss this with you and @mattklein123 .

Contributor Author

Agree, but as it turns out we track locality inconsistently (I think the rule is only if there are multiple localities and one is local and it's configured?) so it's quite non-trivial to combine the two.

I wonder if the better move might be to always just have the "locality style" grouping, and if we don't care to pick by locality it's just one big group. I know the healthy host list is used for some things (mainly size for determining if it's a global panic, but size could be latched), but I can't remember if the full host list is ever used when splitting up by locality.

Member

Previously this was hidden inside the implementation and is split out like this for perf reasons (basically compute once and then use everywhere). @alyssawilk do we need to expose this at the interface layer? I agree with @htuch at the interface level it's not great.

Contributor Author

It's that, or doing a bunch of casts, since the generic priority set impl is handing out containers of HostSets, not HostSetImpls. And because they're complex types (it's super handy to iterate over the vector) I believe we can't just override.

If we prefer casts in upstream_impl.cc I'm happy to do that.

Member

I haven't thought through all the details but if you think this is the way to go it's fine with me. I can take a more complete look through this tomorrow if that would help. Sorry for the delay.

/**
* @return the host sets for this priority. If no such host set exists, it will be created.
*/
virtual HostSet& getHostSet(uint32_t priority) PURE;
Member

Do we ever remove hosts sets at a given priority?

Contributor Author

Nope. I thought that was more of an implementation detail, though, so that's commented in upstream_impl.h.

* Install a callback that will be invoked when any of the HostSets in the PrioritySet changes.
* This includes when a new HostSet is created.
*
* @param callback supplies the callback to invoke.
Member

Nit: document @return

* Returns the host sets for this priority set, ordered by priority.
* The first element in the vector is the host set for priority 0, and so on.
*
* @return the host sets for this priority set.
Member

Nit: include return type in @return.

Contributor Author

Done.
We're fairly inconsistent on this point and I think it makes it a lot uglier but shrug

@@ -59,20 +60,42 @@ void EdsClusterImpl::onConfigUpdate(const ResourceVector& resources) {
cluster_load_assignment.cluster_name()));
}
for (const auto& locality_lb_endpoint : cluster_load_assignment.endpoints()) {
uint32_t priority = locality_lb_endpoint.priority().value();
Member

Nit: const

@@ -59,20 +60,42 @@ void EdsClusterImpl::onConfigUpdate(const ResourceVector& resources) {
cluster_load_assignment.cluster_name()));
}
for (const auto& locality_lb_endpoint : cluster_load_assignment.endpoints()) {
uint32_t priority = locality_lb_endpoint.priority().value();
if (new_hosts.size() <= priority) {
new_hosts.resize(priority + 1);
Member

Would it make sense to make this structure std::map, i.e. ordered, to support sparse priority use? I'm thinking not, since probably folks will use dense priorities in practice, but throwing it out there.

Contributor Author

Yeah, as commented in the EDS proto, you can't skip priorities. We don't have config validation, so I just make sure the code works regardless, but since skipping doesn't make sense, reorganizing the data structures to optimize for it seems silly.

rq_success += host->stats().rq_success_.latch();
rq_error += host->stats().rq_error_.latch();
rq_active += host->stats().rq_active_.value();
for (auto& host_set : cluster.prioritySet().hostSetsPerPriority()) {
Member

I think we should be merging across all priorities for a given locality here.

Contributor Author

Really? I thought we should report stats per-priority (and I have a TODO for that somewhere).
If you prefer merging, comment and I'll remove the TODO and merge.

Member

As discussed IRL, it would provide greatest flexibility to do it per-priority as you plan. We should do something in the interim to avoid surprises here, since the code as implemented is just going to provide multiple reports when priorities are used for the same locality, with no way to distinguish them (other than implied ordering). I'd suggest merging for now and adding priorities later.

Contributor Author

We could do that. Alternately, if you don't object, I can just report load only for priority group 0, with a TODO to add the stats reporting before I fix the load balancers.

Without the load balancers being priority-aware, it's impossible to get load at other priorities, and that's a faster workaround until we do the API change and real load reporting (which is trivial).

Member

Yep, priority 0 only and a TODO is fine for this PR, thanks.

@@ -71,7 +71,7 @@ class SubsetLoadBalancer : public LoadBalancer, Logger::Loggable<Logger::Id::ups
LoadBalancerPtr lb_;
};

// Implements HostSet::MemberUpdateCb
// called by HostSet::MemberUpdateCb
Member

Nit: s/called/Called/

@@ -94,7 +94,9 @@ class SubsetLoadBalancer : public LoadBalancer, Logger::Loggable<Logger::Id::ups

SubsetMetadata extractSubsetMetadata(const std::set<std::string>& subset_keys, const Host& host);

const HostSetImpl& emptyHostSet() { CONSTRUCT_ON_FIRST_USE(HostSetImpl); };
const HostSetImpl& emptyHostSet() {
CONSTRUCT_ON_FIRST_USE(HostSetImpl, original_host_set_.priority());
Member

Maybe add a comment stating that the host set priority will never change, hence why this is valid to do.

Contributor Author

Turns out this function isn't used

if (host_sets_.size() < priority + 1) {
for (size_t i = host_sets_.size(); i <= priority; ++i) {
host_sets_.push_back(HostSetPtr{new HostSetImpl(i)});
host_sets_[i]->addMemberUpdateCb([this](uint32_t priority,
Member

This is another place it seems that std::map implementation might be cleaner, but not sure.

Contributor Author

I think map will be uglier when we start selecting priority levels, because you're not guaranteed in-order priorities or existence. A vector with empty sets makes more sense to me.

Also, one is not supposed to skip priorities; we just don't have config sanitization yet.

Member

FWIW, I don't think it's broken to skip priorities, in the same sense that it's not broken to assign processes to different priority levels in a scheduler where there is no contiguity. If it simplifies things I guess it's fine.

Member

proto validation will never do this type of checking for us. I would be fine having a check that all priorities need to be sequential and do this check in the EDS response. If we did this, we could also pre-create the host sets. I actually wonder if we really need the getHostSet() function which auto creates. It seems less error prone to just operate on the returned vector? Thoughts?

Contributor Author

I believe we can get rid of getHostSet from the API pretty easily, so that's done.

At this point we could replace getHostSet with
if (getHostSetsPerPriority().size() < priority) { createHostSet(priority); } but that seems suboptimal :-P
I'll rename it to getOrCreateHostSet to make the create explicit. SG?

@htuch
Member

htuch commented Nov 28, 2017

This looks rad overall.

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Member

@htuch htuch left a comment

LGTM, the only non-trivial thing here on my end is to do the load report merge (or some other mitigation to avoid getting multiple reports for the same locality), plus @mattklein123 comment on interface.



ASSERT_EQ(2, cluster_->prioritySet().hostSetsPerPriority().size());
ASSERT_EQ(2, cluster_->prioritySet().getHostSet(0).hosts().size());
ASSERT_EQ(1, cluster_->prioritySet().getHostSet(1).hosts().size());
Member

EXPECT_EQ vs. ASSERT_EQ here?

ASSERT_EQ(4, cluster_->prioritySet().getHostSet(3).hosts().size());
}

TEST_F(EdsTest, PriorityAndLocality) {
Member

Please describe what this test does in a one liner here.

EXPECT_EQ(2, second_hosts_per_locality[1].size());
}

add_hosts_to_locality_and_priority("oceania", "koala", "eucalyptus", 0, 3);
Member

Please add comments as in the above TEST in the body of this one.

respond(0, "200", false, true);
respond(1, "200", false, true);
EXPECT_TRUE(cluster_->prioritySet().getMockHostSet(0)->hosts_[0]->healthy());
EXPECT_TRUE(cluster_->prioritySet().getMockHostSet(1)->hosts_[0]->healthy());
Member

These two tests are very similar, could refactor but up to you (if there was a 3rd copy I'd be more inclined to do it). Also, please add a comment line above each test (although it's fairly obvious here from the name).

@@ -638,6 +647,54 @@ TEST(StaticClusterImplTest, SourceAddressPriority) {
}
}

TEST(PrioritySet, Extend) {
Member

Comment.

uint32_t changes = 0;
uint32_t last_priority = 0;
priority_set.addMemberUpdateCb([&](uint32_t priority, const std::vector<HostSharedPtr>&,
const std::vector<HostSharedPtr>&) -> void {
Member

Nit: -> void is redundant.

Member

@mattklein123 mattklein123 left a comment

I just looked through in more detail. In general this looks great. Beyond @htuch comments, my main question is whether we can simplify slightly and get rid of getHostSet(i) function?

@@ -264,11 +264,13 @@ ClusterManagerStats ClusterManagerImpl::generateStats(Stats::Scope& scope) {
}

void ClusterManagerImpl::postInitializeCluster(Cluster& cluster) {
if (cluster.hosts().empty()) {
return;
for (size_t i = 0; i < cluster.prioritySet().hostSetsPerPriority().size(); ++i) {
Member

nit: Any reason to not just iterate directly through the returned vector here? Why do we need to get the size and then call get each time? (Same question if this happens elsewhere)

Contributor Author

Oh yeah, now that host set latches its own priority I can rewrite these.


@@ -73,7 +73,7 @@ class SubsetLoadBalancer : public LoadBalancer, Logger::Loggable<Logger::Id::ups
LoadBalancerPtr lb_;
};

// Implements HostSet::MemberUpdateCb
// Called by HostSet::MemberUpdateCb
Member

nit: del "Called by" to match rest of code

Contributor Author

That's the standard when we're implementing an interface, but is it when we're explaining the caller of a function?

Member

sorry nevermind!

info_->stats().membership_change_.inc();
}

uint32_t healthy_hosts = 0;
Member

maybe add a TODO here for per-priority stats?

Contributor Author

Sanity check: do you believe anyone wants them yet?
On more thought, I think we can get away with remotely reported load stats, and I'd rather not add them unless someone wants them.

Member

I can't imagine that anyone that actually uses priority wouldn't want broken out stats, but given that we won't be using this at Lyft right now we can wait until someone requests. :)

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Member

@mattklein123 mattklein123 left a comment

LGTM. Thanks for the interface changes. I didn't review the tests, so will defer to @htuch or someone else for final approval.

@@ -602,43 +610,50 @@ ClusterManagerImpl::ThreadLocalClusterManagerImpl::ClusterEntry::ClusterEntry(
parent.parent_.local_info_, parent.parent_, parent.parent_.runtime_,
parent.parent_.random_,
Router::ShadowWriterPtr{new Router::ShadowWriterImpl(parent.parent_)}) {

// TODO(alyssawilk) make lb priority-set aware in a follow-up patch
Member

nit: period end of sentence.

HostVectorSharedPtr current_hosts_copy(new std::vector<HostSharedPtr>(hosts()));
for (size_t i = 0; i < new_hosts.size(); ++i) {
if (new_hosts[i] != nullptr) {
UpdateHostsPerLocality(priority_set_.getOrCreateHostSet(i), *new_hosts[i]);
Member

nit: updateHostsPerLocality

@@ -312,6 +347,9 @@ class ClusterImplBase : public Cluster,
const LocalInfo::LocalInfo& local_info,
Outlier::EventLoggerSharedPtr outlier_event_logger,
bool added_via_api);
// From Cluster
Member

nit: // Upstream::Cluster

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
EXPECT_EQ(4, cluster_->prioritySet().hostSetsPerPriority()[3]->hosts().size());
}

// Set up an EDS config with multiple priorities and localitie and make sure
Member

Nit: localities (not worth blocking merge).

@htuch htuch merged commit d7f07f1 into envoyproxy:master Nov 30, 2017
@alyssawilk
Contributor Author

alyssawilk commented Dec 4, 2017 via email

@alyssawilk alyssawilk deleted the priorities branch December 13, 2017 20:54
Shikugawa pushed a commit to Shikugawa/envoy that referenced this pull request Mar 28, 2020
jpsim pushed a commit that referenced this pull request Nov 28, 2022
Description: Adds public API and support for providing a list of domains to which H2 connections will be established *without* protocol negotiation (i.e. ALPN). See Envoy docs for the domain name patterns supported:
https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-msg-config-route-v3-route

Risk Level: Low
Testing: Unit

Co-authored-by: Mike Schore <mike.schore@gmail.com>
Signed-off-by: Mike Schore <mike.schore@gmail.com>
Signed-off-by: JP Simard <jp@jpsim.com>
jpsim pushed a commit that referenced this pull request Nov 29, 2022
Description: Adds public API and support for providing a list of domains to which H2 connections will be established *without* protocol negotiation (i.e. ALPN). See Envoy docs for the domain name patterns supported:
https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-msg-config-route-v3-route

Risk Level: Low
Testing: Unit

Co-authored-by: Mike Schore <mike.schore@gmail.com>
Signed-off-by: Mike Schore <mike.schore@gmail.com>
Signed-off-by: JP Simard <jp@jpsim.com>