upstream: Add ability to disable host selection during panic #8024

Merged · 12 commits · Sep 11, 2019
5 changes: 5 additions & 0 deletions api/envoy/api/v2/cds.proto
@@ -539,6 +539,11 @@ message Cluster {
// * :ref:`runtime values <config_cluster_manager_cluster_runtime_zone_routing>`.
// * :ref:`Zone aware routing support <arch_overview_load_balancing_zone_aware_routing>`.
google.protobuf.UInt64Value min_cluster_size = 2;

// If set to true, Envoy will not consider any hosts when the cluster is in panic mode.
// Instead, the cluster will fail all requests as if all hosts are unhealthy. This can help
// avoid potentially overwhelming a failing service.
bool fail_traffic_on_panic = 3;
}
// Configuration for :ref:`locality weighted load balancing
// <arch_overview_load_balancing_locality_weighted_lb>`
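For illustration, here is a short sketch of enabling the new field through the generated protobuf API. It mirrors the test code later in this PR; the include path and the standalone setup are assumptions for the sketch, not part of the diff.

#include "envoy/api/v2/cds.pb.h"

// Build a v2 Cluster and opt in to failing traffic during panic.
envoy::api::v2::Cluster cluster;
cluster.mutable_common_lb_config()
    ->mutable_zone_aware_lb_config()
    ->set_fail_traffic_on_panic(true);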
@@ -5,13 +5,24 @@ Panic threshold

During load balancing, Envoy will generally only consider available (healthy or degraded) hosts in
an upstream cluster. However, if the percentage of available hosts in the cluster becomes too low,
Envoy will disregard health status and balance either amongst all hosts or no hosts. This is known
as the *panic threshold*. The default panic threshold is 50%. This is
:ref:`configurable <config_cluster_manager_cluster_runtime>` via runtime as well as in the
:ref:`cluster configuration <envoy_api_field_Cluster.CommonLbConfig.healthy_panic_threshold>`.
The panic threshold is used to avoid a situation in which host failures cascade throughout the
cluster as load increases.

There are two modes Envoy can choose from when in a panic state: traffic will either be sent to all
hosts, or will be sent to no hosts (and therefore will always fail). This is configured in the
:ref:`cluster configuration <envoy_api_field_Cluster.CommonLbConfig.ZoneAwareLbConfig.fail_traffic_on_panic>`.
Choosing to fail traffic during panic scenarios can help avoid overwhelming potentially failing
upstream services, as it will reduce the load on the upstream service before all hosts have been
determined to be unhealthy. However, it eliminates the possibility of *some* requests succeeding
even when many or all hosts in a cluster are unhealthy. This may be a good tradeoff to make if a
given service is observed to fail in an all-or-nothing pattern, as it will more quickly cut off
requests to the cluster. Conversely, if a cluster typically continues to successfully service *some*
requests even when degraded, enabling this option is probably unhelpful.
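
As a rough sketch, adapted and simplified from the load-balancer change later in this PR, the per-priority decision behaves as follows:

// Inside host selection, once a priority level is known to be in panic
// (simplified from ZoneAwareLoadBalancerBase::hostSourceToUse below):
if (per_priority_panic_[hosts_source.priority_]) {
  stats_.lb_healthy_panic_.inc();
  if (fail_traffic_on_panic_) {
    return absl::nullopt; // Select no host; the request fails.
  }
  hosts_source.source_type_ = HostsSource::SourceType::AllHosts; // Ignore health status.
  return hosts_source;
}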

Panic thresholds work in conjunction with priorities. If the number of available hosts in a given
priority goes down, Envoy will try to shift some traffic to lower priorities. If it succeeds in
finding enough available hosts in lower priorities, Envoy will disregard panic thresholds. In
@@ -20,8 +31,8 @@ disregards panic thresholds and continues to distribute traffic load across prio
the algorithm described :ref:`here <arch_overview_load_balancing_priority_levels>`.
However, when normalized total availability drops below 100%, Envoy assumes that there are not enough
available hosts across all priority levels. It continues to distribute traffic load across priorities,
but if a given priority level's availability is below the panic threshold, traffic will go to all
(or no) hosts in that priority level regardless of their availability.

The following examples explain the relationship between normalized total availability and panic threshold.
It is assumed that the default value of 50% is used for the panic threshold.
2 changes: 2 additions & 0 deletions docs/root/intro/version_history.rst
@@ -54,6 +54,8 @@ Version history
* upstream: added :ref:`an option <envoy_api_field_Cluster.CommonLbConfig.close_connections_on_host_set_change>` that allows draining HTTP, TCP connection pools on cluster membership change.
* upstream: added network filter chains to upstream connections, see :ref:`filters<envoy_api_field_Cluster.filters>`.
* upstream: use p2c to select hosts for least-requests load balancers if all host weights are the same, even in cases where weights are not equal to 1.
* upstream: added :ref:`fail_traffic_on_panic <envoy_api_field_Cluster.CommonLbConfig.ZoneAwareLbConfig.fail_traffic_on_panic>` to allow failing all requests to a cluster during panic state.
* zookeeper: parse responses and emit latency stats.

1.11.1 (August 13, 2019)
43 changes: 30 additions & 13 deletions source/common/upstream/load_balancer_impl.cc
@@ -282,7 +282,8 @@ ZoneAwareLoadBalancerBase::ZoneAwareLoadBalancerBase(
routing_enabled_(PROTOBUF_PERCENT_TO_ROUNDED_INTEGER_OR_DEFAULT(
common_config.zone_aware_lb_config(), routing_enabled, 100, 100)),
min_cluster_size_(PROTOBUF_GET_WRAPPED_OR_DEFAULT(common_config.zone_aware_lb_config(),
min_cluster_size, 6U)),
fail_traffic_on_panic_(common_config.zone_aware_lb_config().fail_traffic_on_panic()) {
ASSERT(!priority_set.hostSetsPerPriority().empty());
resizePerPriorityState();
priority_set_.addPriorityUpdateCb(
@@ -539,7 +540,7 @@ uint32_t ZoneAwareLoadBalancerBase::tryChooseLocalLocalityHosts(const HostSet& h
return i;
}

absl::optional<ZoneAwareLoadBalancerBase::HostsSource>
ZoneAwareLoadBalancerBase::hostSourceToUse(LoadBalancerContext* context) {
auto host_set_and_source = chooseHostSet(context);

@@ -552,8 +553,12 @@ ZoneAwareLoadBalancerBase::hostSourceToUse(LoadBalancerContext* context) {
// If the selected host set has insufficient healthy hosts, return all hosts (or, when
// fail_traffic_on_panic_ is set, no hosts at all).
if (per_priority_panic_[hosts_source.priority_]) {
stats_.lb_healthy_panic_.inc();
if (fail_traffic_on_panic_) {
return absl::nullopt;
} else {
hosts_source.source_type_ = HostsSource::SourceType::AllHosts;
return hosts_source;
}
}

// If we're doing locality weighted balancing, pick locality.
Expand Down Expand Up @@ -586,10 +591,14 @@ ZoneAwareLoadBalancerBase::hostSourceToUse(LoadBalancerContext* context) {

if (isGlobalPanic(localHostSet())) {
stats_.lb_local_cluster_not_ok_.inc();
// If the local Envoy instances are in global panic, and we should not fail traffic, do
// not do locality based routing.
if (fail_traffic_on_panic_) {
return absl::nullopt;
} else {
hosts_source.source_type_ = sourceType(host_availability);
return hosts_source;
}
}

hosts_source.source_type_ = localitySourceType(host_availability);
@@ -699,8 +708,11 @@ void EdfLoadBalancerBase::refresh(uint32_t priority) {
}

HostConstSharedPtr EdfLoadBalancerBase::chooseHostOnce(LoadBalancerContext* context) {
const absl::optional<HostsSource> hosts_source = hostSourceToUse(context);
if (!hosts_source) {
return nullptr;
}
auto scheduler_it = scheduler_.find(*hosts_source);
// We should always have a scheduler for any return value from
// hostSourceToUse() via the construction in refresh();
ASSERT(scheduler_it != scheduler_.end());
@@ -717,11 +729,11 @@ HostConstSharedPtr EdfLoadBalancerBase::chooseHostOnce(LoadBalancerContext* cont
}
return host;
} else {
const HostVector& hosts_to_use = hostSourceToHosts(*hosts_source);
if (hosts_to_use.empty()) {
return nullptr;
}
return unweightedHostPick(hosts_to_use, *hosts_source);
}
}

@@ -749,7 +761,12 @@ HostConstSharedPtr LeastRequestLoadBalancer::unweightedHostPick(const HostVector
}

HostConstSharedPtr RandomLoadBalancer::chooseHostOnce(LoadBalancerContext* context) {
const absl::optional<HostsSource> hosts_source = hostSourceToUse(context);
if (!hosts_source) {
return nullptr;
}

const HostVector& hosts_to_use = hostSourceToHosts(*hosts_source);
if (hosts_to_use.empty()) {
return nullptr;
}
3 changes: 2 additions & 1 deletion source/common/upstream/load_balancer_impl.h
@@ -224,7 +224,7 @@ class ZoneAwareLoadBalancerBase : public LoadBalancerBase {
/**
* Pick the host source to use, doing zone aware routing when the hosts are sufficiently healthy.
* Returns absl::nullopt if no host should be selected (i.e. when the priority level is in panic
* and fail_traffic_on_panic is configured).
*/
absl::optional<HostsSource> hostSourceToUse(LoadBalancerContext* context);

/**
* Index into priority_set via hosts source descriptor.
Expand Down Expand Up @@ -300,6 +300,7 @@ class ZoneAwareLoadBalancerBase : public LoadBalancerBase {

const uint32_t routing_enabled_;
const uint64_t min_cluster_size_;
const bool fail_traffic_on_panic_;

struct PerPriorityState {
// The percent of requests which can be routed to the local locality.
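A consequence of the signature change above is that every caller of hostSourceToUse() must now handle an empty result. Here is a hypothetical caller sketch (SketchLoadBalancer is illustrative only; the real callers are the EDF and random load balancers in load_balancer_impl.cc above):

// Translate absl::nullopt from hostSourceToUse() into "no host", which upper
// layers surface as an upstream selection failure.
HostConstSharedPtr SketchLoadBalancer::chooseHostOnce(LoadBalancerContext* context) {
  const absl::optional<HostsSource> hosts_source = hostSourceToUse(context);
  if (!hosts_source) {
    return nullptr; // Panic mode with fail_traffic_on_panic: fail the request.
  }
  const HostVector& hosts_to_use = hostSourceToHosts(*hosts_source);
  if (hosts_to_use.empty()) {
    return nullptr;
  }
  return hosts_to_use[0]; // A real balancer applies its own pick policy here.
}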
59 changes: 59 additions & 0 deletions test/common/upstream/load_balancer_impl_test.cc
@@ -539,6 +539,39 @@ TEST_P(FailoverTest, PriorityUpdatesWithLocalHostSet) {
EXPECT_EQ(tertiary_host_set_.hosts_[0], lb_->chooseHost(nullptr));
}

Review comment (Contributor): What does 'an existing LB' refer to in this comment?

Reply (csssuf, author): I inherited that particular phrasing from the comment on the previous test, which I adapted for this PR. It looks to me as if it refers to checking the behavior of a load balancer before and after adding an additional host set.

// Test that extending the priority set with an existing LB causes the correct updates when the
// cluster is configured to disable on panic.
TEST_P(FailoverTest, PriorityUpdatesWithLocalHostSetDisableOnPanic) {
host_set_.hosts_ = {makeTestHost(info_, "tcp://127.0.0.1:80")};
failover_host_set_.hosts_ = {makeTestHost(info_, "tcp://127.0.0.1:81")};
common_config_.mutable_zone_aware_lb_config()->set_fail_traffic_on_panic(true);

init(false);
// With both the primary and failover hosts unhealthy, we should select no host.
EXPECT_EQ(nullptr, lb_->chooseHost(nullptr));

// Update the priority set with a new priority level P=2 and ensure the host
// is chosen
MockHostSet& tertiary_host_set_ = *priority_set_.getMockHostSet(2);
HostVectorSharedPtr hosts(new HostVector({makeTestHost(info_, "tcp://127.0.0.1:82")}));
tertiary_host_set_.hosts_ = *hosts;
tertiary_host_set_.healthy_hosts_ = tertiary_host_set_.hosts_;
HostVector add_hosts;
add_hosts.push_back(tertiary_host_set_.hosts_[0]);
tertiary_host_set_.runCallbacks(add_hosts, {});
EXPECT_EQ(tertiary_host_set_.hosts_[0], lb_->chooseHost(nullptr));

// Now add a healthy host in P=0 and make sure it is immediately selected.
host_set_.healthy_hosts_ = host_set_.hosts_;
host_set_.runCallbacks(add_hosts, {});
EXPECT_EQ(host_set_.hosts_[0], lb_->chooseHost(nullptr));

// Remove the healthy host and ensure we fail back over to tertiary_host_set_
host_set_.healthy_hosts_ = {};
host_set_.runCallbacks({}, {});
EXPECT_EQ(tertiary_host_set_.hosts_[0], lb_->chooseHost(nullptr));
}

// Test extending the priority set.
TEST_P(FailoverTest, ExtendPrioritiesUpdatingPrioritySet) {
host_set_.hosts_ = {makeTestHost(info_, "tcp://127.0.0.1:80")};
@@ -829,6 +862,32 @@ TEST_P(RoundRobinLoadBalancerTest, MaxUnhealthyPanic) {
EXPECT_EQ(3UL, stats_.lb_healthy_panic_.value());
}

// Test that no hosts are selected when fail_traffic_on_panic is enabled.
TEST_P(RoundRobinLoadBalancerTest, MaxUnhealthyPanicDisableOnPanic) {
hostSet().healthy_hosts_ = {makeTestHost(info_, "tcp://127.0.0.1:80"),
makeTestHost(info_, "tcp://127.0.0.1:81")};
hostSet().hosts_ = {
makeTestHost(info_, "tcp://127.0.0.1:80"), makeTestHost(info_, "tcp://127.0.0.1:81"),
makeTestHost(info_, "tcp://127.0.0.1:82"), makeTestHost(info_, "tcp://127.0.0.1:83"),
makeTestHost(info_, "tcp://127.0.0.1:84"), makeTestHost(info_, "tcp://127.0.0.1:85")};

common_config_.mutable_zone_aware_lb_config()->set_fail_traffic_on_panic(true);

init(false);
EXPECT_EQ(nullptr, lb_->chooseHost(nullptr));

// Bring the healthy-host percentage back above the panic threshold.
hostSet().healthy_hosts_ = {
makeTestHost(info_, "tcp://127.0.0.1:80"), makeTestHost(info_, "tcp://127.0.0.1:81"),
makeTestHost(info_, "tcp://127.0.0.1:82"), makeTestHost(info_, "tcp://127.0.0.1:83")};
hostSet().runCallbacks({}, {});

EXPECT_EQ(hostSet().healthy_hosts_[0], lb_->chooseHost(nullptr));
EXPECT_EQ(hostSet().healthy_hosts_[1], lb_->chooseHost(nullptr));

EXPECT_EQ(1UL, stats_.lb_healthy_panic_.value());
}

// Ensure if the panic threshold is 0%, panic mode is disabled.
TEST_P(RoundRobinLoadBalancerTest, DisablePanicMode) {
hostSet().healthy_hosts_ = {};