[FRR] Adding BGPD optimization during peer going down #21751

Merged: 5 commits, Mar 12, 2025
57 changes: 57 additions & 0 deletions src/sonic-frr/patch/0081-bgpd-Optimize-evaluate-paths-for-a-peer-going-down.patch
@@ -0,0 +1,57 @@
From f63a4be085b28c5138b95d55681f2bfb38bdaf4f Mon Sep 17 00:00:00 2001
From: Donald Sharp <sharpd@nvidia.com>
Date: Fri, 24 Jan 2025 15:04:13 -0500
Subject: [PATCH] bgpd: Optimize evaluate paths for a peer going down

Currently, when a directly connected peer goes down, BGP gets a
nexthop tracking callback in addition to the interface down event.
On the interface down event, BGP builds a per-peer queue holding
all of the bgp path_info's associated with that peer and schedules
them for processing in the future. In the meantime zebra is also
at work and sends a nexthop removal event to BGP as well. This
triggers a complete walk of all path_info's associated with the
bnc (which happen to be the very path_info's already scheduled
for removal here shortly). This evaluate_paths walk is not an
inexpensive operation, and the work is already being done via the
peer down queue. Let's optimize the bnc handling in evaluate_paths
and check whether the peer is still up before doing the work here.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>

diff --git a/bgpd/bgp_nht.c b/bgpd/bgp_nht.c
index 3b6db31ea0..196cc00385 100644
--- a/bgpd/bgp_nht.c
+++ b/bgpd/bgp_nht.c
@@ -1258,6 +1258,25 @@ void evaluate_paths(struct bgp_nexthop_cache *bnc)
}

LIST_FOREACH (path, &(bnc->paths), nh_thread) {
+ /*
+ * Currently when a peer goes down, bgp immediately
+ * sees this via the interface events (if the peer is
+ * directly connected). In that case bgp places all
+ * path_info's associated with the peer on a special
+ * peer queue, but those entries are typically not yet
+ * processed when the nexthop is handled here. Thus we
+ * end up asking the BGP process queue to look at the
+ * same path_info multiple times. Let's cut to the
+ * chase: if the bnc has a peer associated with it,
+ * the path_info being looked at uses that peer, and
+ * that peer is no longer established, then the
+ * path_info is already being handled elsewhere and is
+ * going away, so there is no need to process it here.
+ */
+ if (peer && path->peer == peer && !peer_established(peer->connection))
+ continue;
+
if (path->type == ZEBRA_ROUTE_BGP &&
(path->sub_type == BGP_ROUTE_NORMAL ||
path->sub_type == BGP_ROUTE_STATIC ||
--
2.43.2
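
For readers who want the shape of the change without FRR's surrounding machinery, here is a minimal, self-contained sketch of the guard the patch adds to the evaluate_paths() walk. Every name below (peer_sketch, path_sketch, bnc_sketch, evaluate_paths_sketch) is a simplified stand-in invented for illustration, not FRR's real definitions; in the actual code the peer is the one the bnc tracks and the check uses FRR's peer_established() helper.

/* Sketch only: hypothetical stand-in types, not FRR code. */
#include <stdbool.h>
#include <stdio.h>

enum peer_state { PEER_IDLE, PEER_ESTABLISHED };

struct peer_sketch {
	const char *name;
	enum peer_state state;
};

struct path_sketch {
	const char *prefix;
	struct peer_sketch *peer;   /* peer this path was learned from */
	struct path_sketch *next;
};

struct bnc_sketch {
	struct peer_sketch *peer;   /* peer tracked by this nexthop cache entry */
	struct path_sketch *paths;  /* paths resolving over this nexthop */
};

static bool peer_established_sketch(const struct peer_sketch *peer)
{
	return peer->state == PEER_ESTABLISHED;
}

static void evaluate_paths_sketch(struct bnc_sketch *bnc)
{
	for (struct path_sketch *path = bnc->paths; path; path = path->next) {
		/* The optimization: if this path belongs to the bnc's
		 * peer and that peer is no longer established, the
		 * peer-down queue already owns the cleanup, so skip
		 * the (expensive) re-evaluation here. */
		if (bnc->peer && path->peer == bnc->peer &&
		    !peer_established_sketch(bnc->peer))
			continue;

		printf("evaluating %s\n", path->prefix);
	}
}

int main(void)
{
	struct peer_sketch down_peer = { "peer1", PEER_IDLE };
	struct peer_sketch other_peer = { "peer2", PEER_ESTABLISHED };
	struct path_sketch p2 = { "10.0.2.0/24", &other_peer, NULL };
	struct path_sketch p1 = { "10.0.1.0/24", &down_peer, &p2 };
	struct bnc_sketch bnc = { &down_peer, &p1 };

	/* Prints only "evaluating 10.0.2.0/24": the down peer's
	 * path is skipped, the other peer's path is still walked. */
	evaluate_paths_sketch(&bnc);
	return 0;
}

The design point is that the skip costs two pointer compares and a state check, while the work it avoids, re-evaluating every path hanging off the bnc, is what the commit message calls "not an inexpensive operation".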

1 change: 1 addition & 0 deletions src/sonic-frr/patch/series
@@ -61,5 +61,6 @@
0078-vtysh-de-conditionalize-and-reorder-install-node.patch
0079-staticd-add-support-for-srv6.patch
0080-SRv6-vpn-route-and-sidlist-install.patch
0081-bgpd-Optimize-evaluate-paths-for-a-peer-going-down.patch
0082-Revert-bgpd-upon-if-event-evaluate-bnc-with-matching.patch
0083-staticd-add-cli-to-support-steering-of-ipv4-traffic-over-srv6-sid-list.patch