CT assignment - BGP routing policy (Interface/Shared IP endpoint) #972
Comments
Hi @pyspiotr, Thank you for opening this issue. I'm not completely clear on the problem you're facing. Would you please clarify whether you're having problems creating the Connectivity Template, or assigning it to an Application Point?
I've just knocked together a quick config which creates a Connectivity Template with BGP + routing policy and attaches it to a leaf switch loopback interface. Maybe this helps?

```hcl
# Create the routing policy we'll use in our Connectivity Template
resource "apstra_datacenter_routing_policy" "rp" {
  blueprint_id = local.blueprint_id
  name         = "my_rp"
}

# Create a Connectivity Template
resource "apstra_datacenter_connectivity_template_loopback" "ct" {
  blueprint_id = local.blueprint_id
  name         = "foo"
  bgp_peering_ip_endpoints = [
    {
      name         = "bgp"
      bfd_enabled  = true
      ipv4_address = "172.31.5.5"
      routing_policies = [
        {
          name              = "RP"
          routing_policy_id = apstra_datacenter_routing_policy.rp.id
        }
      ]
    }
  ]
}

# Discover details (we need the ID) of a leaf switch
data "apstra_datacenter_system" "leaf1" {
  blueprint_id = local.blueprint_id
  name         = "l2_virtual_001_leaf1"
}

# Discover the interface IDs of the leaf switch
data "apstra_datacenter_interfaces_by_system" "leaf1" {
  blueprint_id = local.blueprint_id
  system_id    = data.apstra_datacenter_system.leaf1.id
}

# Attach the Connectivity Template to the "lo0.0" interface of the leaf switch
resource "apstra_datacenter_connectivity_template_assignments" "bgp" {
  blueprint_id             = local.blueprint_id
  connectivity_template_id = apstra_datacenter_connectivity_template_loopback.ct.id
  application_point_ids    = [data.apstra_datacenter_interfaces_by_system.leaf1.if_map["lo0.0"]]
}
```
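A minor extension of the snippet above, in case it's useful: the same Connectivity Template can be attached to the `lo0.0` interface of several leaf switches at once using `for_each`. This is a sketch using only the resources and data sources already shown; the second switch name is an assumption for illustration.

```hcl
# Hypothetical variation: attach one CT to the loopbacks of multiple leaves.
# The switch names below are assumptions, not taken from a real blueprint.
locals {
  leaf_names = ["l2_virtual_001_leaf1", "l2_virtual_001_leaf2"]
}

# Look up each leaf switch by name
data "apstra_datacenter_system" "leafs" {
  for_each     = toset(local.leaf_names)
  blueprint_id = local.blueprint_id
  name         = each.value
}

# Discover the interface IDs of each leaf switch
data "apstra_datacenter_interfaces_by_system" "leafs" {
  for_each     = data.apstra_datacenter_system.leafs
  blueprint_id = local.blueprint_id
  system_id    = each.value.id
}

# Attach the Connectivity Template to every leaf's "lo0.0" interface
resource "apstra_datacenter_connectivity_template_assignments" "bgp_all" {
  blueprint_id             = local.blueprint_id
  connectivity_template_id = apstra_datacenter_connectivity_template_loopback.ct.id
  application_point_ids = [
    for ifs in data.apstra_datacenter_interfaces_by_system.leafs : ifs.if_map["lo0.0"]
  ]
}
```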
Hello Chris,
It looks like we don't have a data source to handle this situation, but I think it's a good idea. If you have any thoughts about what the syntax should look like, I'm happy to hear your suggestions. In the meantime, we can solve the problem with the graph query data source:

```hcl
# Create the routing policy we'll use in our Connectivity Template
resource "apstra_datacenter_routing_policy" "rp" {
  blueprint_id = local.blueprint_id
  name         = "my_rp"
}

# Create a Connectivity Template
resource "apstra_datacenter_connectivity_template_loopback" "ct" {
  blueprint_id = local.blueprint_id
  name         = "foo"
  bgp_peering_ip_endpoints = [
    {
      name         = "bgp"
      bfd_enabled  = true
      ipv4_address = "172.31.5.5"
      routing_policies = [
        {
          name              = "RP"
          routing_policy_id = apstra_datacenter_routing_policy.rp.id
        }
      ]
    }
  ]
}

# Discover details (we need the ID) of a leaf switch
data "apstra_datacenter_system" "leaf1" {
  blueprint_id = local.blueprint_id
  name         = "l2_virtual_001_leaf1"
}

# Create a routing zone
resource "apstra_datacenter_routing_zone" "a" {
  blueprint_id = local.blueprint_id
  name         = "RZ-A"
}

# Define a graph query string which walks the following graph traversal:
# RZ -> RZ_instantiation -> lo_interface -> leaf
locals {
  rz_lo_system_query = replace(
    <<-EOT
      node(id='%s')
      .out(type='instantiated_by')
      .node(type='sz_instance')
      .out(type='member_interfaces')
      .node('interface', if_type='loopback', name='loopback')
      .in_(type='hosted_interfaces')
      .node(type='system', name='system')
    EOT
  , "\n", "")
}

# Perform a graph query to learn the association between the RZ and leaf loopback interfaces
data "apstra_blueprint_query" "rz_a_leaf_a_loopback" {
  blueprint_id = local.blueprint_id
  query        = format(local.rz_lo_system_query, apstra_datacenter_routing_zone.a.id)
}

# Turn the graph query output into a map: leaf_id -> rz_specific_loopback_id
locals {
  rz_a_loopback_map = zipmap(
    [for i in jsondecode(data.apstra_blueprint_query.rz_a_leaf_a_loopback.result).items : i.system.id],
    [for i in jsondecode(data.apstra_blueprint_query.rz_a_leaf_a_loopback.result).items : i.loopback.id]
  )
}

# Attach the Connectivity Template to the appropriate interface of the leaf switch
resource "apstra_datacenter_connectivity_template_assignments" "bgp" {
  blueprint_id             = local.blueprint_id
  connectivity_template_id = apstra_datacenter_connectivity_template_loopback.ct.id
  application_point_ids    = [local.rz_a_loopback_map[data.apstra_datacenter_system.leaf1.id]]
}
```
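To make the `jsondecode` + `zipmap` step easier to follow, here's a standalone sketch. The JSON payload below is fabricated to illustrate the shape of a query result (the item keys match the `name=` attributes given in the graph query); it is not real Apstra output, and the IDs are invented.

```hcl
# Hypothetical illustration only: a hand-written payload shaped like the
# graph query result. Real results come from data.apstra_blueprint_query.
locals {
  example_result = jsondecode(<<-EOT
    {"items": [
      {"system": {"id": "leaf1-node-id"}, "loopback": {"id": "lo-if-1"}},
      {"system": {"id": "leaf2-node-id"}, "loopback": {"id": "lo-if-2"}}
    ]}
  EOT
  )

  # Same construction as rz_a_loopback_map: leaf_id -> loopback_interface_id
  example_map = zipmap(
    [for i in local.example_result.items : i.system.id],
    [for i in local.example_result.items : i.loopback.id],
  )
  # example_map is { "leaf1-node-id" = "lo-if-1", "leaf2-node-id" = "lo-if-2" }
}
```

Indexing such a map with a system ID (as in `local.rz_a_loopback_map[data.apstra_datacenter_system.leaf1.id]`) then yields the RZ-specific loopback interface ID for that leaf.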
Hi Chris,
Control of per-RZ leaf loopback addresses was introduced in Apstra 5.0 (I think). We don't have Terraform support for this feature yet. I've noticed (pre-5.0) several customer configurations which used BGP peering from leaf switch SVIs, even when the peer was not directly attached to the SVI. Having said that, I'd be more comfortable peering from a loopback.
In a scenario where we have a FW cluster with common IP transfer subnets, we need to use an "Interface/Shared IP endpoint" for the BGP sessions.
This can be done via the GUI and via Terraform, but without a routing policy assignment. As soon as you want to use a custom BGP routing policy, you need to create a separate CT with just that policy and assign it to the dynamically created "Protocol Endpoint" (see the attached screenshot).
That part can be done via the GUI, but I cannot find a way to configure the same via Terraform.
Apstra 4.2.1.1
Terraform provider: 0.76.1