What is Host Routing?
Key points about the Host-Based Routing (HBR) feature in Cisco ACI:
Purpose: HBR advertises individual host routes out of the fabric so that inbound traffic follows an optimal, symmetric path, making ACI deployment easier and more efficient.
Supported Hardware: Works on EX, FX, FX2, or later series hardware and is suitable for Multi-Site, Multi-Pod, and inter-VRF/tenant deployments.
Cost Reduction: HBR removes the need for Giant OverLay Forwarding (GOLF) in scenarios where only host routing is required, reducing deployment cost.
Simple Configuration: The feature is enabled in ACI with a single configuration step.
Native Support for Routing Protocols: Border Leaf switches (BLs) natively support HBR with routing protocols such as iBGP, eBGP, OSPF, and EIGRP.
Handling Host Routes: BLs can handle large numbers of host routes (20,000 to 60,000 per BL), and host routing is enabled or disabled per Bridge Domain (BD).
Layer 3 Out (L3Out): L3Outs advertise the host routes to the WAN routing protocols, ensuring connectivity across the broader network.
HBR Design Flow
Non-Border Leaf Behavior:
Operates with its usual functionality; no changes are needed.
The COOP citizen (component that manages endpoint information) sends all Endpoint (EP) information to the Spine switches.
Spine Behavior:
Receives all EPs and, for host-route-enabled BDs, shares the matching EPs with the Border Leaf switches (BLs) that have registered interest.
The Border Leaf (BL) publishes its host-route interest for the BD to the spine.
Whenever an EP is added, deleted, or moved from Local to Remote (L2R) or Remote to Local (R2L):
A host route is downloaded to the Border Leaf so that the routing information is updated accordingly.
HBR Configuration via GUI
To enable host routing, go to Tenant-->Networking-->per BD-->select the Advertise Host Routes flag.
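The flag can also be verified from the APIC CLI with moquery. A minimal sketch, assuming the BD is named BD1 and that the Advertise Host Routes flag is exposed as the fvBD attribute hostBasedRouting (names are illustrative):
apic1# moquery -c fvBD -f 'fv.BD.name=="BD1"' | grep -E 'name |hostBasedRouting'
A BD with the flag set is expected to show hostBasedRouting as yes.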
Two different methods are available once HBR is configured under the BD:
RsBDToOut (BD-to-L3Out): bind the L3Out to the BD so the BD subnet is advertised outside.
Route map: configure an explicit route map and select the Aggregate flag.
Method 1
Step 1. Enable HBR at the per-BD level.
Step 2. Select the appropriate BD-->click +-->associate the L3Out.
HBR Verification via CLI
1. Checking EP on Non Border Leaf
Leaf3# show system internal epm endpoint ip 192.168.10.5
MAC : a453.0e3d.d9a3 ::: Num IPs : 1
IP# 0 : 192.168.10.5 ::: IP# 0 flags : host-tracked| ::: l3-sw-hit: Yes ::: flags2 :
Vlan id : 18 ::: Vlan vnid : 9592 ::: VRF name : Tn-Cisco:V1
BD vnid : 16580487 ::: VRF vnid : 2359296
Phy If : 0x1a000000 ::: Tunnel If : 0
Interface : Ethernet1/1
2. Checking EP (host-route) in Spine (no change in behaviour)
Spine1# show coop internal info repo ep key 16580487 a453.0e3d.d9a3
Repo Hdr Checksum : 37375
Repo Hdr record timestamp : 05 29 2024 02:45:21 470730503
Repo Hdr last pub timestamp : 05 29 2024 02:45:21 472533155
Repo Hdr last dampen timestamp : 01 01 1970 00:00:00 0
Repo Hdr dampen penalty : 0
Repo Hdr flags : IN_OBJ ACTIVE
EP bd vnid : 16580487
EP mac : A4:53:0E:3D:D9:A3
3. Checking that HBR is enabled on the BD on the BL
Leaf1# show coop internal host-route bridge-domain
Host-Based Routing BD Details:
bd-vnid:16580487, flags:0x1
host-route: Enabled <<<<<<<<
host-route record ts: 05 29 2024 03:21:52 10170968
4. Checking RIB on BL
Leaf1# show ip route vrf Tn-Cisco:V1
<<output omitted>>
192.168.10.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.72.65%overlay-1, [1/0], 00:37:11, static
192.168.10.5/32, ubest/mbest: 1/0, pervasive
    *via , null0, [2/0], 00:12:07, coop, coop, tag 4294967295, redist-only <<<<<<<<
192.168.20.0/24, ubest/mbest: 1/0, attached, direct
    *via 192.168.20.1, vlan7, [0/0], 00:43:03, direct
192.168.20.1/32, ubest/mbest: 1/0, attached
    *via 192.168.20.1, vlan7, [0/0], 00:43:03, local, local
5. Check Route-map and prefix list on BL (just for info)
Since the BD is host-route enabled, the Border Leaf switch downloads all the endpoints under the BD via the spine.
These EPs can include private subnets.
The route map and prefix lists below are used by the COOP citizen to decide which routes to leak into URIB.
Leaf1# show route-map | grep coop
route-map coop-ribleak-2359296, permit, sequence 1 <<<<<<<<
  ip address prefix-lists: IPv4-coop-ribleak-2359296-16580487 <<<<<<<<
route-map coop-ribleak-2359296, deny, sequence 20000
route-map exp-ctx-coop-bgp-2359296, deny, sequence 1
route-map exp-ctx-coop-bgp-2359296, permit, sequence 15801
route-map exp-ctx-coop-bgp-2359296, permit, sequence 15802
route-map exp-ctx-coop-bgp-2359296, permit, sequence 15803
6. Checking HBR Mrouter record in spine
A COOP citizen needs to inform the oracle of its interest in host routes for a particular BD.
To do this, HBR uses the existing IGMP mroutes (mrouter) record.
The HOST_ROUTE flag identifies whether a Border Leaf (BL) has published host-route interest for a particular BD-VNID to the oracle.
The spine learns about endpoints (EPs) under the BD-VNID and notifies all host-route-enabled leaf switches about the EPs under that BD-VNID.
Spine1# show coop internal info repo mrouter
Leaf tep ip : 10.0.32.66 <<<<<<<< gives the advertising leaf details
Leaf Flags : 0x2 HOST_ROUTE <<<<<<<< HBR flag
7. Checking EP in BL
Leaf1# show coop internal info repo ep key 16580487 a453.0e3d.d9a3
MTS RX OK
Current citizen (publisher_id): 10.0.32.67 <<<<<<<<
Publisher Oracle (Oracle_id): 10.0.32.65 <<<<<<<<
Tunnel nh : 10.0.32.67
Real IPv4 EP : 192.168.10.5 <<<<<<<<
8. Checking IP-DB in BL
Leaf1# show coop internal info ip-db
IP address : 192.168.10.5
Vrf : 2359296
Flags : 0x40
EP bd vnid : 16580487
EP mac : A4:53:0E:3D:D9:A3
9. Checking the route in coop-urib on the BL
The output below is for IPv4; the same applies to IPv6.
Leaf1# show coop internal host-route routes ipv4
Host-Based IPv4 Routing Table for VRF: Tn-Cisco:V1
Route, BD-Vnid, Publisher-IP, URIB-Pending
--------------------------------------------
192.168.10.5, 16580487, 10.0.32.67,
--------------------------------------------
Method 2
Step 1. Enable HBR on Per BD level
Step 2. Go to the L3Out-->select Route map for import and export route control-->default-export-->Type-->Contexts: click +-->give a name-->set Action (Permit/Deny)-->click + to Create Match Rule for Route Map-->give a name-->click + under Match Prefix-->enter the IP details-->select Aggregate.
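Conceptually, these GUI steps build a route map in which the Aggregate flag widens the matched prefix so that host routes inside the BD subnet are also matched for export. A generic NX-OS-style sketch of the equivalent logic (ACI generates its own object and route-map names; BD-HOSTS, DEFAULT-EXPORT, and the subnet are illustrative):
ip prefix-list BD-HOSTS seq 5 permit 192.168.10.0/24 le 32
route-map DEFAULT-EXPORT permit 10
  match ip address prefix-list BD-HOSTS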
Cisco ACI GOLF Feature:
What is GOLF?
GOLF (Giant OverLay Forwarding) is also known as Layer 3 EVPN Services for Fabric WAN.
It improves the efficiency and scalability of WAN connections in Cisco ACI networks.
How It Works:
Uses BGP EVPN protocol on top of OSPF for WAN routers connected to spine switches.
All tenant WAN connections share a single session on spine switches, reducing complexity.
Benefits:
Efficiency: Fewer BGP sessions needed, which simplifies network management.
Scalability: Better handles more tenants and connections without increasing configuration efforts.
Network Setup:
Spine Switches:
Configure Layer 3 subinterfaces on spine fabric ports to extend the network.
Do not support transit routing with shared services using GOLF.
Infra Tenant Configuration:
Create a Layer 3 external network (L3extOut) for GOLF on spine switches.
Include the following components:
LNodeP: No need for l3extInstP within L3Out.
Provider Label: Assign a label for the GOLF L3extOut.
Protocols: Set up OSPF and BGP policies.
Regular Tenant Configuration:
Use the physical connectivity defined by the infra tenant.
Set up their own L3extOut with:
l3extInstP (EPG): Includes subnets and contracts to manage routes and security.
Bridge Domain Subnet: Must advertise externally and share the same VRF as application EPG and GOLF L3Out EPG.
Contracts: Use explicit contracts to control communication between application EPG and GOLF L3Out EPG.
Label Matching:
Consumer Label (l3extConsLbl):
Must match the provider label in the infra tenant’s GOLF L3extOut.
Allows application EPGs from other tenants to use the external L3Out.
BGP EVPN Session:
The session in the provider L3extOut advertises tenant routes defined in this L3Out.
Key Points to Remember:
GOLF streamlines WAN connectivity by consolidating BGP sessions.
Simplifies network configuration and enhances scalability.
Requires specific setup in both infra and regular tenants for proper functionality.
Does not support transit routing with shared services.
Guidelines and Limitations for Cisco ACI GOLF Feature:
Route Advertisement Requirement:
GOLF routers must advertise at least one route to Cisco ACI to accept traffic.
No tunnels are created between leaf switches and external routers until Cisco ACI receives a route from them.
Supported Switches:
All Cisco Nexus 9000 Series ACI-mode switches support GOLF.
All Cisco Nexus 9500 platform ACI-mode switch line cards and fabric modules support GOLF.
From Cisco APIC release 3.1(x) onwards, the N9K-C9364C switch is also supported.
Provider Policy Limitation:
Only one GOLF provider policy can be deployed on spine switch interfaces for the entire fabric at this time.
Multipod Support:
Up to APIC release 2.0(2), GOLF is not supported with multipod.
In release 2.0(2), both features are supported together only on certain switches without "EX" at the end of their names (e.g., N9K-9312TX).
Since release 2.1(1), GOLF and multipod can be deployed together on all switches used in multipod and EVPN topologies.
Configuration Timing:
When setting up GOLF on a spine switch, wait for the control plane to stabilize before configuring another spine switch.
Multiple GOLF L3Outs on a Spine Switch:
A spine switch can connect to multiple GOLF outside networks (GOLF L3Outs).
Each GOLF L3Out must have a different provider label.
Each L3extOut must have a different OSPF area and use unique loopback addresses.
Route Advertisement in Infra Tenant:
The BGP EVPN session in the matching provider L3Out within the infra tenant advertises tenant routes defined in that L3Out.
Exporting Routes with Multiple GOLF Outs:
If deploying three GOLF Outs but only one has a provider/consumer label for GOLF and 0/0 export aggregation, the APIC will export all routes.
This behavior is similar to existing L3extOut configurations on leaf switches for tenants.
Direct Peering with DCI Router:
If a spine switch peers directly with a Data Center Interconnect (DCI) router:
Transit routes from leaf switches to the ASR have the next hop as the PTEP (Physical Tunnel Endpoint) of the leaf switch.
You must define a static route on the ASR for the TEP range of that ACI pod.
If the DCI is connected to the same pod via multiple links (dual-homed), ensure the administrative distance of the static route matches the route received through the other link.
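A hedged IOS/IOS-XE-style sketch of such a static route on the ASR (the TEP pool, next hop, and administrative distance are illustrative; when the DCI is dual-homed, set the distance to match the AD of the same prefix learned over the other link):
! ACI pod TEP range reachable via the spine-facing link
ip route 10.0.0.0 255.255.0.0 203.0.113.1 110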
BGP Peer Prefix Policy Limit:
The default bgpPeerPfxPol policy limits routes to 20,000.
For ACI WAN Interconnect peers, increase this limit as needed.
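On the switch, this policy corresponds to the BGP maximum-prefix setting applied to the peer. A generic NX-OS-style sketch of a raised limit (ASN, peer address, and values are illustrative; in ACI the change is made through the bgpPeerPfxPol policy, not directly in the switch CLI):
router bgp 65001
  neighbor 203.0.113.10
    address-family l2vpn evpn
      maximum-prefix 60000 75 restart 30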
Deploying Two L3extOuts on One Spine Switch:
If you have two L3extOuts on one spine switch:
One with provider label prov1 peering with DCI 1.
Another peering with DCI 2 using provider label prov2.
If a tenant VRF uses a consumer label matching either prov1 or prov2, tenant routes will be sent out to both DCI 1 and DCI 2.
VRF Route Leaking Limitation:
When aggregating GOLF OpFlex VRFs:
Route leaking cannot occur within the ACI fabric or on the GOLF device between the GOLF OpFlex VRF and any other VRF.
Use an external device (not the GOLF router) for VRF leaking.
Important Note on MTU Settings:
No IP Fragmentation Support:
Cisco ACI does not support IP fragmentation.
MTU Configuration Recommendations:
When setting up Layer 3 Outside (L3Out) connections to external routers or Multi-Pod connections via an Inter-Pod Network (IPN):
Ensure the interface MTU is correctly configured on both ends of the link.
Platform Differences in MTU Settings:
Cisco ACI, Cisco NX-OS, and Cisco IOS:
The MTU value configured does not include Ethernet headers (excludes the 14-18 byte Ethernet header size).
A configured MTU of 9000 results in a maximum IP packet size of 9000 bytes.
Cisco IOS-XR:
The MTU value configured includes the Ethernet header.
A configured MTU of 9000 results in a maximum IP packet size of 8986 bytes on an untagged interface (9000 minus the 14-byte Ethernet header).
Recommendations:
Consult the specific configuration guides for the correct MTU values for each platform.
It is highly recommended to test the MTU settings using command-line interface (CLI) commands.
Example MTU Test Command:
On Cisco NX-OS CLI, you can test the MTU with the following command:
ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1
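A comparable test from a Cisco IOS or IOS-XE router at the other end of the link might look like the following (keyword names and availability can vary by release, so verify against your platform's documentation):
ping 1.1.1.1 size 9000 df-bit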
Key Points to Remember:
GOLF routers must advertise routes to initiate traffic flow.
Only one GOLF provider policy is allowed per fabric at this time.
Wait for control plane convergence when configuring GOLF on multiple spine switches.
Different provider labels and OSPF areas are required when adding multiple GOLF L3Outs to a spine switch.
Be aware of MTU differences across platforms to prevent packet loss due to improper MTU settings.
Always test your MTU configurations to ensure network stability.
Using Shared GOLF Connections Between Multi-Site Sites
Overview:
In a Cisco ACI Multi-Site environment, multiple sites can share GOLF (Layer 3 EVPN Services for Fabric WAN) connections.
When Virtual Routing and Forwarding instances (VRFs) are stretched across these sites, careful configuration is needed to prevent cross-VRF traffic issues.
Guidelines to Avoid Cross-VRF Traffic Issues:
Route Target Configuration Between Spine Switches and DCI:
Two Methods to Configure EVPN Route Targets (RTs) for GOLF VRFs:
Manual RT:
Manually assign unique route targets to each VRF.
Auto RT:
Automatically generate route targets that include the Fabric ID.
Format: ASN:[FabricID]VNID
ASN: Autonomous System Number.
FabricID: Identifier for the ACI fabric/site.
VNID: Virtual Network Identifier for the VRF.
Synchronization of Route Targets:
Route targets are synchronized between ACI spine switches and Data Center Interconnects (DCIs) using the OpFlex protocol.
Potential Issue with Identical ASN and Fabric ID:
Problem Scenario:
Two sites have the same ASN and Fabric ID.
VRFs with the same VNIDs are deployed on both sites.
This leads to identical import/export route targets.
Consequence: Traffic intended for one VRF might be incorrectly routed to another VRF in a different site.
Example:
Site 1:
ASN: 100
Fabric ID: 1
VRF A:
VNID: 2000
Route Target: 100:[1]2000
VRF B:
VNID: 1000
Route Target: 100:[1]1000
Site 2:
ASN: 100
Fabric ID: 1
VRF A:
VNID: 1000
Route Target: 100:[1]1000
VRF B:
VNID: 2000
Route Target: 100:[1]2000
Issue:
VRF A in Site 1 and VRF B in Site 2 share the same route target (100:[1]2000).
This can cause traffic mixing between VRFs across sites.
Solution: Implementing Route Maps on the DCI:
Prevent Unwanted Route Propagation:
Challenge:
Without proper controls, EVPN routes from one site's GOLF spine could be sent to another site's GOLF spine via the DCI.
This happens because tunnels are not established across sites when transit routes are leaked through the DCI.
Common BGP Session Types Where This Occurs:
Scenario 1: Site1 — IBGP → DCI → EBGP → Site2
Scenario 2: Site1 — EBGP → DCI → IBGP → Site2
Scenario 3: Site1 — EBGP → DCI → EBGP → Site2
Scenario 4: Site1 — IBGP (Route Reflector Client) → DCI (Route Reflector) → IBGP → Site2
Using Route Maps with BGP Communities:
Purpose:
To control and filter the routes that are advertised between sites via the DCI.
Implementation Steps:
Assign BGP Communities:
On the DCI, apply inbound peer policies that tag routes received from each site's GOLF spine with a unique BGP community.
Filter Routes Between Sites:
Use outbound peer policies on the DCI to prevent routes with specific BGP communities from being advertised to other sites' GOLF spines.
This ensures that a site's routes are not sent to another site, preventing cross-VRF traffic.
Strip Communities When Sending to WAN:
When advertising routes towards the WAN, use a different outbound peer policy to remove the BGP community tags.
This allows the WAN to receive routes without the filtering communities attached.
Configuration Location:
All route maps and BGP community settings are applied at the peer level on the DCI for each BGP session.
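A minimal NX-OS-style sketch of these peer-level policies on the DCI, assuming the DCI runs NX-OS and peers with both sites over BGP EVPN; the community value, route-map names, and peer addresses are illustrative:
! Tag EVPN routes received from the Site 1 GOLF spine with a site community
ip community-list standard SITE1-ROUTES seq 10 permit 65000:101
route-map SITE1-IN permit 10
  set community 65000:101 additive
! Do not advertise Site 1 routes toward the Site 2 GOLF spine
route-map TO-SITE2-OUT deny 10
  match community SITE1-ROUTES
route-map TO-SITE2-OUT permit 20
! Strip the site community when advertising toward the WAN
route-map TO-WAN-OUT permit 10
  set comm-list SITE1-ROUTES delete
router bgp 65000
  neighbor 203.0.113.1
    address-family l2vpn evpn
      route-map SITE1-IN in
  neighbor 203.0.113.9
    address-family l2vpn evpn
      route-map TO-SITE2-OUT out
  neighbor 198.51.100.1
    address-family l2vpn evpn
      route-map TO-WAN-OUT out
Equivalent policies would be applied in the opposite direction for routes received from the Site 2 spine.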
Key Points to Remember:
Ensure Unique Route Targets Across Sites:
Use manual route targets or ensure that Fabric IDs and VNIDs are unique to prevent overlapping route targets.
Use Route Maps to Control Route Distribution:
Implement route maps with BGP communities on the DCI to filter routes between sites.
Monitor BGP Sessions and Configurations:
Be aware of the types of BGP sessions between the DCI and spine switches to properly apply route filtering.
Prevent Cross-VRF Traffic:
Proper configuration prevents traffic from one VRF in one site from reaching a different VRF in another site unintentionally.
Best Practices:
Regularly review and update configurations to accommodate any changes in the network topology.
Consider using different ASNs or Fabric IDs for different sites when possible.
Summary:
When sharing GOLF connections between multiple ACI sites in a Multi-Site topology, it's crucial to prevent cross-VRF traffic by ensuring route targets are unique and properly controlling route advertisements using route maps and BGP communities on the DCI. This careful configuration maintains network isolation between VRFs across sites and ensures secure and efficient network operations.
Configuring ACI GOLF Using the GUI
Overview: This guide walks you through setting up infrastructure GOLF (Layer 3 EVPN Services for Fabric WAN) services in Cisco ACI using the graphical user interface (GUI). These services allow any tenant network to utilize GOLF connections.
Procedure: Setting Up Infra GOLF Services
Step 1: Access the Infra Tenant
Navigate to Tenants:
Click on “Tenants” in the menu bar.
Select Infra Tenant:
Click on “infra” to choose the infrastructure tenant.
Step 2: Create a New L3Out
Expand Networking:
In the Navigation pane, expand the “Networking” section.
Start L3Out Creation:
Right-click on “L3Outs” and select “Create L3Out” to open the wizard.
Enter Basic Information:
Name: Provide a name for the L3Out.
VRF: Select the appropriate VRF.
L3 Domain: Enter the L3 Domain details.
Select GOLF Usage:
In the “Use For:” dropdown, choose “GOLF”.
Provider Label: Enter a label (e.g., “golf”).
Configure Route Targets:
Route Target Option:
Automatic: Enables automatic BGP route-target filtering for associated VRFs.
Explicit: Uses manually configured BGP route-target policies for VRFs.
Note: Choosing “Automatic” may cause BGP routing issues if explicit policies are also set.
Proceed:
Leave other settings as default (e.g., BGP selected).
Click “Next” to move to the next window.
Step 3: Configure Nodes and Interfaces
Select Spine Switch:
Node ID: Choose a spine switch from the dropdown list.
Set Router ID:
Router ID: Enter the router ID.
Optional Loopback Address:
Loopback Address:
Automatically filled with the Router ID.
To Change: Enter a different IP if you don't want to use the Router ID as the loopback address.
Leave Empty: If you prefer not to use a loopback address.
External Control Peering:
Ensure the “External Control Peering” box is checked.
Additional Configurations:
Fill in any other required fields based on your network setup.
Proceed:
Click “Next” to continue to the Protocols window.
Step 4: Set Up Protocols
BGP Configuration:
Peer Address: Enter the peer IP address.
EBGP Multihop TTL:
Set the TTL value (1–255).
Default: 0 (no TTL specified).
Remote ASN: Enter a unique Autonomous System Number (1–4,294,967,295).
Note: Do not use asdot or asdot+ formats.
OSPF Configuration:
Choose a default OSPF policy, an existing policy, or create a new OSPF Interface Policy.
Proceed:
Click “Next” to move to the External EPG window.
Step 5: Configure External EPG
Name the External Network:
Name: Provide a name for the external network.
Set Contracts:
Provided Contract: Enter the name of a provided contract.
Consumed Contract: Enter the name of a consumed contract.
Control Route Advertisement:
Allow All Subnets:
Unchecked: If you do not want to advertise all transit routes.
Subnets Area: Specify desired subnets and controls if unchecked.
Finish Configuration:
Click “Finish” to complete the L3Out setup.
Step 6: Configure Tenant to Use GOLF
Navigate to Tenant Networking:
In the Navigation pane, go to tenant_name > Networking > L3Outs.
Create Tenant L3Out:
Right-click on “L3Outs” and select “Create L3Out”.
Enter Basic Information:
Name: Provide a name for the tenant’s L3Out.
VRF: Select the appropriate VRF.
L3 Domain: Enter the L3 Domain details.
Enable GOLF Usage:
Check the box next to “Use for GOLF”.
Label Type: Select “Consumer” in the Label field.
Consumer Label: Assign the previously created provider label (e.g., “golf”).
Complete Setup:
Click “Next”, then “Finish” to finalize the tenant's L3Out configuration.
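Once the infra and tenant L3Outs are in place, the BGP EVPN session from the spine toward the DCI/WAN router can be checked from the spine CLI, for example:
Spine1# show bgp l2vpn evpn summary vrf overlay-1
The DCI/WAN peer should appear in the Established state with prefixes being exchanged.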
Distributing BGP EVPN Type-2 Host Routes to a DCIG
Background:
Before APIC Release 2.1(x):
ACI control plane only sent BGP EVPN Type-5 (IP Prefix) routes to the Data Center Interconnect Gateway (DCIG), which could lead to less efficient traffic forwarding.
From APIC Release 2.1(x) Onwards:
Fabric spines can also send BGP EVPN Type-2 (MAC-IP) host routes to improve traffic forwarding.
Steps to Enable Host Route Leak:
Enable Host Route Leak in BGP Policy:
When configuring the BGP Address Family Context Policy, ensure “Host Route Leak” is enabled.
Configure Host Routes in GOLF Setup:
Application Tenant Configuration:
Set the BGP Address Family Context Policy under the application tenant (the tenant that consumes GOLF services) instead of the infra tenant.
Single-Pod vs. Multi-Pod Fabrics:
Single-Pod Fabric:
Host route feature is optional.
If enabled, configure a Fabric External Connection Policy to leak the endpoint to BGP EVPN.
Multi-Pod Fabric:
Host route feature is required to avoid inefficient traffic forwarding.
Configure VRF Properties:
Add BGP Address Family Context Policy:
Apply it to the BGP Context Per Address Families for both IPv4 and IPv6.
Set Up BGP Route Target Profiles:
Define which routes can be imported into or exported from the VRF.
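After the policy is applied, you can check on the spine whether a specific endpoint address is present in BGP EVPN and therefore eligible to be advertised as a host route toward the DCIG; a hedged example reusing the endpoint from the HBR section:
Spine1# show bgp l2vpn evpn 192.168.10.5 vrf overlay-1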
Key Points to Remember
Infra Tenant Setup:
Configure GOLF services in the infra tenant to make them available to all tenant networks.
L3Out Configuration:
Follow the wizard steps carefully to set up L3Out with GOLF usage.
Route Targets:
Choose between automatic or explicit BGP route-target filtering based on your network requirements.
Node and Interface Settings:
Assign correct node IDs and router IDs. Optionally, configure loopback addresses if needed.
BGP and OSPF Settings:
Properly configure BGP peer addresses, TTL, and Remote ASN.
Select or create appropriate OSPF policies.
External EPG Settings:
Define provided and consumed contracts.
Control which subnets are advertised through the L3Out.
Tenant Configuration:
Ensure tenants are correctly set to use the shared GOLF connections with the appropriate consumer labels.
Host Route Leak:
Enable and configure host route leak to improve traffic forwarding, especially in multi-pod setups.
Testing and Validation:
After configuration, verify that routes are correctly advertised and that traffic flows as expected.
Summary: By following these steps, you can successfully configure ACI GOLF services using the GUI, enabling efficient and scalable WAN connectivity for multiple tenant networks. Proper setup of route targets, BGP policies, and host route leak features ensures optimal network performance and traffic management across your ACI fabric
Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the GUI
Overview: This guide explains how to distribute BGP EVPN Type-2 host routes to a Data Center Interconnect Gateway (DCIG) using the Cisco ACI GUI. This process enhances traffic forwarding efficiency in your network.
Before You Begin
Prerequisites:
ACI WAN Interconnect Services: Ensure these are already set up in the infra tenant.
Tenant Configuration: The tenant that will use these services must be properly configured.
Procedure: Enabling BGP EVPN Type-2 Host Routes Distribution
Step 1: Access the Infra Tenant
Navigate to Tenants:
Click on “Tenants” in the menu bar.
Select Infra Tenant:
Click on “infra” to open the infrastructure tenant settings.
Step 2: Locate BGP Policies
Expand Policies:
In the Navigation pane, go to “Policies”.
Navigate to BGP:
Click on “Protocol”, then select “BGP”.
Step 3: Create a BGP Address Family Context Policy
Initiate Policy Creation:
Right-click on “BGP Address Family Context”.
Select “Create BGP Address Family Context Policy” to open the policy creation window.
Configure the Policy:
Name the Policy:
Enter a name for the new policy.
Optionally, add a description for clarity.
Enable Host Route Leak:
Check the box labeled “Enable Host Route Leak”.
Submit the Policy:
Click “Submit” to save the new policy.
Step 4: Access the Consumer Tenant
Navigate to Tenants:
Click on “Tenants” in the menu bar.
Select Consumer Tenant:
Click on “tenant-name” (replace with your specific tenant name) to open the tenant's settings.
Expand Networking:
In the Navigation pane, expand “Networking”.
Step 5: Select the Relevant VRF
Navigate to VRFs:
Expand the “VRFs” section.
Choose the VRF:
Click on the VRF that will include the host routes you want to distribute.
Step 6: Configure VRF Properties
Edit VRF Settings:
When configuring the VRF properties, locate the BGP Context Per Address Families section.
Add the BGP Policy:
For IPv4:
Add the previously created BGP Address Family Context Policy to the IPv4 context.
For IPv6:
Similarly, add the same policy to the IPv6 context.
Step 7: Finalize the Configuration
Submit Changes:
Click “Submit” to apply the VRF property changes.
Key Points to Remember
Host Route Leak:
Enabling host route leak allows BGP EVPN Type-2 routes to be advertised to the DCIG, improving traffic forwarding efficiency.
Policy Placement:
The BGP Address Family Context Policy must be added under the application tenant (the tenant consuming the GOLF services) rather than the infra tenant.
Single-Pod vs. Multi-Pod Fabrics:
Single-Pod Fabric: Host route feature is optional but requires a Fabric External Connection Policy if enabled.
Multi-Pod Fabric: Host route feature is necessary to ensure optimal traffic forwarding.
BGP Configuration:
Ensure that Autonomous System Numbers (ASNs) and other BGP settings are correctly configured to prevent routing issues.
Summary
By following these steps, you can successfully enable the distribution of BGP EVPN Type-2 host routes to a DCIG using the Cisco ACI GUI. This configuration enhances your network’s traffic forwarding capabilities, especially in multi-pod fabric setups. Always verify your configurations and test to ensure that routes are correctly advertised and traffic flows as expected.