Mukesh Chanderia

ACI Multi Site

Updated: Sep 27

Multi-Site connectivity between fabrics is established over an Inter-Site Network (ISN).


The ISN between sites must support these specific functionalities:




MTU (Maximum Transmission Unit) in ISN and MP-BGP


  • Need for Increased MTU Support

    • The Inter-Site Network (ISN) must support a larger MTU size to handle VXLAN-encapsulated traffic between sites.

    • It's recommended to add an extra 100 bytes to your current MTU setting to accommodate the VXLAN overhead.

  • Example Configuration

    • If your endpoints support jumbo frames of 9000 bytes, or if your spine nodes generate packets of that size, set the ISN MTU to at least 9100 bytes.

  • Adjusting Control Plane MTU

    • You can change the default control plane MTU (9000 bytes) in each APIC domain.

    • Navigate to: System > System Settings > Control Plane MTU to modify this setting.
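As a minimal sketch, assuming the ISN device facing the spines is a Cisco Nexus switch and its uplink toward the spine is Ethernet1/1 (interface name and value are illustrative), the interface MTU could be raised like this:

ISN-router# configure terminal
ISN-router(config)# interface ethernet 1/1
ISN-router(config-if)# mtu 9100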


OSPF Support Between Spine Switches and ISN Routers


  • Requirement for OSPF Peering

    • Spine switches need to establish OSPF (Open Shortest Path First) peering with ISN devices.

  • Flexibility in ISN Routing

    • While OSPF is needed between spine switches and ISN devices, you don't have to use OSPF throughout the entire ISN.

    • The ISN can utilize other technologies like MPLS or even the Internet for connectivity.
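A minimal sketch of enabling OSPF on a Nexus-based ISN device toward the spines; the process name, router ID, and area are assumptions, and the interface-level OSPF attachment is shown in the subinterface example further below:

ISN-router(config)# feature ospf
ISN-router(config)# router ospf ISN
ISN-router(config-router)# router-id 192.168.100.1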

Quality of Service (QoS) Considerations


  • Consistent QoS Policy Deployment

    • To ensure proper QoS treatment in the ISN, configure a QoS DSCP (Differentiated Services Code Point) marking policy on spine nodes.

    • This helps in consistent traffic prioritization across sites.

  • Configuring CoS-to-DSCP Mappings

    • Use the Cisco APIC at each site to set up these mappings.

    • Navigate to: Tenant > infra > Policies > Protocol > DSCP class-cos translation policy for L3 traffic.
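If the ISN devices are Nexus switches, a hedged sketch of classifying traffic on the DSCP values set by the spines might look like this; the class/policy names, the DSCP value (CS6, commonly used for control-plane traffic), and the qos-group are assumptions that must match your actual CoS-to-DSCP translation policy:

ISN-router(config)# class-map type qos match-any CM-ACI-CTRL
ISN-router(config-cmap-qos)# match dscp 48
ISN-router(config)# policy-map type qos PM-ACI-IN
ISN-router(config-pmap-qos)# class CM-ACI-CTRL
ISN-router(config-pmap-c-qos)# set qos-group 3
ISN-router(config)# interface ethernet 1/1
ISN-router(config-if)# service-policy type qos input PM-ACI-IN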


Spine Interfaces and VLAN Configuration

  • Fixed VLAN for Spine Interfaces

    • Spine interfaces connect to ISN devices using point-to-point routed subinterfaces.

    • These subinterfaces use a fixed VLAN ID of 4.
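A hedged sketch of the matching ISN-side point-to-point subinterface on a Nexus device, assuming Ethernet1/1 faces the spine; the IP address and OSPF process name are illustrative, while the dot1q tag of 4 matches the fixed VLAN used by the spine subinterfaces:

ISN-router(config)# interface ethernet 1/1
ISN-router(config-if)# no switchport
ISN-router(config-if)# interface ethernet 1/1.4
ISN-router(config-subif)# encapsulation dot1q 4
ISN-router(config-subif)# ip address 10.10.35.2/30
ISN-router(config-subif)# ip router ospf ISN area 0.0.0.0
ISN-router(config-subif)# ip ospf network point-to-point
ISN-router(config-subif)# no shutdown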


Handling BUM Traffic in ACI MultiSite Deployments


  • No Need for Multicast Support in ISN

    • In ACI MultiSite setups, the ISN doesn't need to support multicast for BUM (Broadcast, Unknown Unicast, and Multicast) traffic.

  • Use of Headend Replication

    • ACI uses headend (ingress) replication for flood traffic: the spine nodes at the source site create one copy of each BUM packet for every remote site where the bridge domain is stretched.

    • Because these copies are sent as unicast packets toward the remote sites, the ISN requirements stay simple compared with multicast-based approaches.

  • Simpler Than Multi-Pod Deployments

    • Unlike ACI Multi-Pod deployments, which require Bidirectional PIM (Protocol Independent Multicast) support in the interpod network, Multi-Site places no multicast requirement on the ISN.


Spine Switch Hardware Requirements

  • Use of Cisco Nexus EX or Newer Platforms

    • Spine switches should be from the Cisco Nexus EX series or newer.

    • These models can perform namespace translation at line rate, avoiding performance issues during inter-site communication.


Back-to-Back Spine Topology Support

  • Supported from Cisco ACI Release 3.2(1)

    • Starting with this release, you can directly connect spine switches from two sites in a back-to-back topology.

    • This setup provides an alternative connectivity option between sites.


ISN Control Plane in Cisco ACI Multi-Site


Purpose of OSPF in ISN:

  • OSPF (Open Shortest Path First) is used to exchange routing information between spine switches at different sites.

  • It facilitates communication for specific IP addresses defined on the spine switches.


Key Components


  1. BGP-EVPN Router-ID (EVPN-RID):

    • A unique IP address assigned to each spine node within a fabric.

    • Used to establish MP-BGP EVPN (Multiprotocol BGP Ethernet VPN) connections with spine nodes in remote sites.

    • The same EVPN-RID is used for both Multi-Pod and Multi-Site configurations within the same site.

  2. Overlay Unicast TEP (O-UTEP):

    • An anycast IP address shared by all spine nodes in a pod at the same site.

    • Used as the source and destination for unicast VXLAN data-plane traffic.

    • Facilitates unicast communication between sites.

  3. Overlay Multicast TEP (O-MTEP):

    • An anycast IP address shared by all spine nodes at the same site.

    • Used for headend replication of BUM (Broadcast, Unknown Unicast, Multicast) traffic.

    • Traffic originates from the local O-UTEP and is sent to the O-MTEP of remote sites where the bridge domain is extended.

    • Assigned uniquely per site.
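Once deployed, these addresses appear as additional loopback interfaces in the overlay-1 VRF on the spine nodes. A quick way to confirm they are present (spine name follows the examples used later in this post):

pod35-spine1# show ip interface brief vrf overlay-1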


Important Notes


  • Global Routability:

    • The EVPN-RID, O-UTEP, and O-MTEP IP addresses must be globally routable.

    • These are the only prefixes that need to be exchanged between sites to enable the intersite MP-BGP EVPN control plane and VXLAN data plane.


  • TEP Pool Prefixes:

    • Each site uses its own TEP (Tunnel Endpoint) pool (e.g., TEP pool 1 for Site 1, TEP pool 2 for Site 2).

    • TEP pool prefixes do not need to be exchanged between sites for intersite communication.

    • Recommendation: Do not assign overlapping TEP pools across different sites to prepare for future features that might require TEP pool prefix exchanges.

    • Best Practice: Filter out TEP pool prefixes on the first ISN device to prevent them from entering the ISN network and potentially conflicting with existing backbone address spaces.
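A hedged sketch of such filtering, assuming the first ISN device redistributes the OSPF routes learned from the spines into a BGP-based core (AS 65000) and that 10.0.0.0/16 is a site TEP pool; all names, AS numbers, and prefixes are illustrative:

ISN-router(config)# ip prefix-list ACI-TEP-POOLS seq 5 permit 10.0.0.0/16 le 32
ISN-router(config)# route-map OSPF-TO-CORE deny 10
ISN-router(config-route-map)# match ip address prefix-list ACI-TEP-POOLS
ISN-router(config)# route-map OSPF-TO-CORE permit 20
ISN-router(config)# router bgp 65000
ISN-router(config-router)# address-family ipv4 unicast
ISN-router(config-router-af)# redistribute ospf ISN route-map OSPF-TO-CORE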


Establishing MP-BGP EVPN Adjacencies

  • Spine nodes in different fabrics use the EVPN-RID addresses to establish MP-BGP EVPN adjacencies.

  • Both MP-IBGP (Multiprotocol Internal BGP) and MP-EBGP (Multiprotocol External BGP) sessions are supported.

    • Depends on the BGP Autonomous System (AS) configuration of each site.


BGP Session Deployment Options


  1. Using MP-EBGP (External BGP):

    • Full Mesh Required: Each spine switch connected to the ISN must establish EVPN peerings with all remote spine switches.

    • Suitable when sites are in different BGP AS numbers.

  2. Using MP-IBGP (Internal BGP):

    • Options:

      • Full Mesh: All spine switches peer with each other.

      • Route Reflectors: Introduce route-reflector nodes to simplify the peering topology.

        • Route reflectors should be placed in different sites for resiliency.

        • They peer with each other and with all spine nodes.


Intersite L3Out Feature


  • Available from APIC Release 4.2(1):

    • Allows external routes learned from L3Outs to be exchanged between sites.

    • Enhances intersite communication by sharing more routing information.

    • Known as Intersite L3Out.


Configuring Intersite Connectivity Using Cisco Nexus Dashboard and Orchestrator


Step-by-Step Guide

  1. Add Sites in Cisco Nexus Dashboard (ND):

    • Navigate to Sites in the left menu.

    • Click Actions > Add Site.

    • Select Site Type: ACI or Cloud ACI.

    • Enter the Cisco APIC's IP address, user credentials, and a unique Site ID.

  2. Automatic Data Retrieval:

    • ND automatically imports spine switch data from the registered site.

    • Populates the BGP AS number with the site's ACI MP-BGP AS number.

  3. Access Nexus Dashboard Orchestrator (NDO):

    • From ND's Services page, open Nexus Dashboard Orchestrator.

    • Automatic login using ND user credentials.

  4. Manage Sites in NDO:

    • Go to Infrastructure > Sites.

    • Change the State from Unmanaged to Managed for each fabric you want NDO to manage.

  5. Configure Fabric Connectivity Infrastructure:

    • In NDO, select Infrastructure > Infra Configuration.

    • Click Configure Infra to start the setup.

  6. Set General BGP Settings:

    • In Fabric Connectivity Infra, click General Settings.

    • Specify Control Plane BGP parameters:

      • BGP Peering Type: Choose Full-Mesh or Route Reflector.

      • Keepalive Interval: Set in seconds.

      • Hold Interval: Set in seconds.

      • Stale Interval: Set in seconds.

      • Other relevant settings as needed.

  7. Configure Specific Site Settings:

    • In Fabric Connectivity Infra, click on the specific site to access configuration levels:

      • Site Level

      • Pod Level

      • Spine Level

    At Site Level:

    • Enable Multi-Site:

      • Turn on the ACI Multi-Site option.

    • Overlay Multicast TEP (O-MTEP):

      • Enter the O-MTEP IP address for the site.

    • BGP Autonomous System Number:

      • Enter or modify the site's BGP AS number.

    • External Router Domain:

      • Select an external router domain previously created in the APIC UI (used for standard L3Outs).

    • Underlay Configuration:

      • Set OSPF Area ID.

      • Choose OSPF Area Type.

      • Define OSPF Policies.

    At Pod Level:

    • Overlay Unicast TEP (O-UTEP):

      • Enter the O-UTEP IP address for the pod.

    At Spine Level:

    • Configure OSPF Ports:

      • Click Add Port to set up OSPF connections towards the ISN.

    • Enable BGP Peering (Optional):

      • Turn on BGP Peering.

      • Enter the EVPN-RID IP address for the spine switch.

      • The spine will attempt to peer with other spines that have BGP peering enabled.

    • Set as Route Reflector (Optional):

      • Enable Spine is Route Reflector if the spine acts as a BGP route reflector.

  8. Deploy Configuration:

    • Click Deploy to apply all configurations.
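After deployment, a quick sanity check on each spine confirms that the OSPF underlay and the MP-BGP EVPN overlay sessions are up; the same commands are shown with sample output in the verification section later in this post:

pod35-spine1# show ip ospf neighbors vrf overlay-1
pod35-spine1# show bgp l2vpn evpn summary vrf overlay-1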




Summary


By following these steps, you set up the ISN control plane for Cisco ACI Multi-Site deployments:

  • Establish OSPF routing between spine switches and the ISN.

  • Configure BGP-EVPN adjacencies using EVPN-RID addresses.

  • Ensure that key IP addresses (EVPN-RID, O-UTEP, O-MTEP) are globally routable and properly configured.

  • Manage sites and configurations using Cisco Nexus Dashboard and Nexus Dashboard Orchestrator.

  • Customize settings at the site, pod, and spine levels for precise control over the network behavior.

  • Utilize features like Intersite L3Out to enhance intersite routing capabilities.


Best Practices:


  • Avoid Overlapping TEP Pools:

    • Assign unique TEP pools to each site to prevent future conflicts.

  • Filter TEP Pool Prefixes:

    • Prevent TEP pool prefixes from entering the ISN to avoid address conflicts.

  • Place Route Reflectors Strategically:

    • Distribute route reflectors across different sites for better resiliency.

  • Consistent Configuration:

    • Ensure all spine switches and sites have consistent settings for smooth operation.



MultiSite Overlay Control Plane



Sequence of Events for Exchanging Host Information Across Sites:

  1. Endpoints Connect to Their Respective Sites:

    • EP1 connects to Site 1.

    • EP2 connects to Site 2.


  2. Local Learning of Endpoints:

    • Leaf nodes at each site detect their connected endpoints.

    • These leaf nodes send a COOP (Council of Oracle Protocol) message with the endpoint information to their local spine nodes.


  3. Spine Nodes Learn Local Endpoints:

    • Spine nodes at each site now know about the endpoints connected to their leaf nodes.

    • At this point, endpoint information is not shared between sites because there is no policy permitting communication between EP1 and EP2.


  4. Defining an Intersite Policy:

    • An intersite policy is created using the Cisco MultiSite Orchestrator.

    • This policy is pushed to and implemented at both sites.


  5. Exchanging Endpoint Information Across Sites:

    • The intersite policy triggers Type-2 EVPN (Ethernet Virtual Private Network) updates between the sites.

    • Host route information for EP1 and EP2 is exchanged across sites.

    • This information is associated with the O-UTEP (Overlay Unicast Tunnel Endpoint) address, which uniquely identifies each site.


  6. Moving Endpoints Within the Same Site:

    • If you move an endpoint (e.g., EP1) between leaf nodes within Site 1:

      • Spine nodes do not generate new EVPN updates because the endpoint remains within the same site.

      • The O-UTEP address stays the same, so no intersite update is needed.


  7. Moving Endpoints to a Different Site:

    • If an endpoint moves from Site 1 to Site 2:

      • Spine nodes will generate new EVPN updates to reflect the endpoint's new location.

      • The endpoint information is now associated with the O-UTEP address of the new site.


  8. Synchronizing EVPN Information Locally:

    • The received MP-BGP EVPN (Multiprotocol Border Gateway Protocol Ethernet VPN) information is shared with other local spine nodes that aren't directly peered over BGP.

    • This synchronization is done using COOP within the site.
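To see this in practice, an endpoint can be looked up on a spine in both the COOP database and the received EVPN routes; the IP address below is purely illustrative:

pod35-spine1# show coop internal info ip-db | grep 192.168.10.11
pod35-spine1# show bgp l2vpn evpn 192.168.10.11 vrf overlay-1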


Key Points to Remember:


  • No Endpoint Sharing Without Policy:

    • Endpoint information isn't shared between sites unless an intersite policy allows it.


  • Role of Intersite Policy:

    • The policy enables the exchange of endpoint information by triggering EVPN updates.


  • O-UTEP Addresses:

    • Each site has a unique O-UTEP address.

    • Endpoint information is always linked to the O-UTEP of the site where it's located.


  • Efficiency in Updates:

    • Moving endpoints within the same site doesn't trigger intersite updates, reducing unnecessary control-plane traffic.


  • COOP Synchronization:

    • Ensures all spine nodes within a site have consistent endpoint information, even if they're not directly peered via BGP.


MultiSite Overlay Data Plane


1. Control Plane Using MP-BGP EVPN

  • Purpose of MP-BGP EVPN:

    • Cisco ACI MultiSite uses Multiprotocol BGP Ethernet VPN (MP-BGP EVPN) as the control plane protocol.

    • Allows spine switches in different sites to exchange endpoint information.

    • Enables east-west communication between separate fabrics (sites).


2. Data Plane Using VXLAN

  • Role of VXLAN:

    • After exchanging endpoint information, VXLAN (Virtual Extensible LAN) is used as the data plane.

    • Facilitates Layer 2 and Layer 3 communication between sites.

    • Ensures seamless connectivity for endpoints across different locations.


3. Conditions for Sharing Endpoint Information Across Sites


  • Endpoint Information Sharing:

    • Endpoints are shared across sites only when configured in the Nexus Dashboard Orchestrator (NDO).

    • Two main conditions must be met for endpoints to communicate across sites:

    a. Endpoint in a Stretched EPG:

    • The endpoint belongs to an EPG (Endpoint Group) that is stretched across multiple sites.

    • Stretched EPGs are configured to exist identically in more than one site.

    b. Endpoint in Non-Stretched EPG with a Contract:

    • The endpoint is in a non-stretched EPG.

    • There is a contract in place that allows communication with an EPG in another site.

    • Contracts define the policies that permit or deny traffic between EPGs.

4. Sharing of Endpoints with IP Addresses


  • Automatic Sharing via MP-BGP EVPN:

    • Endpoints that have IP addresses are shared across sites when the above conditions are met.

    • Shared using the MP-BGP EVPN control plane.

    • Enables intersite communication for endpoints with Layer 3 identities.


5. Handling Endpoints Without IP Addresses


  • Default Behavior:

    • Endpoints without IP addresses (e.g., devices that only have a MAC address) are not shared across sites by default.

    • These are typically Layer 2 endpoints that do not participate in IP communication.

  • Enabling Layer 2 Stretch:

    • To share these Layer 2 endpoints across sites, you must enable Layer 2 Stretch in the NDO.

    • Layer 2 Stretch allows the same subnet to span multiple sites.

    • Facilitates communication for devices that rely solely on Layer 2 connectivity.




Summary:


  • Cisco ACI MultiSite leverages MP-BGP EVPN for control plane functions and VXLAN for the data plane.

  • Endpoint information is only shared across sites when specific conditions are met and configured in the Nexus Dashboard Orchestrator.

  • Endpoints with IP addresses are shared when they belong to stretched EPGs or have contracts permitting intersite communication.

  • Endpoints without IP addresses require Layer 2 Stretch to be enabled to be shared across sites.



Detailed Steps of How MP-BGP EVPN Shares Endpoint Information Across Sites:




  • Endpoints Connect to Their Respective Sites:

    • Endpoint EP1 connects to Site 1.

    • Endpoint EP2 connects to Site 2.


  • Local Learning by Leaf Nodes:

    • The leaf nodes at each site detect their connected endpoints (EP1 or EP2).

    • They send the endpoint information to their local spine nodes.


  • Spine Nodes Know Local Endpoints but Don't Share Them Yet:

    • Spine nodes at each site now have information about their locally connected endpoints.

    • However, this information is not exchanged between sites because there is no policy allowing communication between the EPGs (Endpoint Groups) of EP1 and EP2.


  • Defining an Intersite Policy:

    • An intersite policy is created using the Cisco MultiSite Orchestrator.

    • This policy is deployed (pushed) to both Site 1 and Site 2.

    • The policy allows communication between the EPGs of EP1 and EP2.


  • Triggering Type-2 EVPN Updates:

    • The intersite policy triggers Type-2 EVPN (Ethernet VPN) updates between the sites.

    • These updates exchange host route information for EP1 and EP2 across sites.

    • The endpoint information is linked to the O-UTEP (Overlay Unicast Tunnel Endpoint) address that identifies each site.


  • Endpoint Movement Within the Same Site Doesn't Trigger New Updates:

    • If you move an endpoint (e.g., EP1) to a different leaf node within the same site:

      • The spine nodes do not generate new EVPN updates.

      • This is because the endpoint is still associated with the same O-UTEP address of that site.

    • New EVPN updates are only generated if the endpoint moves to a different site.


MultiSite Overlay Data Plane - BUM Traffic Between Sites


  1. VXLAN Tunnels Create Logical Layer 2 Domains Across Sites:

    • VXLAN (Virtual Extensible LAN) tunnels are established between endpoints located in different sites.

    • These tunnels traverse the Inter-Site Network (ISN), which may consist of multiple Layer 3 hops.

    • The tunnels allow endpoints to communicate as if they are on the same Layer 2 network, despite being physically apart.


  2. End-to-End Layer 2 BUM Traffic Communication:

    • Endpoints can send Broadcast, Unknown unicast, and Multicast (BUM) frames to other endpoints on the same Layer 2 segment.

    • This communication occurs regardless of the endpoints' physical locations.


  3. No Need for Multicast Support in the ISN:

    • Cisco ACI MultiSite uses ingress replication and headend replication for handling BUM frames.

    • Because of these methods, the ISN does not need to support multicast routing.


  4. Ingress Replication at the Source Site:

    • The spine node at the source site copies the BUM frame.

    • It creates one copy for each destination site where the Layer 2 domain is extended.

    • These copies are sent towards the Overlay Multicast Tunnel Endpoint (O-MTEP) of each site.


  5. BUM Frames Treated as Unicast Traffic Over ISN:

    • The BUM frame copies are encapsulated with the O-MTEP address, which is a unicast IP.

    • As a result, the ISN handles these packets as unicast traffic, simplifying network requirements.


  6. Forwarding at the Destination Site:

    • When the BUM frame reaches the destination site, the receiving spine node forwards it onto the site's local multicast trees.

    • The frame is then flooded within the site to reach all relevant endpoints.


  7. Enabling Intersite BUM Traffic:

    • To allow BUM traffic between sites, you must enable "Intersite BUM Traffic Allow" on the bridge domain.

    • This setting permits the forwarding of BUM frames across different sites using the methods above.


Enabling Flood Traffic Across Sites:


  • When you have a bridge domain that is stretched across multiple sites and you have either:

    • ARP Flooding enabled, or

    • Layer 2 Unknown Unicast set to flood,

  • You need to enable Intersite BUM Traffic Allow on the bridge domain.

  • This setting ensures that flood traffic can be sent to other sites as well.



Types of Layer 2 BUM Traffic Forwarded Across Sites:


  1. Layer 2 Broadcast Frames (B):

    • These frames are always forwarded across sites when Intersite BUM Traffic Allow is enabled.

    • Exception – ARP Requests:

      • ARP requests are only flooded if ARP Flooding is enabled in the bridge domain, regardless of the MultiSite configuration.

      • So, to flood ARP requests across sites, make sure ARP Flooding is turned on.

  2. Layer 2 Unknown Unicast Frames (U):

    • These frames are flooded only when Layer 2 Unknown Unicast is set to flood in the bridge domain settings.

    • This behavior is the same whether you're using MultiSite or not.

    • When Intersite BUM Traffic Allow is enabled, you can adjust the flooding or proxy mode for unknown unicast traffic through the Nexus Dashboard Orchestrator (NDO).


  3. Layer 2 Multicast Frames (M):

    • The forwarding behavior is the same for:

      • Intra-bridge-domain Layer 3 multicast frames:

        • Source and receivers might be in the same or different IP subnets but are part of the same bridge domain.

      • True Layer 2 multicast frames:

        • Packets where the destination MAC address is multicast and there's no IP header.


    • In both cases, when the bridge domain is stretched across sites and Intersite BUM Traffic Allow is enabled, this multicast traffic is forwarded to other sites.


Visualizing the Traffic Flow:

  • Layer 2 BUM Traffic Flow Across Sites:

    • Imagine a diagram showing BUM traffic (Broadcast, Unknown unicast, Multicast) moving from one site to another over a stretched bridge domain.

    • With Intersite BUM Traffic Allow enabled, this traffic flows seamlessly across sites.


Key Takeaways:


  • Enable Intersite BUM Traffic Allow:

    • To allow flood traffic to be sent between sites in a stretched bridge domain, you must enable this setting.


  • Control ARP Flooding Separately:

    • ARP requests require ARP Flooding to be enabled in the bridge domain to be flooded across sites.


  • Adjust Unknown Unicast Behavior:

    • You can manage how unknown unicast traffic is handled (flooded or proxied) via the bridge domain settings and NDO when Intersite BUM Traffic Allow is on.


  • Multicast Traffic Is Forwarded:

    • Both Layer 2 and certain Layer 3 multicast frames are forwarded across sites when the necessary settings are enabled.




Layer 2 BUM Frame Flow Across Sites:


  1. Endpoint Generates BUM Frame:

    • EP1, an endpoint within a specific bridge domain, creates a Layer 2 BUM (Broadcast, Unknown unicast, Multicast) frame.


  2. Leaf Node Decides to Flood Traffic:

    • Based on the type of BUM frame and the bridge domain settings, the leaf node may need to flood the traffic within the bridge domain.


  3. Encapsulation and Local Distribution:

    • The leaf node encapsulates the frame using VXLAN.

    • It sends the frame to the Group IP Address Outer (GIPo) associated with the bridge domain.

    • The frame travels along multicast trees to reach all other leaf and spine nodes within the same site.


  4. Election of Designated Forwarder:

    • Among the spine nodes connected to the Inter-Site Network (ISN), one is elected as the designated forwarder for that bridge domain.

    • This election happens using the IS-IS protocol.


  5. Replicating BUM Frames to Remote Sites:

    • The designated forwarder replicates the BUM frame for each remote site where the bridge domain is stretched.

    • It sends these copies to the remote sites.


  6. Using O-MTEP and O-UTEP Addresses:

    • The destination IP address in the VXLAN-encapsulated packet is the Overlay Multicast Tunnel Endpoint (O-MTEP) of each remote site.

    • The source IP address is the anycast Overlay Unicast Tunnel Endpoint (O-UTEP) address of the local site.


  7. Processing at the Remote Site:

    • A spine node at the remote site receives the packet.

    • It translates the VNID (Virtual Network Identifier) in the header to the local VNID for that bridge domain.

    • The spine node then forwards the traffic within the site using local multicast trees.


  8. Flooding Within the Remote Site:

    • The traffic is forwarded to all spine and leaf nodes that have endpoints connected to the bridge domain in the remote site.


  9. Leaf Nodes Forward to Local Endpoints:

    • Leaf nodes use the VXLAN header to learn the location of EP1.

    • They forward the BUM frame to their local interfaces associated with the bridge domain.

    • This allows EP2 (an endpoint in the remote site) to receive the frame.


  10. Shared GIPo Addresses Across Bridge Domains:

    • Multiple bridge domains in the same site may share the same GIPo address.


  11. Unnecessary BUM Traffic Across Sites:

    • If Intersite BUM Traffic Allow is enabled for one bridge domain, BUM traffic for other bridge domains sharing the same GIPo may also be sent across sites.

    • This unnecessary traffic is dropped at the destination spine nodes but still consumes bandwidth.


  12. Increased Bandwidth Usage:

    • Sending extra BUM traffic can increase bandwidth utilization in the ISN.


  13. Assigning Unique GIPo Addresses:

    • To prevent this, when you enable Intersite BUM Traffic Allow from the Cisco Nexus Dashboard Orchestrator (NDO), the bridge domain is assigned a unique GIPo address from a separate multicast address range.


  14. Optimize WAN Bandwidth Flag:

    • This setting is controlled by the "Optimize WAN Bandwidth" flag in the NDO user interface.

    • The flag is enabled by default for bridge domains created in NDO.


  15. Manual Configuration for Imported Bridge Domains:

    • If a bridge domain is imported from an APIC, the Optimize WAN Bandwidth flag is disabled by default.

    • You need to manually enable it.

    • Enabling it will change the GIPo address, causing a brief outage (a few seconds) while the new address updates across all leaf nodes.


  16. Three Key Bridge Domain Configurations in ACI MultiSite:

    • Layer 2 Stretch:

      • Shares Layer 2 endpoint information across sites, in addition to Layer 3 endpoints.

    • Intersite BUM Traffic Allow:

      • Forwards BUM traffic across sites.

    • Optimize WAN Bandwidth:

      • Allocates a unique GIPo to the bridge domain from a reserved range.

      • Prevents unnecessary intersite BUM traffic due to multiple bridge domains sharing the same GIPo.


  17. Recommendation on Configuration Management:

    • Although you can change these MultiSite-specific configurations via the APIC at each site, it's recommended to manage them through the Nexus Dashboard Orchestrator (NDO).

    • This ensures consistency and that NDO has full visibility of the configurations.



Summary:


  • BUM Frame Flow Across Sites:

    • EP1 generates a BUM frame.

    • Leaf nodes decide to flood based on settings.

    • Frame is VXLAN-encapsulated and sent to local GIPo.

    • Designated forwarder spine sends copies to remote sites using O-MTEP addresses.

    • Remote spines translate VNIDs and flood within their site.

    • Leaf nodes deliver frames to local endpoints like EP2.


  • Avoiding Unnecessary Traffic:

    • Shared GIPo addresses can cause extra BUM traffic.

    • Assigning unique GIPo addresses per bridge domain optimizes bandwidth.

    • Use the Optimize WAN Bandwidth flag in NDO to manage this.


  • Best Practices:

    • Use NDO for configuration changes to maintain consistency.

    • Be cautious when enabling Optimize WAN Bandwidth on imported bridge domains due to potential brief outages.



In the APIC, the configurations are reflected (and could be modified) in the Advanced/Troubleshooting tab of the bridge domain.
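As a quick check from the APIC CLI, the GIPo and the Multi-Site flags of a bridge domain can also be read from the fvBD object; the bridge domain name BD1 is an assumption, and the exact attribute names can vary slightly by release:

apic1# moquery -c fvBD -f 'fv.BD.name=="BD1"' | egrep "name|bcastP|intersite"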



Unicast Communication Between Sites


1. ARP Exchange is Necessary for Communication:

  • Purpose of ARP:

    • Before devices (endpoints) in different sites can communicate within the same subnet, they must complete an ARP (Address Resolution Protocol) exchange.

    • ARP resolves IP addresses to MAC addresses, enabling devices to locate each other on the network.


2. ARP Handling Depends on Bridge Domain Settings:

  • In Cisco ACI, how ARP requests are managed is determined by the ARP Flooding setting within a bridge domain.

  • There are two scenarios based on whether ARP flooding is enabled or disabled.


Scenario 1: ARP Flooding is Enabled

  • Behavior:

    • ARP requests are treated as broadcast traffic.

    • These requests are flooded throughout the bridge domain, including across different sites.

  • Requirement:

    • You must enable Intersite BUM Traffic Allow in the same bridge domain.

      • BUM stands for Broadcast, Unknown unicast, and Multicast traffic.

      • This setting allows broadcast traffic (like ARP requests) to be sent between sites.

  • Default Configuration:

    • When using the Nexus Dashboard Orchestrator (NDO) to create stretched bridge domains, ARP Flooding and Intersite BUM Traffic Allow are enabled by default.

    • This ensures that ARP requests are properly flooded across all sites.


Scenario 2: ARP Flooding is Disabled

  • Behavior:

    • ARP requests are handled as routed unicast packets instead of broadcasts.

    • The requests are sent directly to the destination without flooding the network.

  • Impact on Intersite BUM Traffic:

    • Since ARP requests are unicast, the Intersite BUM Traffic Allow setting does not affect them.

    • You do not need to enable Intersite BUM Traffic Allow for ARP requests to reach other sites.

  • Default Configuration:

    • When using NDO to create stretched bridge domains with Intersite BUM Traffic Allow disabled, ARP Flooding is also disabled by default.

    • This configuration reduces unnecessary broadcast traffic across sites.


Summary of Key Points:


  • ARP Exchange is Essential: Endpoints must complete an ARP exchange to communicate within the same subnet across sites.


  • Bridge Domain Settings Matter: The handling of ARP requests depends on whether ARP Flooding is enabled or disabled in the bridge domain.


  • Intersite BUM Traffic Allow:

    • Enabled ARP Flooding: Requires Intersite BUM Traffic Allow to be enabled to flood ARP requests across sites.

    • Disabled ARP Flooding: Intersite BUM Traffic Allow is irrelevant for ARP requests since they are sent as unicast.


  • Default Settings in NDO:

    • When Intersite BUM Traffic Allow is Enabled: ARP Flooding is enabled by default.

    • When Intersite BUM Traffic Allow is Disabled: ARP Flooding is disabled by default.


Explanation of ARP Request Flow Between Sites with ARP Flooding Disabled





Scenario: Endpoint EP1 in Site 1 wants to communicate with Endpoint EP2 in Site 2, and ARP flooding is disabled in the bridge domain.


Sequence of Steps:


  1. EP1 Generates an ARP Request:

    • EP1 needs to find the MAC address of EP2 to communicate.

    • It creates an ARP request asking, "Who has the IP address of EP2?"


  2. Local Leaf Node Inspects the ARP Request:

    • The leaf switch (the first network device EP1 connects to) receives the ARP request.

    • Since ARP flooding is disabled, the leaf switch does not broadcast the request.

    • Instead, it examines the ARP request to see the target IP address (EP2's IP).


  3. Checking for EP2's Information:

    • The leaf switch checks its own records to see if it already knows EP2's IP and MAC address.

    • If EP2's information is unknown, the leaf switch needs assistance to locate EP2.


  4. Sending the ARP Request to Local Spine Nodes:

    • The leaf switch encapsulates the ARP request into a VXLAN packet.

    • It sends this packet to the Proxy A anycast TEP address, which is shared by all local spine switches.

    • This is done based on the known subnet information for EP1 and EP2.


  5. Local Spine Node Processes the ARP Request:

    • One of the local spine switches receives the encapsulated ARP request.

    • The spine switch checks its COOP (Council of Oracle Protocol) database to see if it knows about EP2's IP address.

    • The COOP database contains information about endpoints learned through the network.


  6. Determining If EP2 Is Known:

    • If EP2's IP address is known in the COOP database (meaning EP2 is not a "silent host"):

      • The spine switch knows which remote site EP2 is connected to.

      • It knows the remote O-UTEP B address (Overlay Unicast Tunnel Endpoint) that identifies Site 2.


  7. Forwarding the ARP Request to Site 2:

    • The spine switch encapsulates the ARP request again for transport across the Inter-Site Network (ISN).

    • It updates the source IP address in the VXLAN header to use the local site's O-UTEP A address.

    • The ARP request is sent over the ISN to the remote site (Site 2).


    Note on Silent Hosts:

    • If EP2's IP address is not known (EP2 is a "silent host" that hasn't communicated yet):

      • Starting from Cisco ACI Release 3.2(1), the "Intersite ARP Glean" feature allows the ARP request to still be forwarded to Site 2.

      • This ensures that silent hosts can receive ARP requests, reply, and become known in the network.


  8. Remote Spine Node in Site 2 Receives the ARP Request:

    • A spine switch in Site 2 receives the VXLAN-encapsulated ARP request.

    • It translates the identifiers in the packet (like VNID and class ID) to values used locally.

    • The spine switch then forwards the ARP request to the appropriate leaf switch where EP2 is connected.


  9. Leaf Node in Site 2 Processes the ARP Request:

    • The leaf switch in Site 2 receives the packet and removes the VXLAN encapsulation.

    • It learns about EP1's location and class ID for future reference.

    • Since ARP flooding is disabled in Site 2 as well, the leaf switch directly forwards the ARP request to EP2.


  10. EP2 Responds with an ARP Reply:

    • EP2 receives the ARP request and now knows that EP1 is trying to communicate.

    • EP2 sends an ARP reply back to EP1, providing its MAC address.

    • The ARP reply is sent as a unicast message directly to EP1.


  11. Delivery of the ARP Reply to EP1:

    • The ARP reply follows a similar path in reverse:

      • From EP2 to the leaf switch in Site 2.

      • Through the spine switches and across the ISN to Site 1.

      • Delivered to EP1 via the leaf switch in Site 1.


Result:


  • EP1 and EP2 have successfully exchanged ARP information.

  • They now know each other's IP and MAC addresses.

  • Direct communication between EP1 and EP2 can proceed over the network.
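At this point the remote endpoint should also appear in the local leaf's endpoint table, learned via the VXLAN tunnel toward the remote site's O-UTEP. A minimal check (leaf name and IP address are illustrative):

leaf101# show endpoint ip 192.168.10.12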


Key Points to Remember:


  • ARP Flooding Disabled:

    • ARP requests are not broadcasted but handled intelligently by the network devices.

    • This reduces unnecessary traffic on the network.


  • Role of Spine and Leaf Switches:

    • Leaf switches handle direct connections to endpoints and initial processing.

    • Spine switches act as central points for routing and forwarding ARP requests between sites.


  • Intersite ARP Glean Feature:

    • Helps in situations where the remote endpoint is a silent host.

    • Ensures ARP requests reach all potential endpoints, even if they haven't communicated before.



Explanation of ARP Reply Flow from EP2 in Site 2 to EP1 in Site 1


  • EP2 Sends ARP Reply:

    • EP2 (Endpoint 2) in Site 2 sends an ARP reply back to EP1 in Site 1.

  • Leaf Node in Site 2 Encapsulates the ARP Reply:

    • The local leaf switch receives the ARP reply from EP2.

    • It encapsulates the ARP reply using VXLAN.

    • The packet is sent towards the remote O-UTEP A address, which represents Site 1.

  • Spine Nodes Modify Source IP Address:

    • The spine switches in Site 2 process the encapsulated packet.

    • They rewrite the source IP address in the VXLAN header to use the local O-UTEP B address, identifying Site 2.

    • This ensures that the packet appears to come from Site 2 when it reaches Site 1.

  • Packet Reaches Spine Node in Site 1:

    • The VXLAN-encapsulated ARP reply arrives at a spine switch in Site 1.

    • The spine switch translates the original VNID (Virtual Network Identifier) and class ID from Site 2 to the values used in Site 1.

    • This makes the packet compatible with the local network settings.

  • Forwarding to Local Leaf Node in Site 1:

    • The spine switch sends the packet to the leaf switch connected to EP1.

  • Leaf Node Processes the Packet:

    • The leaf switch in Site 1 de-encapsulates the VXLAN packet.

    • It learns the class ID and site location information for EP2.

    • This updates the leaf's knowledge of where EP2 is located.

  • Delivery of ARP Reply to EP1:

    • The leaf switch forwards the ARP reply to EP1.

    • EP1 receives the ARP reply and now knows EP2's MAC address.

  • Completion of ARP Exchange:

    • With the ARP exchange complete, both endpoints have each other's IP and MAC addresses.

    • The leaf switches at both sites have full knowledge of the class ID and location of the remote endpoints.

  • Ongoing Two-Way Communication:

    • From now on, traffic between EP1 and EP2 flows smoothly in both directions.

    • Each leaf switch at a site encapsulates traffic and sends it towards the O-UTEP address of the destination site.

    • This ensures efficient communication between the endpoints across sites.

Key Points to Remember:

  • O-UTEP Addresses:

    • O-UTEP A: Represents Site 1.

    • O-UTEP B: Represents Site 2.

    • Used to route traffic between sites.

  • Spine and Leaf Switch Roles:

    • Leaf Switches: Connect directly to endpoints (EP1 and EP2) and handle encapsulation.

    • Spine Switches: Handle routing between sites and translate network identifiers.

  • VXLAN Encapsulation:

    • Used to transport packets between sites over the network.

    • Encapsulates original packets to be compatible with the underlying network infrastructure.

  • Class ID and VNID Translation:

    • Ensures that packets are properly identified and routed within each site's local network settings.

  • Efficient Cross-Site Communication:

    • After the initial ARP exchange, endpoints communicate directly without additional ARP requests.

    • Traffic is encapsulated and routed based on the known location of the destination endpoint.

By following these steps, EP1 and EP2 establish communication across different sites in a Cisco ACI Multi-Site environment, allowing for seamless interaction as if they were on the same local network.


Cisco ACI MultiSite Architecture Use Cases


Flexibility for Different Business Needs:

  • Adaptable Architecture:

    • Cisco ACI MultiSite can be configured in various ways to meet specific business requirements.

    • Different scenarios involve adjusting bridge domain settings to provide different connectivity options.


Layer 2 Connectivity Across Sites Without Flooding:


  1. Stretching Key Components Across Sites:

    • Tenant, VRF, Bridge Domains, and EPGs:

      • These elements are stretched across multiple sites.

      • This means they are configured identically in each site, allowing seamless operation.


  2. Localized BUM Flooding:

    • Layer 2 BUM Traffic:

      • BUM stands for Broadcast, Unknown unicast, and Multicast traffic.

      • In this use case, BUM flooding is kept local to each site.

      • BUM traffic is not forwarded between sites, reducing unnecessary intersite traffic.


  3. Stretched Contracts Between Sites:

    • Provider and Consumer Contracts:

      • Contracts define the communication policies between EPGs.

      • These contracts are stretched between sites.

      • This ensures consistent policy enforcement and allows permitted traffic between endpoints in different sites.


Benefits of This Configuration:


  • Efficient Use of Bandwidth:

    • By localizing BUM flooding, you minimize unnecessary traffic over the intersite network.

    • This optimizes bandwidth usage between sites.


  • Simplified Management:

    • Stretching tenants, VRFs, bridge domains, EPGs, and contracts provides a consistent configuration across sites.

    • Easier to manage and maintain policies across the entire network.


  • Enhanced Control:

    • Localizing BUM traffic allows better control over broadcast domains.

    • Reduces the potential for broadcast storms affecting multiple sites.


Summary:


  • Cisco ACI MultiSite offers flexible configurations to suit different needs.

  • Layer 2 Connectivity Without Flooding Across Sites allows for stretched networks while keeping BUM traffic local.

  • Stretched Contracts ensure that communication policies are consistent and enforced across all sites.



The chosen bridge domain in the Cisco NDO user interface should be configured as depicted in this figure:



Implementing IP Mobility Across Sites Without BUM Flooding


  1. Seamless Endpoint Relocation Without BUM Flooding:

    • You can move endpoints (devices or applications) between different sites without the need to flood Broadcast, Unknown unicast, and Multicast (BUM) traffic.

    • Even after relocating an endpoint to a new site, it can still communicate with other endpoints in the same or different IP subnets at the original site.


  2. Communication with Migrated Endpoints:

    • If a new endpoint at the original site wants to communicate with the migrated endpoint:

      • ARP Requests Can Reach the Migrated Endpoint:

        • This works even without BUM flooding.

        • Requires that ARP flooding is disabled on the bridge domain at each site.

        • The migrated endpoint must already be learned (recognized) at the new site.


  3. Discovering Unlearned Migrated Endpoints:

    • If the migrated endpoint isn't yet known at the new site:

      • The ARP Glean Process will activate across sites.

      • This process helps in discovering the endpoint so communication can occur.


  4. Isolating Network Issues to Individual Sites:

    • Problems like broadcast storms within a bridge domain are confined to that specific site.

    • Such issues do not impact other connected fabrics or sites.


  5. When IP Mobility Without Layer 2 Flooding is Needed:

    a. Disaster Recovery with Cold Migrations:

    • Moving an application from Fabric 1 to Fabric 2 after a failure or planned migration.

    • While you could change the application's IP address and update DNS records, it's often preferred to keep the same IP address for simplicity.

    • IP mobility allows the application to retain its original IP address in the new site.


    b. Business Continuity with Hot Migrations:

    • Temporarily moving workloads between sites without service interruption.

    • For example, using VMware vSphere vMotion for live migration of virtual machines.

    • IP mobility ensures that services continue to run smoothly during and after the migration.


Layer 2 Connectivity Across Sites with Flooding


  1. Traditional Layer 2 Stretching with BUM Flooding:

    • This Cisco ACI MultiSite design extends Layer 2 networks across multiple sites.

    • It includes the ability to flood BUM traffic between these sites.


  2. Stretched Network Components:

    • Tenant, VRF, Bridge Domains, EPGs, and Contracts are all stretched across sites.

    • This means they are configured identically in each location, allowing for consistent policies and configurations.


  3. Forwarding BUM Traffic Between Sites:

    • BUM traffic is forwarded across sites using headend replication.

    • Spine nodes replicate BUM frames and send them to each remote fabric where the bridge domain is stretched.

    • This ensures that all endpoints receive necessary broadcast and multicast traffic, maintaining seamless Layer 2 connectivity.



To stretch a bridge domain in the Cisco NDO user interface with flooding enabled, use the following option:



Reasons for Flooding BUM Traffic Across Sites:


  1. Deployment of the Same Application Hierarchy on All Sites:

    • Enables spreading workloads belonging to various EPGs (Endpoint Groups) across different fabrics.

    • Allows for the use of common and consistent policies across all sites.


  2. Active/Active High Availability Implementations Between Sites:

    • Provides continuous availability by running identical services simultaneously on multiple sites.

    • Enhances redundancy and load balancing across the network.


  3. Layer 2 Clustering:

    • Requires BUM (Broadcast, Unknown unicast, and Multicast) communication between cluster nodes.

    • Essential for cluster synchronization and heartbeat mechanisms.


  4. Live VM Migration:

    • Supports the movement of virtual machines between sites without downtime.

    • Necessitates Layer 2 connectivity to maintain VM network configurations during migration.



    Layer-3-Only Connectivity Across Sites:

    • Fundamental Requirement:

      • Ensure that only routed (Layer 3) communication is established across sites.

      • No Layer 2 extension or BUM flooding is allowed between sites.

    • Network Configuration:

      • Different bridge domains and IP subnets are defined separately at each site.

      • Layer 2 domains are confined within individual sites.

    • Communication Between EPGs:

      • In Cisco ACI, EPGs can communicate only after applying appropriate security policies using contracts.

      • Contracts define the permitted traffic and services between EPGs.

    • Types of Layer 3 Connectivity Across Sites:

      1. Intra-VRF Communication:

        • Communication occurs within the same VRF (Virtual Routing and Forwarding) instance across sites.

        • Allows for seamless routing of traffic between EPGs that are part of the same VRF.

      2. Inter-VRF Communication:

        • Involves communication between different VRF instances.

        • Requires additional configurations like route leaking or export/import policies to enable traffic flow between separate VRFs.

Summary:


  • Flooding BUM Traffic Across Sites is necessary for scenarios that require seamless Layer 2 connectivity and services like clustering, live migration, and active/active high availability.

  • Layer-3-Only Connectivity focuses on routing traffic between sites without extending Layer 2 domains, enhancing security and reducing unnecessary traffic.

  • Proper Security Policies and Contracts are essential in Cisco ACI to control and permit communication between EPGs, whether within the same VRF or across different VRFs.


For both options, the chosen bridge domain in the Cisco NDO user interface should be configured as depicted in this figure:



Layer 3 Intra-VRF Communication Across Sites


Overview:


  • Intersite Communication Within the Same VRF:

    • Enables endpoints connected to different bridge domains but part of the same stretched VRF (within the same tenant) to communicate across sites.


  • Non-Stretched EPGs and Contracts:

    • You can manage non-stretched EPGs (Endpoint Groups) across sites.

    • Contracts between these EPGs are used to control communication.


  • Use of MP-BGP EVPN:

    • MP-BGP EVPN (Multiprotocol BGP Ethernet VPN) allows the exchange of host routing information between sites.

    • Facilitates seamless intersite communication without Layer 2 extension.




Benefits of Using Cisco ACI MultiSite for Layer 3 Connectivity:


  1. No Need for Packet Re-classification at Destination Site:

    • Why It Matters:

      • The EPG classification (class-ID) is carried within the VXLAN header through the Inter-Site Network (ISN).

    • Benefit:

      • When a packet reaches the destination fabric, it retains its original classification.

      • Eliminates the need for additional configurations to re-classify packets.

    • Comparison with L3Outs Without MultiSite:

      • Without ACI MultiSite, packets are de-encapsulated when leaving the source fabric.

      • The destination fabric must re-classify packets upon entry via the L3Out, requiring extra configuration.


  2. Simplified Contract Configuration:

    • Why It Matters:

      • Contracts define communication policies between EPGs.

    • Benefit:

      • Contracts can be stretched across sites, allowing direct configuration between source and destination EPGs as if they are in the same fabric.

      • Simplifies policy management and maintenance.

    • Comparison with L3Outs Without MultiSite:

      • Separate contracts are needed between the source EPG and L3Out in the source fabric.

      • Additional contracts are required between the destination EPG and L3Out in the destination fabric.

      • Increases complexity and administrative overhead.


  3. Easier Use of the Same IP Subnet Across Fabrics:

    • Why It Matters:

      • Sometimes, you need to use the same IP subnet in multiple sites.

    • Benefit:

      • With ACI MultiSite, you can easily use the same IP subnet on different fabrics.

      • Requires only simple IP connectivity using addresses like O-UTEP (Overlay Unicast Tunnel Endpoint) and O-MTEP (Overlay Multicast Tunnel Endpoint).

    • Comparison with L3Outs Without MultiSite:

      • Requires deploying additional Layer 2 Data Center Interconnect (DCI) technologies in the external network.

      • Adds complexity and potential costs.


  4. Simpler Extension of Multiple VRFs Across Fabrics:

    • Why It Matters:

      • Extending VRFs across sites allows for consistent network policies and segmentation.

    • Benefit:

      • ACI MultiSite simplifies extending multiple VRFs across fabrics.

      • Only requires simple IP connectivity with a few IP addresses (O-UTEP, O-MTEP).

      • Easier to maintain and scale.

    • Comparison with L3Outs Without MultiSite:

      • The external network between L3Outs must support VRF separation or L3VPN.

      • Requires maintaining routing protocols and databases between sites.

      • Increases complexity and potential for misconfigurations.


Summary:


  • ACI MultiSite Enhances Layer 3 Connectivity:

    • Provides efficient intersite communication within the same VRF.

    • Reduces complexity in network configuration and management.

  • Key Advantages Over Traditional L3Out Connections:

    • Maintains EPG Classification:

      • No need for re-classification at the destination.


    • Simplifies Contracts:

      • Directly apply contracts between EPGs across sites.


    • Facilitates IP Subnet Usage:

      • Use the same IP subnets without additional Layer 2 technologies.


    • Eases VRF Extension:

      • Simplifies extending and managing multiple VRFs across sites.



Layer 3 Inter-VRF Communication Across Sites


1. Purpose and Scenario:

  • Shared Services Across Sites:

    • Allows you to create provider EPGs (Endpoint Groups) in one group of sites.

    • These EPGs offer shared services to consumer EPGs in another group of sites.



2. Key Concepts:

  • Different VRF Instances:

    • Source and destination bridge domains are in different VRFs.

    • VRFs may belong to the same tenant or different tenants.

  • Route Leaking:

    • Required to enable communication between different VRFs.

    • Achieved by creating a contract between the source and destination EPGs.

  • Global Scope Contracts:

    • The provider contract (e.g., Contract C2) must be set to global scope.

    • This allows it to be used between EPGs across different tenants.


3. Example Scenario:

  • Tenants and EPGs:

    • Tenant 1 has Web and App EPGs.

    • Tenant BigData provides a BigData EPG as a shared service.

  • Communication Setup:

    • Contract C2 (with global scope) is established between the EPGs.

    • VRF Route Leaking enables communication across VRFs.

4. Benefits:

  • Isolation and Security:

    • Maintains the isolation of tenants and VRFs.

    • Security policies remain intact.

  • Shared Services Support:

    • Supports shared services with non-overlapping and unique subnets.


5. Verification Steps:

  • Check BGP Node Role:


    pod35-spine1# moquery -d sys/bgp/inst | egrep "dn|.*Role"

    dn : sys/bgp/inst

    spineRole : msite-speaker


  • In BGP Configuration:


pod35-spine1# show bgp internal node-role

Node role : MSITE_SPEAKER



Verify BGP L2VPN EVPN and OSPF Sessions:


Check OSPF Neighbors:


pod35-spine1# show ip ospf neighbors vrf overlay-1

OSPF Process ID default VRF overlay-1

Total number of neighbors: 2

Neighbor ID Pri State Up Time Address Interface

10.10.35.100 1 FULL/ - 02:06:51 10.10.35.2 Eth2/5.37

10.10.35.100 1 FULL/ - 02:06:27 10.10.35.6 Eth2/6.38


Check BGP EVPN Summary:


pod35-spine1# show bgp l2vpn evpn summary vrf overlay-1

BGP summary information for VRF overlay-1, address family L2VPN EVPN

BGP router identifier 10.10.35.111, local AS number 135

BGP table version is 26, L2VPN EVPN config peers 1, capable peers 1

13 network entries and 9 paths using 1864 bytes of memory

BGP attribute entries [4/576], BGP AS path entries [1/6]

BGP community entries [0/0], BGP clusterlist entries [0/0]


Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

10.10.35.112 4 136 126 124 26 0 0 01:54:15 3


Note:

  • Host routes are exchanged only if there is a cross-site contract allowing communication between endpoints.


6. MP-BGP EVPN Usage:

  • Purpose:

    • MP-BGP EVPN is used to exchange endpoint information across sites.

  • Peering Support:

    • Supports both MP-iBGP (Internal BGP) and MP-EBGP (External BGP) peering.

  • Remote Host Routes:

    • EVPN Type-2 routes are associated with the remote site's Anycast DP-ETEP address.


Summary:


  • Layer 3 Inter-VRF Communication:

    • Enables sharing services between different VRFs and tenants.

    • Maintains isolation and security through proper configurations.

  • Key Configurations:

    • Use global scope contracts for shared services.

    • Implement route leaking to allow inter-VRF communication.

  • Verification and Monitoring:

    • Regularly verify BGP and OSPF sessions on spine nodes.

    • Use Cisco Nexus Dashboard for centralized management and monitoring.



Cisco Nexus Dashboard Overview:


  • Unified Management Platform:

    • A single interface to monitor and scale across different sites.


  • Supports Multiple Controllers:

    • Works with Cisco ACI fabric controllers and Cisco APIC.

    • Integrates with Cisco Nexus Dashboard Fabric Controller (NDFC).

    • Compatible with Cloud APIC in public cloud environments.


  • Benefits for DevOps:

    • Enhances application deployment for multicloud applications.

    • Facilitates Infrastructure-as-Code (IaC) integrations.



MSO TSHOOT


Node Unknown/Down status


[root@node1 ~]# docker node ls



Inspect a node using "docker node inspect <node> --pretty"


[root@node1 ~]# docker node inspect node2 --pretty



Check Docker Application Engine on failed node
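A minimal check, assuming root access on the failed node (standard systemd command, shown for illustration):

[root@node2 ~]# systemctl status docker.service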



Docker service restart

As the root user, run 'systemctl stop docker.service' followed by 'systemctl start docker.service'.


Remove a failed node from swarm cluster


Check the STATUS of one or more nodes with 'docker node ls'


[root@node1 ~]# docker node ls



Demote the failed node from manager to worker


[root@node1 ~]# docker node demote node2

Manager node2 demoted in the swarm.


Remove the failing node from swarm cluster


[root@node1 ~]# docker node rm node2

node2


Join a new node to the swarm cluster

Generate the join token with 'docker swarm join-token manager', then run the printed 'docker swarm join ...' command on the node being added.


[root@node2 ~]# docker swarm join-token manager



Docker Swarm Initialization



[root@node2 ~]# cd /opt/cisco/msc/builds/msc_2.2.b/prodha

[root@node2 ~]# ./msc_cfg_init.py


