There are three main ways to deploy firewalls in a Cisco ACI Multi-Pod environment:
Active-Standby Firewall Pair Stretched Across Pods
Deploy a single active firewall with a standby firewall across multiple pods.
Suitable for both incoming/outgoing (north-south) and internal (east-west) traffic.
Prevents asymmetric traffic paths that might cause the stateful firewall to drop connections.
May lead to inefficient traffic flow, as some data might hair-pin through the Inter-Pod Network (IPN), adding latency.
Requires careful planning of bandwidth between pods and consideration of potential latency.
Supported in both transparent and routed modes with separate Layer 3 (L3Out) connections to external networks.
Works with traditional border leaf nodes and GOLF routers for external connectivity.
Active-Active Firewall Cluster Stretched Across Pods
Available starting with Cisco ACI Release 3.2(4d).
Uses a cluster where all firewall nodes share the same MAC and IP addresses (known as "split spanned EtherChannel").
Appears as a single firewall to the network fabric.
Eliminates concerns about asymmetric traffic paths for all traffic types.
Automatically directs traffic to the firewall node handling each specific connection.
Independent Active-Standby Firewall Pair in Each Pod
Each pod has its own active-standby firewall pair without synchronization between pods.
Requires maintaining symmetric traffic flow through the firewalls since connection states aren't shared.
Achieved by deploying symmetric Policy-Based Redirect (PBR):
Recommended solution.
Define a PBR policy that includes multiple active service nodes.
Cisco Nexus 9000 Series switches apply the PBR policy to consistently choose the same firewall for both directions of a connection.
Before Cisco ACI Release 5.0, PBR required firewalls to be in routed mode.
From Release 5.0 onward, firewalls can also operate in inline/transparent mode (also called L1/L2 mode).
Can integrate with external networks using traditional border leaf nodes or GOLF nodes.
If symmetric PBR isn't possible (an alternative available only for north-south traffic):
Ensure incoming and outgoing traffic is symmetric and efficient.
Use detailed host-route advertisement to direct traffic to the correct pod where the destination resides.
Host-route advertisement is supported on regular L3Outs from Cisco ACI Release 4.0 onwards.
Prior to Release 4.0, only supported on GOLF L3Outs.
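The symmetric PBR idea above can be sketched in a few lines of Python. This is a conceptual model only (the hash the leaf switches actually use is different, and all names here are made up): by deriving the node choice from an order-independent view of the flow, both directions of a connection land on the same firewall.

```python
import hashlib

def pick_firewall(src_ip: str, dst_ip: str, nodes: list) -> str:
    """Pick a service node for a flow. Sorting the endpoint pair makes
    the choice symmetric: (A -> B) and (B -> A) hash identically, so
    both directions of a connection hit the same firewall node."""
    a, b = sorted([src_ip, dst_ip])
    digest = hashlib.sha256(f"{a},{b}".encode()).digest()
    return nodes[digest[0] % len(nodes)]

fws = ["fw1", "fw2", "fw3"]
fwd = pick_firewall("10.1.1.10", "192.0.2.50", fws)
rev = pick_firewall("192.0.2.50", "10.1.1.10", fws)
assert fwd == rev  # both directions land on the same node
```

Because connection state isn't shared between the independent firewall pairs, any scheme that breaks this symmetry would cause the return traffic to hit a firewall with no state for the connection.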
Best Practice Recommendation
It is recommended to configure external Layer 3 connections (L3Outs) on border leaf nodes rather than using the GOLF approach, as this is the most common and efficient deployment method.
Active-Standby Firewall Pair Across Pods
Common Setup:
Use two firewalls: one active and one standby.
Place the active firewall in one pod (e.g., Pod1) and the standby in another (e.g., Pod2).
Traffic Routing:
All external traffic (north-south) and internal traffic (east-west) must go through the pod with the active firewall.
Tenant-Specific Firewalls:
Deploy separate physical or virtual firewalls for each tenant (using Virtual Routing and Forwarding [VRF]).
Firewall Connections:
Virtual Port Channel (vPC): Connect each firewall node to two Cisco ACI leaf nodes for redundancy.
Local Port Channel: Connect each firewall node to a single leaf node if not using vPC.
Configuration Options:
Different setups depend on how you configure the firewalls.
Consider whether the firewall is handling north-south traffic (external) or east-west traffic (internal).
Use Cases:
North-South Traffic: Protects communication between your network and external networks.
East-West Traffic: Protects communication within your internal network between different devices or services.
This setup ensures reliable firewall protection by keeping a standby firewall ready in another pod, and it allows flexibility in how firewalls are connected and used for different types of network traffic.
Different firewall configurations require different considerations, which the following sections explain for:
North-south traffic (entering or leaving the data center)
East-west traffic (within the data center)
Option 1: Transparent Firewall
North-South Perimeter Firewall Integration
This setup manages traffic between the external network and the web servers (Web EPG).
To route traffic through the transparent firewall, a "bridge domain sandwich" design is used (see Figure 4).
The default gateway for web servers is:
Deployed in the Cisco ACI fabric as a distributed anycast gateway.
Located in the bridge domain where the firewall's external interface connects.
Different from the bridge domain of the web servers.
This configuration ensures all external-bound traffic passes through the transparent firewall.
You can choose to use any service graph model or none at all.
Traffic is directed through the active firewall based on Layer 2 lookups by Cisco ACI leaf nodes for:
Traffic from servers to external destinations (south-to-north)
Traffic from external sources to servers (north-to-south)
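The Layer 2 steering described above can be shown with a tiny sketch (hypothetical MAC addresses and port names): in the web servers' bridge domain, the anycast gateway MAC is learned via the port facing the firewall's inside interface, so any server frame addressed to the gateway is switched through the transparent firewall first.

```python
# Hypothetical leaf MAC table in the "bridge domain sandwich":
# the gateway MAC is reachable only through the firewall-facing port.
mac_table = {
    "00:22:bd:f8:19:ff": "eth1/10",  # anycast gateway MAC, behind firewall
    "00:50:56:aa:01:01": "eth1/20",  # Web VM1
}

def l2_forward(dst_mac: str) -> str:
    # Known MACs follow the table; unknown unicast would be flooded.
    return mac_table.get(dst_mac, "flood")

# A web server sending to its default gateway is steered toward the
# firewall, never directly to the border leaf nodes.
assert l2_forward("00:22:bd:f8:19:ff") == "eth1/10"
```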
Note:
A managed-mode service graph requires:
A device package.
An L4-L7 logical device with one or more actual devices.
The L4-L7 device is managed via a single management IP, allowing the APIC to push configurations.
If the active and standby firewalls synchronize configurations (as they usually do), you can use a managed-mode service graph.
When using traditional L3Out for external connections:
The web server subnet is advertised through border leaf nodes in different pods.
Incoming traffic might be directed to any pod with a local L3Out interface, regardless of the active firewall's location.
If the destination server and active firewall are in the same pod receiving incoming traffic:
Traffic doesn't unnecessarily cross the Inter-Pod Network (IPN) (refer to Figure 5).
Entities in Different Pods:
If service nodes and endpoints are connected in different pods, traffic will be rerouted (hair-pinned) across the Inter-Pod Network (IPN).
This rerouting happens so that traffic passes through the service node before reaching the web destination.
Worst-Case Scenario (Figure 6):
Traffic may bounce twice across the IPN to deliver external traffic to a web endpoint in Pod2.
Return Traffic Behavior:
By default, return traffic from the web endpoint uses the local L3Out connection in the pod where the active service node is located.
This is illustrated in Figure 7.
Improving Traffic Efficiency:
To reduce inefficient traffic patterns (like in Figure 6), introduce a "home site" for each service or application.
Home Site Definition:
The pod where the active service node is normally deployed (excluding failure situations).
Ensure that routing information sent to the WAN for an IP subnet (extended across pods using Cisco ACI Multi-Pod Layer 2 extensions) directs incoming traffic to the home site.
This optimization is shown in Figure 8.
Establishing Preferred Inbound Paths:
You can create preferred inbound paths in several ways, depending on your network setup.
The specific methods are not covered in this document.
Efficient Traffic Handling:
With this approach, traffic going to local endpoints (like Web VM1 in Figure 8) is handled efficiently.
In the worst-case scenario where the endpoint moves to a different site:
Traffic crosses the Inter-Pod Network (IPN) only once.
This avoids the earlier worst-case situation of multiple crossings.
Alternative Method – Enable Host-Route Advertisement:
You can enable host-route advertisement from the Cisco ACI fabric to send detailed routing information to the external network.
This feature is available on ACI border leaf nodes starting from ACI release 4.0.
For earlier versions, you need to use GOLF L3Outs (Layer 3 EVPN services for fabric WAN).
How This Scenario Works:
All web endpoint IP addresses are learned in the bridge domain connected to the external service node interface.
This is because the default gateway is defined there.
Therefore, host routes for all endpoints are advertised from the pod where the active service node is located.
This setup optimizes communication only with endpoints in the same pod (see Figure 9).
Issue with Endpoints in Different Pods:
If the destination endpoint is in a different pod than the active service node:
Traffic gets rerouted (hair-pinned) between the service node's internal interface and the endpoint (Figure 10).
In Summary:
Enabling host-route advertisement with a perimeter transparent firewall doesn't always optimize communication with the external network.
However, it prevents the worst-case scenario of double hair-pinning shown in Figure 6.
East-West Firewall Integration Use Case
Design Overview (Figure 11):
Similar to the north-south firewall design.
Consumer and provider endpoints are on the same IP subnet.
They can only communicate by passing through the transparent firewall.
This is enforced by the Layer 2 forwarding tables on Cisco ACI leaf nodes.
For example, the MAC addresses of endpoints in the web EPG are learned via the firewall's internal interface, and vice versa.
Service Graph Options:
You can use:
An unmanaged-mode service graph.
A managed-mode service graph.
No service graph at all.
Traffic Behavior:
All in the Same Pod (Figure 12):
If the source endpoint, destination endpoint, and active service node are all in the same pod:
Traffic stays within that pod.
Endpoints or Service Node in Different Pods (Figure 13):
If either the source or destination endpoint is in a different pod from the active service node:
Traffic must cross (hair-pin) the Inter-Pod Network (IPN).
Note:
This behavior applies regardless of whether the two EPGs are:
Part of separate Bridge Domains (BDs) under the same Virtual Routing and Forwarding instance (VRF).
Part of separate BDs under different VRFs.
Within the same tenant or across different tenants.
Option 2: Routed Firewall as Default Gateway for the Endpoints
Deployment Overview:
The Cisco ACI fabric provides Layer 2 connectivity.
Endpoints use the firewall as their default gateway.
The firewall routes traffic between different IP subnets and to external networks.
Firewall Connections:
Internal Interfaces:
Connected to the same Endpoint Groups (EPGs) and Bridge Domains (BDs) as the endpoints.
External Interface:
Can connect to a specific EPG/BD for Layer 2 connectivity to external routers.
Alternatively, the firewall can connect directly to external routers.
No L3Out Connections Needed:
The firewall is inserted into traffic paths based on Layer 2 lookups by ACI leaf nodes.
No Layer 3 Out (L3Out) connections are defined in the Cisco ACI fabric.
Traffic Paths Similar to Option 1:
North-south and east-west traffic flows are similar to those in Option 1.
The key difference is the absence of L3Out connections to external routers.
External routers connect to the same EPG/BD as the firewall's external interface.
Note on Traffic Hair-Pinning:
Traffic may still "hair-pin" across the Inter-Pod Network (IPN) based on endpoint locations and traffic entry points.
This occurs because there's only one active firewall path available.
Option 3: Routed Firewall with L3Out Peering with the Cisco ACI Fabric
Design Overview:
Firewalls connect to the ACI fabric using L3Out peering (see Figure 15).
Utilizes ACI's anycast gateway function within the bridge domain subnet.
The firewall acts as a Layer 3 hop at the front of the VRF instance.
Purpose of This Approach:
Internal VRF communication doesn't pass through the perimeter firewall.
The firewall enforces security policies on:
North-south traffic between VRF instances.
Traffic to and from the external Layer 3 domain.
Configuration Details:
L3Out Connections:
Established between the fabric and both the firewall's internal and external interfaces.
VRF "Sandwich" Setup:
Web subnet and firewall's internal interface belong to VRF2.
Firewall's external interface and WAN edge router interface belong to VRF1.
Service Graph Flexibility:
You can use any service graph model or none at all.
Traffic enforcement through the firewall is based on Layer 3 lookups by ACI leaf nodes.
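The VRF sandwich can be pictured as two routing tables with the firewall as the only hop between them. Below is a minimal sketch with hypothetical prefixes and interface names (real lookups use longest-prefix match, simplified here to exact match with a default fallback):

```python
# Hypothetical "VRF sandwich": VRF2 holds the web subnet and points its
# default route at the firewall's inside interface; VRF1 holds the
# firewall's outside interface and the WAN edge. Routed hops force all
# traffic between the two VRFs through the firewall.
vrf_routes = {
    "VRF2": {"10.1.1.0/24": "local",
             "0.0.0.0/0": "fw-inside 172.16.2.1"},
    "VRF1": {"172.16.1.0/24": "local",
             "10.1.1.0/24": "fw-outside 172.16.1.1",
             "0.0.0.0/0": "wan-edge 192.0.2.1"},
}

def next_hop(vrf: str, prefix: str) -> str:
    table = vrf_routes[vrf]
    return table.get(prefix, table["0.0.0.0/0"])  # longest-match elided

# Web-to-WAN traffic in VRF2 resolves to the firewall, not the WAN edge;
# inbound traffic in VRF1 toward the web subnet also goes via the firewall.
assert next_hop("VRF2", "198.51.100.0/24").startswith("fw-inside")
assert next_hop("VRF1", "10.1.1.0/24").startswith("fw-outside")
```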
Options for L3Out Connections:
Option 1: Separate L3Outs per pod, each with local leaf nodes connected to the firewall.
Option 2 (Recommended): A single L3Out that includes all leaf nodes across pods connected to the firewalls.
Key Considerations:
Bridge Domains:
Each L3Out has an associated external bridge domain.
Separate L3Outs mean unique bridge domains per pod.
A single L3Out extends the bridge domain across pods.
Use of Separate L3Out Connections Across Pods
Design Characteristics (Figure 16):
Creates two distinct bridge domains, one per pod.
Bridge domains are confined within their respective pods.
IP Addressing:
Leaf nodes in each pod can share IP subnets for their Switch Virtual Interfaces (SVIs) connecting to the firewall.
Distinct (though similarly structured) IP subnets are used for the external and the internal firewall interfaces.
Routing Behavior:
Only leaf nodes connected to the active firewall establish routing adjacencies.
Ensures all traffic from web endpoints is directed to those leaf nodes, regardless of pod location (see Figure 17).
Important Notes:
Cisco ASA Compatibility:
Separate L3Outs work because the standby unit assumes the active unit's IP and MAC addresses upon failover.
Third-Party Firewalls:
If using VRRP or similar protocols, you must extend the bridge domain across pods using a single L3Out.
Firewall Failover Implications:
Failover activates the standby firewall in the other pod.
Routing adjacencies need to be re-established, causing potential traffic outages (Figure 18).
Dynamic routing protocols may prolong outage durations.
Limitations with Static Routing:
Static routing doesn't work with separate L3Outs (unsupported).
Leaf nodes connected to the standby firewall may incorrectly advertise routes, causing traffic misdirection (Figures 19 and 20).
Note on IP-SLA Tracking:
Available from Cisco ACI release 4.1 onward.
Helps remove static routes on leaf nodes connected to the standby firewall when the active firewall is unreachable.
Limits: Up to 100 IP-SLA tracking IPs per leaf node and 200 per ACI fabric.
Summary:
Separate L3Outs require dynamic routing or static routing with IP-SLA.
Potential for longer outages during failover.
Recommendation: Use a single L3Out connection across pods.
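The IP-SLA tracking behavior described above can be sketched as a simple filter over the static routes: a route stays installed only while its tracked next hop answers probes. Names and addresses here are hypothetical, not APIC object names.

```python
def installed_routes(static_routes: dict, probe_ok: dict) -> dict:
    """Keep only static routes whose tracked next hop is reachable.
    static_routes: {prefix: next_hop}; probe_ok: {next_hop: bool}.
    A leaf facing the standby firewall withdraws the route instead of
    advertising a black hole toward a dead next hop."""
    return {p: nh for p, nh in static_routes.items() if probe_ok.get(nh, False)}

routes = {"10.1.1.0/24": "172.16.1.1"}   # static route via the firewall
assert installed_routes(routes, {"172.16.1.1": True}) == routes
assert installed_routes(routes, {"172.16.1.1": False}) == {}  # withdrawn
```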
Use of a Single L3Out Connection Across Pods
Design Characteristics (Figure 21):
Extends the external (or internal) bridge domain across pods.
Both pods' leaf nodes establish routing adjacencies with the active firewall.
Advantages:
Faster convergence during firewall failover events (Figure 22).
Routing adjacencies remain intact, reducing traffic disruption.
Traffic Flow Considerations:
All leaf nodes advertise routes learned from the active firewall.
Traffic from endpoints may initially go to local service leaf nodes.
If the active firewall is in a different pod, traffic "bounces" to that pod (Figure 23).
Requirements:
For Firewalls Connected in vPC Mode:
Supported from Cisco ACI releases 2.1(3), 2.2(3), 2.3(1), and 3.0(1).
Requires Cisco Nexus 9000 Series EX or FX leaf switches.
Alternative Design:
Connect firewalls to a single leaf node via a physical interface or port channel (Figure 24).
Supported from Cisco ACI releases 2.1(2) onward on all leaf switches.
External Connectivity Considerations:
An L3Out is also needed between the ACI fabric and WAN edge routers.
Server subnets are advertised through border leaf nodes in different pods.
Incoming external traffic may arrive at any pod, possibly causing traffic to traverse the IPN.
Minimizing Hair-Pinning:
If the destination endpoint, active firewall, and external router are in the same pod, hair-pinning is avoided (Figure 25).
Defining a "home site" for services can minimize unnecessary IPN traversal.
GOLF L3Outs:
Similar traffic optimization as Border Leaf L3Outs.
No significant advantages in this context.
Design Caution:
Issues may arise if endpoints are connected to border leaf nodes with the L3Out configured (Figures 26 and 27).
Solution: Enable "Disable Remote EP Learn" in VRF instances using ingress policy enforcement to prevent stale endpoint learning.
East-West Firewall Integration Use Case
Design Overview (Figure 28):
Similar to north-south firewall integration with L3Out peering.
Traffic between web and application EPGs always passes through the active firewall.
Firewall location (pod) doesn't affect this behavior.
Routing can be dynamic or static.
Service Graph Flexibility:
Options include unmanaged-mode, managed-mode, or no service graph.
Alternative Deployment:
Use virtual firewall instances for each VRF.
Enable inter-VRF communication via a "fusion router" (Figure 29).
The fusion router can be the ACI fabric or an external router.
Traffic Flow Scenarios:
All in the Same Pod (Figure 30):
Traffic remains within the pod, optimizing performance.
Endpoints in Different Pods (Figure 31):
If either the source or destination is in a different pod from the active firewall, traffic crosses the IPN.
Note on Static Routing:
Return traffic may initially go to the leaf connected to the standby firewall.
Traffic then traverses pods to reach the active firewall.
Similar considerations as with a single L3Out in north-south traffic apply.
Conclusion:
When both endpoints are in different pods from the active firewall, traffic will "hair-pin" across the IPN.
Option 4: Routed Firewall with PBR
North-South Firewall Integration Using PBR
Simplified Configuration with PBR:
Policy-Based Redirect (PBR) simplifies firewall insertion.
Eliminates the need for complex VRF sandwich setups.
Traffic is redirected to the active firewall based on policies.
Requirements for Using PBR:
Routing Mode:
Both the PBR node (firewall) and the Cisco ACI fabric must operate in routed mode.
Interface Connections:
PBR service node interfaces must be part of regular bridge domains.
Should not be connected via L3Out connections.
Bridge Domain Options:
Service node interfaces can be in:
The same bridge domains as consumer/provider endpoints, or
Separate, dedicated bridge domains (as in Figure 33).
Data-Plane Learning:
For first-generation leaf switches:
Disable data-plane learning on bridge domains connecting to PBR node interfaces (e.g., BD2 and BD3).
Hardware Considerations:
First-Generation Nexus 9000 Series Switches:
Must use dedicated service leaf nodes for connecting the PBR node.
EX and FX Platform Leaf Switches:
Consumer or destination endpoints can connect directly to service leaf nodes.
Virtual MAC Address (vMAC):
Active service nodes must use the same vMAC address.
Traffic redirection changes the destination MAC to this vMAC.
On firewall failover, the standby unit must adopt the same vMAC (supported by Cisco ASA and Cisco Firepower).
Deployment Modes:
Two-Arm Mode: Firewall connects via two interfaces (internal and external).
One-Arm Mode (Preferred):
Firewall connects with a single interface to a service bridge domain.
Simplifies routing: the firewall needs only a default route to the ACI anycast gateway.
How PBR Works in This Setup:
Ensures all external traffic passes through the active firewall before reaching web servers.
Operates based on policies that override the normal routing-table lookup.
Without PBR, leaf routing tables would point to a direct path, because the L3Out and the web bridge domain are in the same VRF.
Important for Troubleshooting:
Remember that PBR forces traffic through the firewall despite routing tables.
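The lookup order is the key point for troubleshooting: the redirect policy is consulted before the routing table. A minimal sketch with hypothetical EPG names and a made-up firewall vMAC:

```python
# Conceptual forwarding decision with PBR: a matching redirect entry
# overrides the "direct" Layer 3 path that the routing table would use.
pbr_policy = {("L3Out-EPG", "Web-EPG"): "fw-vmac 00:00:de:ad:be:ef"}
routing_table = {"Web-EPG": "leaf101 direct"}

def forward(src_epg: str, dst_epg: str) -> str:
    redirect = pbr_policy.get((src_epg, dst_epg))
    return redirect if redirect else routing_table[dst_epg]

# With a PBR contract, external-to-web traffic is sent to the firewall
# even though a direct route to the web subnet exists in the same VRF.
assert forward("L3Out-EPG", "Web-EPG").startswith("fw-vmac")
assert forward("App-EPG", "Web-EPG") == "leaf101 direct"  # no contract
```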
Traffic Flow with Traditional L3Out and PBR:
Web server subnets are advertised through border leaf nodes in different pods.
Incoming traffic from the WAN may arrive at any pod.
PBR Policy Application:
Applied on leaf nodes that know both the consumer and provider EPG class IDs.
Based on source/destination EPGs and contract filters.
Outcome:
Traffic is redirected to the active firewall.
Then forwarded to the destination endpoint.
Behavior with Ingress Policy Enforcement:
Scenario:
Internal EPG and L3Out are in the same VRF (VRF1).
Default VRF setting is ingress policy enforcement.
Border Leaf Nodes:
Do not learn location information for internal endpoints when PBR contracts exist.
Traffic Flow:
External traffic is encapsulated to the spine proxy VTEP.
Spine node forwards it to the leaf node where the destination endpoint resides.
At the Destination Leaf:
PBR policy is applied.
Traffic is redirected to the active firewall.
After Firewall Processing:
Traffic returns to the destination endpoint.
Uses spine proxy services again due to endpoint learning limitations on service leaf nodes.
Best-Case Traffic Scenario (Figure 34):
Conditions:
Destination endpoint and active firewall are in the same pod.
Ingress traffic is directed to that pod.
Result:
Traffic follows the optimal path.
No unnecessary crossing (hair-pinning) over the Inter-Pod Network (IPN).
Return Traffic Handling (Figure 35):
PBR policy is applied directly on the leaf node connected to the web server VM.
Traffic is redirected to the active firewall.
Then sent back to the external Layer 3 domain.
Note:
This behavior assumes the VRF policy enforcement direction is set to ingress (default).
Changing it to egress would shift PBR policy application to border leaf nodes.
Suboptimal Scenarios:
If incoming traffic arrives at a pod without the destination endpoints:
Traffic must hair-pin across the IPN to reach the active firewall.
This inefficiency is due to:
The firewall being deployed in active/standby mode across pods.
The distribution of endpoints and service nodes across different pods.
Considerations with GOLF L3Outs:
Similar traffic patterns and limitations apply.
Host route advertisement doesn't significantly improve traffic optimization with active/standby firewalls.
East-West Firewall Integration Using PBR
Design Overview (Figure 36):
EPGs (Endpoint Groups) are within the same VRF.
Firewall's internal and external interfaces connect to dedicated bridge domains.
Different from those used by consumer and provider endpoints.
Note:
From release 3.1 onward, firewall interfaces can connect to the same bridge domains as endpoints.
Service Graph Options:
Can use unmanaged-mode or managed-mode service graphs.
PBR Policy Application Factors:
Depends on whether the leaf switch knows both source and destination EPG class IDs.
Worst-Case Scenario Assumption:
Leaf nodes haven't learned remote endpoint class IDs.
Occurs when the PBR policy is the only contract between the EPGs.
Best-Case Traffic Flow (Figures 37-39):
Conditions:
Source endpoint, destination endpoint, and active firewall are all in the same pod.
Result:
Traffic remains within the pod.
PBR policy is applied on the destination leaf.
Traffic is redirected to the firewall and then to the destination.
Worst-Case Traffic Flow (Figures 40-41):
Conditions:
Source or destination endpoint is in a different pod from the active firewall.
PBR policy is applied on the destination leaf.
Result:
Traffic crosses the IPN twice (double hair-pinning).
Leads to suboptimal performance due to increased latency.
Impact of Distributing Endpoints and Firewalls Across Pods:
Spreading endpoints and service nodes across different pods can cause inefficient traffic paths.
Double hair-pinning negatively affects network performance.
Firewalls Between Separate VRFs/Tenants:
When Inserting a Firewall Between EPGs in Different VRFs:
PBR policy is always applied on the consumer leaf node.
Worst-Case Scenario:
Consumer endpoint is in one pod.
Provider endpoint and active firewall are in another pod.
Results in increased hair-pinning across the IPN.
Important Notes:
The PBR node (firewall) must be in either the consumer or provider VRF.
Cannot be in a different VRF that is neither consumer nor provider.
Key Takeaways:
Optimizing Traffic Paths:
For best performance, align the locations of endpoints and active service nodes within the same pod.
Be cautious of how endpoint and firewall placement affects traffic flow.
PBR Configuration Considerations:
Understand where PBR policies are applied to anticipate traffic patterns.
Be aware of hardware limitations and configure bridge domains accordingly.
Deployment Planning:
When possible, use one-arm mode for PBR nodes to simplify configuration.
Plan firewall deployments to minimize cross-pod traffic, especially in active/standby setups.
Troubleshooting Tips:
Remember that PBR forces traffic through specified paths regardless of routing tables.
Always consider the impact of VRF settings (ingress vs. egress policy enforcement) on PBR behavior.
Option 5: Routed Firewall with L3Out Peering and PBR
Using the Same Routed Firewall for North-South and East-West Traffic with PBR
Overview:
This approach employs a single routed firewall to handle both north-south and east-west traffic.
North-South Traffic: Managed through traditional Layer 3 lookups.
East-West Traffic: Directed through the firewall using Policy-Based Redirect (PBR).
Requirement: Cisco APIC Release 5.2 or later, which allows PBR destinations to connect to an L3Out instead of a Service Bridge Domain (BD).
Benefits:
Deploy one instance of the firewall for all traffic types.
Ensure security policies are enforced on external traffic before it reaches the ACI border leaf nodes.
North-South Perimeter Firewall Integration
Design Overview (See Figure 43):
The firewall's internal interface connects to the Cisco ACI fabric via L3Out peering.
Design Options:
Option 1: Both external and internal firewall interfaces connect to the ACI fabric via L3Out peerings (as shown in Figure 15 of Option 3).
Option 2: External interface connects through an L2 bridge domain; internal interface connects via L3Out peering.
Option 3: External interface is not connected to the ACI fabric; internal interface connects via L3Out peering.
Key Difference from Option 3:
In the design of Figure 43, the firewall's external interface is not connected to the ACI fabric.
This guarantees that all traffic between the external network and the ACI fabric passes through the firewall.
Advantages:
Utilizes the ACI fabric's anycast gateway function within the bridge domain subnet.
Positions the firewall as an external Layer 3 next hop in the VRF.
Internal Communication:
Traffic within the VRF does not pass through the perimeter firewall.
Security Enforcement:
Firewall applies security policies to:
North-south traffic communicating with external Layer 3 domains.
Optionally, traffic to resources in different tenants or VRFs.
Configuration Details:
Establish L3Out connections between the ACI fabric and the firewall's internal interface.
The firewall's external interface connects outside the ACI fabric.
No service graph is required for this design.
Traffic is routed through the active firewall based on Layer 3 lookups by ACI leaf nodes (refer to Figure 44).
Recommendation:
Use the same L3Out connections across ACI leaf nodes and active-standby firewalls deployed in different pods, as suggested in Option 3.
East-West Firewall Integration Using PBR
Design Overview (See Figure 45):
Demonstrates east-west routed firewall insertion with PBR when Endpoint Groups (EPGs) are within the same VRF.
Key Difference from Option 4:
The firewall connects through the external L3Out used for north-south traffic inspection.
Requirements:
An unmanaged-mode service graph.
Cisco APIC Release 5.2 or later to support PBR destinations connected to an L3Out.
Benefits:
Allows the same firewall to inspect both north-south and east-west traffic.
Traffic Inspection:
North-South Traffic: Automatically inspected by the perimeter firewall through normal routing.
East-West Traffic: Inspected by the same firewall using PBR when necessary.
Design Options:
Option 1: Use the same bridge domain subnet for both consumer and provider EPGs.
Option 2: Use different VRFs for the consumer and provider EPGs.
The L3Out for the PBR node must be in either the consumer or the provider VRF.
Active-Active Firewall Cluster Stretched Across Separate Pods
Overview:
An active/active firewall cluster is integrated into the Cisco ACI Multi-Pod fabric.
Firewall nodes in the cluster are all active and deployed in separate pods.
All nodes share the same MAC and IP address, appearing as a single logical device to the network.
This model is supported by Cisco ASA and Cisco Firepower appliances and is known as the "split spanned EtherChannel" model.
Deployment Considerations:
Same IP and MAC Address:
All firewall nodes in the cluster must use the same IP and MAC address.
If each node uses its own IP and MAC, it becomes an independent active-standby pair in each pod.
Pod Deployment:
It's not necessary to deploy firewall nodes in every pod.
Leaf Node Connections:
All firewall nodes in a pod must connect to a single pair of ACI leaf nodes.
A single Virtual Port Channel (vPC) is configured on these leaf nodes to connect all local firewall nodes.
This setup makes the ACI fabric perceive the cluster as one logical device.
Cluster Size and Latency:
Up to 16 firewall nodes can be clustered together.
The maximum recommended round-trip latency (RTT) for the cluster is around 20 milliseconds.
This is stricter than the 50-millisecond RTT supported between pods in an ACI Multi-Pod fabric, so the cluster's latency requirement is the binding constraint.
Operating Mode:
The firewall cluster must be deployed in routed mode.
Deploying an active/active cluster in Layer 2 mode across pods is not supported.
Hardware Requirements:
Leaf switches must be Cisco Nexus 9000 Series EX or FX platforms.
Software Considerations:
Prior to Cisco ACI release 4.1, you must disable the IP aging policy.
Anycast Service Feature (Cisco ACI Release 3.2(4d)):
Before this release, using the same MAC/IP on firewall nodes in different pods caused duplicate IP/MAC entries.
The "anycast service" feature allows configuring the cluster's MAC/IP as an anycast endpoint.
Each pod's spine learns the local anycast IP/MAC and prefers the local path.
If local firewall nodes fail, traffic switches to firewall nodes in another pod.
Learning and Preference:
The MAC/IP is learned only on leaf nodes where firewall nodes are directly connected.
These leaf nodes send updates to spine nodes.
Spine nodes prefer the local anycast entry, using other pods as backup paths.
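The local-preference behavior above can be sketched as a small selection function: among the entries known for the same anycast IP/MAC, a live local-pod entry wins, and remote pods serve only as backup. Pod names here are illustrative.

```python
def resolve_anycast(entries, local_pod):
    """entries: list of (pod, alive) tuples for the same anycast IP/MAC.
    Prefer a live entry in the local pod; fall back to any live remote
    entry, mirroring how spines prefer the locally learned anycast path."""
    local = [e for e in entries if e[0] == local_pod and e[1]]
    if local:
        return local[0][0]
    remote = [e for e in entries if e[1]]
    return remote[0][0] if remote else None

paths = [("Pod1", True), ("Pod2", True)]
assert resolve_anycast(paths, "Pod1") == "Pod1"  # local firewall preferred
# If all local firewall nodes fail, traffic switches to the other pod:
assert resolve_anycast([("Pod1", False), ("Pod2", True)], "Pod1") == "Pod2"
```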
Traffic Redirection and Inter-Cluster Link (ICL):
Importance of Traffic Redirection:
The firewall cluster must support traffic redirection across nodes to prevent dropped connections.
Ensures continuity regardless of which node receives the traffic.
Cisco ASA/FTD Cluster Behavior:
Connection state is owned by the node that received the first packet of the connection.
If another node receives packets for that connection, it redirects them via the ICL to the owning node.
ICL Communication Setup:
All firewall nodes have an interface in the same Layer 2 domain and IP subnet for ICL communication.
In Cisco ACI Multi-Pod integration:
A dedicated Endpoint Group (EPG) or Bridge Domain (BD) can be used for the ICL.
This allows ICL connectivity to extend across pods.
ICL can use dedicated interfaces or a dedicated VLAN over the same vPC used for data traffic.
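The owner/redirect mechanic can be sketched as follows (a conceptual model of the Cisco ASA/FTD behavior described above, not its actual implementation): the node that sees a connection's first packet claims ownership, and any other node relays later packets for that connection over the ICL.

```python
# Connection ownership table shared conceptually across the cluster.
owners = {}

def handle_packet(node: str, flow: tuple) -> str:
    """flow: (src_ip, dst_ip, proto, port). The first packet of a flow
    makes the receiving node its owner; packets for a flow owned by a
    different node are redirected over the inter-cluster link (ICL)."""
    owner = owners.setdefault(flow, node)
    if owner == node:
        return f"{node}: inspect locally"
    return f"{node}: redirect over ICL to {owner}"

flow = ("10.1.1.10", "192.0.2.50", 6, 443)
assert handle_packet("fw-pod1", flow) == "fw-pod1: inspect locally"
assert handle_packet("fw-pod2", flow) == "fw-pod2: redirect over ICL to fw-pod1"
```

This is why ICL connectivity must extend across pods: without it, a node in the wrong pod could not hand traffic back to the connection owner.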
Deployment with L4-L7 Service Graph and PBR:
As of release 3.2(4d), deploying anycast services within a Multi-Pod fabric is supported using:
Layer 4-7 (L4-L7) service graphs.
Policy-Based Redirect (PBR).
Operation:
The default gateway for endpoints is placed on the ACI fabric.
Traffic is intelligently redirected to the firewall based on policies between specific EPGs.
Support:
This deployment option is available starting with Cisco ACI release 3.2(4d).
Applicability to Traffic Types:
The active/active firewall cluster can be used for both:
East-West Traffic: Communication within or between Virtual Routing and Forwarding instances (VRFs).
North-South Traffic: Communication entering or leaving the data center.
Note:
Currently, anycast service firewall clusters are not supported when connected to the fabric via L3Out connections.
Deployment as Part of an L4-L7 Service Graph with PBR
Integrating Firewall Clusters Using L4-L7 Service Graph with PBR
Flexible Integration Approach:
Use an L4-L7 service graph combined with Policy-Based Redirect (PBR).
Fully leverages the Cisco ACI fabric for both Layer 2 and Layer 3 forwarding.
Redirects only specific traffic (as defined by the PBR policy) to the anycast service.
Supported starting from Cisco ACI release 3.2(4d).
Deployment Considerations:
Dedicated Service Bridge Domain:
The PBR node's bridge domain must not be the same as the consumer or provider bridge domain.
You need a separate service bridge domain for the PBR node.
Inter-VRF Contract Limitation:
Cannot use an inter-VRF contract if vzAny provides the contract.
Applicability:
Suitable for both east-west (within data center) and north-south (entering/exiting data center) traffic flows.
North-South Firewall Integration Use Case
Setup Overview (Refer to Figure 48):
Endpoints connected to the Cisco ACI fabric use the fabric as their default gateway.
Traffic between these endpoints and external Layer 3 networks is redirected to the firewall cluster using PBR policies.
Key Deployment Considerations:
PBR Policy Configuration:
You need to specify only a single MAC/IP entry, representing the logical firewall service of the active/active cluster.
The anycast service in the local pod where the PBR policy is applied is preferred.
If all local firewall nodes fail, traffic will automatically use remote nodes.
Firewall Deployment Modes:
Two-Arm Mode:
Firewall uses two interfaces, each connected to a separate bridge domain.
One-Arm Mode (Recommended):
Firewall uses a single interface connected to a "service bridge domain" in the Cisco ACI fabric.
Advantages of One-Arm Mode:
Simplifies routing configuration on the firewall.
Only requires a default route pointing to the anycast gateway IP of the service bridge domain.
Ensures seamless communication with the rest of the network infrastructure.
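The routing simplification of one-arm mode can be illustrated with a small longest-prefix-match sketch (the interfaces and addresses below are hypothetical):

```python
import ipaddress

def next_hop(routes, dst):
    """Longest-prefix-match lookup over a {prefix: next_hop} table."""
    dst = ipaddress.ip_address(dst)
    best = max((p for p in routes if dst in ipaddress.ip_network(p)),
               key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]

# One-arm mode: a single default route to the service bridge domain's
# anycast gateway covers every destination.
one_arm = {"0.0.0.0/0": "service-BD-gw 172.16.1.1"}

# Two-arm mode: routes must be split between the inside and outside
# interfaces, so internal prefixes need explicit entries.
two_arm = {"0.0.0.0/0": "outside-BD-gw 172.16.2.1",
           "10.0.0.0/8": "inside-BD-gw 172.16.3.1"}

assert next_hop(one_arm, "10.1.1.1") == next_hop(one_arm, "8.8.8.8")
assert next_hop(two_arm, "10.1.1.1") != next_hop(two_arm, "8.8.8.8")
```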
Inbound Traffic Scenario (Figure 49)
Connection Setup:
The Multi-Pod fabric is linked to an external Layer 3 network.
It uses separate border leaf L3Out connections in different pods.
Worst-Case Behavior:
Traffic enters the network through the "wrong" pod.
The traffic must loop back (hairpin) across the Inter-Pod Network (IPN) to reach the correct leaf node.
Policy Application:
A Policy-Based Redirect (PBR) policy is used.
The PBR policy redirects traffic to the local anycast service at the destination leaf node.
Outbound Traffic Flow (Figure 50)
Traffic Path:
The PBR policy is applied on the same compute leaf where the traffic originates.
Service Selection:
Traffic is directed to the local anycast service first.
External Sending:
After selecting the anycast service, traffic is sent out to the external Layer 3 network via the local L3Out connection.
PBR Policy on Border Leaf Nodes
Default Behavior:
PBR is not applied on border leaf nodes that receive external traffic.
Instead, policy enforcement happens on the compute leaf node where the destination is connected.
Alternative Scenario:
If PBR were applied on border leaf nodes:
Inbound traffic would be redirected to the anycast service in Pod2.
This would change the traffic path as shown in Figure 51.
Key Points
Traffic Routing in the Multi-Pod Fabric:
Inbound traffic might take a longer path if it enters through the wrong pod.
Outbound traffic uses a more direct path by applying PBR locally.
Policy Enforcement:
Typically done on compute leaf nodes, not on border leaves.
Ensures traffic is properly directed to anycast services based on policies.
Anycast Service:
A local service that can handle traffic efficiently within the pod.
Layer 3 Domain:
The external network that the Multi-Pod fabric communicates with.
Figure 51: Applying PBR Policy on Border Leaf Nodes for Inbound Traffic
Outbound Flow:
The Policy-Based Redirect (PBR) policy is applied on the compute leaf node.
This selects the local anycast service in Pod1.
Traffic Handling:
This setup does not cause problems.
Intra-cluster redirection ensures the traffic is sent through the firewall node in Pod2.
That node owns the connection state for that specific flow.
Figure 52: Using Intra-Cluster Redirection for Outbound Flows
Traffic Optimization:
North-south traffic (traffic between the data center and external networks) can be optimized.
This removes the need for traffic to loop back (hairpin) across the Inter-Pod Network (IPN).
Host-Route Advertisement:
Advertises host-route information into the Wide Area Network (WAN).
Supported starting from Cisco ACI release 4.0.
Allows host routes to be advertised through an L3Out connection on border leaf nodes.
Figure 53: Optimizing Inbound Traffic with Host-Route Advertisement on Border Leaf Nodes
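The effect of host-route advertisement on inbound path selection can be sketched with a longest-prefix-match lookup (the addresses and names below are hypothetical):

```python
import ipaddress

def best_route(rib, dst):
    """Pick the longest matching prefix for dst from a {prefix: exit} table."""
    dst = ipaddress.ip_address(dst)
    match = max((p for p in rib if dst in ipaddress.ip_network(p)),
                key=lambda p: ipaddress.ip_network(p).prefixlen)
    return rib[match]

# Without host routes, only the stretched subnet is visible in the WAN,
# so inbound traffic may be delivered to a pod where the endpoint does
# not live (here Pod1's L3Out is the only entry).
subnet_only = {"192.168.10.0/24": "Pod1-L3Out"}

# With host-route advertisement (ACI 4.0 and later), the pod where the
# endpoint actually resides also advertises a /32, which wins the
# longest-prefix match and steers inbound traffic directly to that pod.
with_host_route = {"192.168.10.0/24": "Pod1-L3Out",
                   "192.168.10.45/32": "Pod2-L3Out"}

assert best_route(subnet_only, "192.168.10.45") == "Pod1-L3Out"
assert best_route(with_host_route, "192.168.10.45") == "Pod2-L3Out"
```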
Older Releases:
Host-route advertisement was only possible when using GOLF L3Outs.
GOLF L3Outs:
A specific model for deploying L3Out connections.
Shown in Figure 54.
Optimized Inbound Traffic:
Inbound traffic flows are improved by using host-route advertisements on border leaf nodes.
Figure 54: Optimized Inbound Traffic Flows with GOLF L3Outs
Inbound and Outbound Flows:
Focuses on optimizing the inbound traffic path.
Outbound flows are also optimized by using L3Out connections local to each pod.
Benefits:
Eliminates the need for traffic to hairpin across the IPN.
Ensures efficient routing of both inbound and outbound traffic.
Key Points
PBR Policy on Border Leaf Nodes:
When applied to border leaf nodes for inbound traffic, it redirects traffic to the anycast service in Pod2.
Intra-Cluster Redirection:
Manages traffic flow within the cluster, ensuring it passes through the appropriate firewall nodes.
Host-Route Advertisement:
Enhances traffic routing by advertising specific routes to the WAN.
Supported in newer Cisco ACI releases and with GOLF L3Outs in older versions.
Traffic Flow Optimization:
Both inbound and outbound traffic paths are optimized to avoid unnecessary looping.
Uses local L3Out connections to streamline traffic routing within the pod.