
MultiCast In ACI

  • Writer: Mukesh Chanderia

Let's understand how multicast actually works inside a Cisco ACI fabric. This post starts with a quick primer on general multicast terms, then shows how those pieces map to ACI objects and, step by step, how a single data packet travels from source to receiver.


1 Multicast fundamentals

  • Multicast group (G), e.g. 239.1.1.1 – a logical “radio station” that many hosts can tune in to.

  • IGMP v2/v3 – host-to-first-hop signalling: receivers send Join/Leave reports and a querier periodically asks “who’s still listening?” (a receiver-side sketch follows this list).

  • PIM-SM / SSM – the routing protocol that stitches trees between first-hop and last-hop routers, either via an RP (shared tree, (*,G)) or straight to the source (S,G).

  • Packet replication – classic L3 devices replicate hop-by-hop; a VXLAN fabric like ACI replicates once in the spine ASIC and sprays finished copies to the interested leaves.
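
To make the receiver side concrete, here is a minimal Python sketch of how a host “tunes in” to a group: joining via IP_ADD_MEMBERSHIP is what causes the operating system to emit the IGMP Membership Report that the first-hop switch (an ACI leaf, in this case) snoops. The group and port values are illustrative.

import socket
import struct

GROUP = "239.1.1.1"   # illustrative multicast group (G)
PORT = 5000           # illustrative UDP port

# Open a UDP socket and bind to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group here is what triggers the IGMP Membership Report
# that the leaf snoops and turns into fabric multicast state.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Receive a few packets from the stream, then leave the group (IGMP Leave).
for _ in range(3):
    data, src = sock.recvfrom(2048)
    print(f"got {len(data)} bytes from {src}")

sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
sock.close()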


2 Key ACI objects & terms you’ll see

  • Multicast group (G) → exactly the same group address inside the VXLAN header

  • Subnet / VLAN → Bridge Domain (BD), which owns the IGMP snooping policy

  • Router interface → VRF, which owns the PIM/TRM policy

  • MRIB entry → EVPN route-types 6 & 7 that a leaf advertises to the spines

  • Data-plane tree → GIPo address (Group-IP-outer) carried in the VXLAN outer header; the spines replicate on this tree

GIPo

  • Every BD (and every multicast-enabled VRF) gets a /28 multicast block, e.g. 225.0.18.0/28.

  • When a leaf encapsulates a multi-destination frame it picks one of the 16 addresses in that block, which balances traffic across the 16 FTAG fabric trees (a hashing sketch follows).
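
To illustrate the idea only (this is not Cisco's actual hash), here is a small Python sketch that picks one of the 16 GIPo addresses in a BD's /28 block from a flow hash; the block 225.0.18.0/28 and the hash inputs are assumptions for the example.

import ipaddress
import zlib

def pick_gipo(bd_gipo_block: str, src_mac: str, dst_group: str) -> str:
    """Pick one of the 16 addresses in the BD's /28 GIPo block.

    Illustrative only: the real leaf ASIC uses its own flow hash to choose
    among the 16 FTAG trees; here we simply take CRC32 of the flow keys.
    """
    block = ipaddress.ip_network(bd_gipo_block)          # e.g. 225.0.18.0/28
    ftag = zlib.crc32(f"{src_mac}-{dst_group}".encode()) % block.num_addresses
    return str(block[ftag])

# Two different flows usually land on different FTAG trees.
print(pick_gipo("225.0.18.0/28", "00:50:56:aa:bb:01", "239.1.1.1"))
print(pick_gipo("225.0.18.0/28", "00:50:56:aa:bb:02", "239.1.1.1"))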


3 Layer-2 multicast (same BD, “bridged” multicast)


Receiver → IGMP Join → Leaf → COOP → Spines

Source   → Packet     → Leaf → VXLAN (dest = BD-GIPo) → Spines replicate → Interested Leaves → Port(s)


  1. Receiver Join

    Host sends an IGMP join. The leaf snoops it, installs a hardware entry, and informs the spines via a COOP message that “Leaf 101 is interested in G 239.1.1.1”.


  2. Data forward

    Source leaf encapsulates the frame in VXLAN; outer IP = BD-GIPo, VNID = BD-VNID. Spines replicate the packet only to leaves that previously registered interest (the efficient “Optimized Flood” mode).


  3. Last-hop delivery

    Each egress leaf decapsulates and, using its local IGMP table, forwards the frame out the exact access ports that sent joins.


No PIM is involved; everything is L2 within the BD.
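
A rough Python sketch of that same L2 flow, assuming Optimized Flood: the leaf snoops joins and registers interest with the spines (COOP), and a spine then replicates only toward the interested leaves. The data structures and names are illustrative, not ACI internals.

from collections import defaultdict

# Spine "COOP" view: which leaves registered interest for which group.
spine_interest = defaultdict(set)                    # group -> {leaf_id}
# Per-leaf IGMP snooping table: which access ports joined which group.
leaf_igmp = defaultdict(lambda: defaultdict(set))    # leaf -> group -> {port}

def receiver_join(leaf: str, port: str, group: str) -> None:
    """Leaf snoops an IGMP join and registers interest with the spines."""
    leaf_igmp[leaf][group].add(port)
    spine_interest[group].add(leaf)

def source_send(src_leaf: str, group: str) -> None:
    """Source leaf VXLAN-encaps toward the BD-GIPo; spines replicate
    only toward leaves that registered interest, never to the rest."""
    for egress_leaf in spine_interest.get(group, set()) - {src_leaf}:
        ports = leaf_igmp[egress_leaf][group]
        print(f"spine -> {egress_leaf}: deliver {group} out ports {sorted(ports)}")

receiver_join("Leaf102", "eth1/3", "239.1.1.1")
source_send("Leaf101", "239.1.1.1")
# -> spine -> Leaf102: deliver 239.1.1.1 out ports ['eth1/3']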


4 Layer-3 multicast in ACI – Tenant Routed Multicast (TRM)


Enabling PIM on the BD flips multicast to an overlay-routed model that looks like this:


Receiver Leaf
 └─ IGMP Join → EVPN RT-7 → Border Leaf(s)

                                                  ┌─ (S,G) Join toward source/RP via PIM
Source Leaf ─► VXLAN (dest = VRF-GIPo) ─► Spines ─┤
                                                  └─ Spines replicate to Receiver Leaf(s)

What changes compared with pure L2?

  • Outer multicast tree – L2 only: BD-GIPo; with TRM/PIM: VRF-GIPo (so traffic can leave the BD)

  • Control-plane advert – L2 only: none (spine IGMP DB only); with TRM/PIM: EVPN RT-7 (receiver) and RT-6 (source)

  • PIM speakers – L2 only: none; with TRM/PIM: border leaves run full PIM, non-border leaves run passive PIM

  • External connectivity – L2 only: not possible; with TRM/PIM: the border leaf joins the RP or source in the outside network

Join signalling step-by-step

  1. IGMP Join on receiver leaf → the leaf originates an EVPN route-type 7 message that tells all border leaves “this VRF has receivers for G”.

  2. Border leaf (Designated Forwarder) converts that knowledge into a PIM (*,G) or (S,G) join out of the VRF’s L3Out.

  3. Source traffic enters the fabric, is VXLAN-encapped toward VRF-GIPo. Spines replicate copies only to leaves whose VRF has an interested receiver.

Because the packet is VXLAN-encapped once, the underlay never has to run PIM—only the border leaves talk PIM to outside devices. That’s why TRM scales far better than a legacy “multicast in the underlay” design.
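
That control-plane hand-off can be sketched in a few lines of Python (purely illustrative; the route contents and the designated-forwarder handling are simplified assumptions): the receiver leaf turns an IGMP join into an EVPN RT-7 advertisement, and the border leaf acting as designated forwarder converts it into a PIM join toward the RP or source.

from typing import Optional

def leaf_receives_igmp_join(vrf: str, group: str) -> dict:
    """Receiver leaf: IGMP join -> EVPN RT-7 advertisement (simplified)."""
    return {"route_type": 7, "vrf": vrf, "group": group}

def border_leaf_handles_rt7(rt7: dict, is_designated_forwarder: bool,
                            rp: Optional[str], source: Optional[str]) -> Optional[str]:
    """Border leaf: translate fabric receiver state into a PIM join on the L3Out."""
    if not is_designated_forwarder:
        return None                      # only the DF sends the join
    if source:
        return f"PIM (S,G) join: ({source},{rt7['group']}) toward the source"
    return f"PIM (*,G) join: (*,{rt7['group']}) toward RP {rp}"

rt7 = leaf_receives_igmp_join("PROD", "239.1.1.1")
print(border_leaf_handles_rt7(rt7, is_designated_forwarder=True,
                              rp="10.0.0.100", source=None))
# -> PIM (*,G) join: (*,239.1.1.1) toward RP 10.0.0.100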


5 Putting it all together – end-to-end flow recap


IGMP (receiver)      EVPN RT-7           PIM (*,G)                Data plane
┌─────────┐          ┌────────┐          ┌─────────┐              ┌──────────────────┐
│Receiver │─IGMP→    │Rx Leaf │─RT7→     │Border LF│─PIM→ RP/Src  │VXLAN dest = GIPo │
└─────────┘          └────────┘          └─────────┘              └──────────────────┘
                                                                           ▲
                                   EVPN RT-6 (source)                      │
                                   ┌────────┐                              │
                                   │Src Leaf│─RT6──────────────────────────┘
                                   └────────┘

  • G state is synchronised fabric-wide through EVPN;

  • replication happens once in the spines using the right GIPo tree;

  • only the leaves that truly need the stream receive it.


6 Where to look when something breaks

  • IGMP snooping – show ip igmp snooping groups vrf <vrf> – success: group and ingress port listed

  • EVPN – show bgp l2vpn evpn route-type 6 ip <G> (and route-type 7) – success: the group is advertised with the leaf's VNID and GIPo

  • PIM / TRM – show ip pim vrf <vrf> route <G> – success: correct RPF and OIF list; the fabric tunnel appears

  • Spine replication – acidiag mcast if | inc <G> – success: the group is programmed on the spine's replication interfaces

7 Design & operating tips


  • Always enable an IGMP querier when no external router does that job.

  • Prefer Optimized Flood mode on the BD unless you run first-generation leaf hardware with strict limits.

  • For TRM, give every border leaf a unique loopback and keep RP placement close to the fabric to minimise join latency.

  • Upgrade to a modern 5.x/6.x ACI release if you need IPv6 multicast or Source-Specific Multicast (SSM) in the overlay.


Key Points


L2 multicast in ACI is just IGMP + spine replication on the BD-GIPo tree.

L3 multicast (TRM) adds EVPN signalling and PIM only at the border, using VRF-GIPo trees.

Once you map those two modes to the control-plane messages (IGMP → COOP → EVPN → PIM) the entire troubleshooting workflow becomes predictable and fast.



TROUBLESHOOTING MULTICAST ISSUES


Multicast Troubleshooting Steps


A step-by-step guide to troubleshooting multicast issues in Cisco ACI, focusing on common problem areas: missing multicast group mapping, phantom RPF, underlay PIM configuration, and multicast policies.


Key checks include:

  • Verifying IGMP

  • Confirming BD in multicast flood mode

  • Reviewing PIM settings in the underlay

  • Checking MP-BGP route types 6/7

  • Verifying leaf and spine outputs for multicast traffic


Step-by-step multicast troubleshooting


Keep the two multicast modes in ACI (L2 within a BD, and L3 via TRM) in mind, then break the work down into steps:

  • Step 0: Identify the scenario, including where the source and receiver are, and whether the traffic is L2 or L3.

  • Step 1: Confirm BD and VRF multicast configuration, and check IGMP snooping policy settings.

  • Step 2: Verify underlay multicast settings, especially head-end replication and the difference between 'Encap' and 'Optimized' flood modes.


Flow: define the problem → verify L2 multicast → verify L3 multicast (TRM/PIM) → look at the underlay → common fixes.


0 Define the exact scenario first

  • Source (IP/MAC, leaf, interface, EPG/BD/VRF) – needed for RPF checks

  • Receiver(s) (IP, leaf, interface, EPG/BD/VRF) – needed to see where the join should appear

  • Group address (G), e.g. 239.1.1.1 – drives all subsequent look-ups

  • L2 or L3 delivery? (same BD vs routed between BDs/VRFs) – decides whether IGMP-only or PIM/TRM is required

  • External receivers? – tells you if you need L3Out multicast
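
If you troubleshoot multicast regularly, it helps to capture those facts in a fixed structure before touching any CLI. A minimal Python sketch; the field names are just suggestions, not an ACI construct.

from dataclasses import dataclass

@dataclass
class MulticastCase:
    """Facts to collect before any CLI work (illustrative field names)."""
    source: str                        # IP/MAC, leaf, interface, EPG/BD/VRF
    receivers: list                    # same details for every receiver
    group: str                         # e.g. "239.1.1.1"
    l3_delivery: bool                  # routed between BDs/VRFs (True) or same BD (False)
    external_receivers: bool = False   # receivers behind an L3Out?

case = MulticastCase(
    source="10.1.1.10 / Leaf101 / EPG Video-Src / BD Video-BD / VRF PROD",
    receivers=["10.1.1.50 / Leaf102 / EPG Video-Cli / same BD and VRF"],
    group="239.1.1.1",
    l3_delivery=False,
)
print(case)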


1 Verify the Bridge Domain & IGMP snooping (Layer-2 multicast)

  1. Open the BD in Tenant ▶ Networking ▶ Bridge Domains

    • Flood mode should be “Optimized Flood” or “Encap Flood” for multicast.

    • If the BD will never leave the fabric, you can stay with L2 flooding.

    • If L3 forwarding is expected, be sure “Enable Unicast Routing” is on.

  2. IGMP Snooping policy

    • Tenant ▶ Policies ▶ Protocols ▶ IGMP Snooping

    • Typical fixes:

      • Enable Querier if there is no external querier in that subnet.

      • Raise robustness-variable if you expect lossy links.

    • Commit the policy and re-map it to the BD/EPG if necessary.


CLI quick-check


# On the source Leaf

show ip igmp snooping groups vrf PROD | inc 239.1.1.1

show endpoint ip 239.1.1.1 detail

If the group is absent, the Leaf never saw an IGMP join; check the receiver port or querier.
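
To run that quick-check across many leaves, a small parser over the command output is enough. A hedged Python sketch (the sample output line is simplified; collect the real output however you normally reach the leaf CLI):

import re

def group_ports(snooping_output: str, group: str) -> list:
    """Return the access ports listed for a group in
    'show ip igmp snooping groups' output (simplified line format)."""
    ports = []
    for line in snooping_output.splitlines():
        if group in line:
            # e.g. "Vlan382  239.1.1.1  0100.5e01.0101  port1/3"
            ports += re.findall(r"(?:port|eth)\S+", line)
    return ports

sample = "Vlan382  239.1.1.1  0100.5e01.0101  port1/3"
print(group_ports(sample, "239.1.1.1") or "group absent: check receiver port or querier")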


2 Check EVPN multicast routes & COOP (fabric control-plane)


ACI distributes multicast state with EVPN route-types 6/7 plus the GIPo. The COOP database ties endpoint locations to the spine loopback that replicates the traffic.


CLI quick-check


# On any Leaf

show bgp l2vpn evpn route-type 6 ip 239.1.1.1

show system internal epm multicast detail | inc 239.1.1.1

Healthy: you see <Leaf ID, BD VNID, GIPo, Replication-Spines>. If nothing appears, the leaf never exported state; the IGMP snooping or BD settings are still wrong.


3 If the traffic must be routed (different BDs or external networks)

Enable Tenant Routed Multicast (TRM) or classic PIM

  1. Create / choose a Multicast Domain object – Tenant ▶ Networking ▶ Multicast

  2. Under the VRF, set PIM – Enabled and choose an RP policy (Anycast-RP is fine) – Tenant ▶ Networking ▶ VRF

  3. Under each BD that must route multicast, tick “Enable PIM” – Tenant ▶ Networking ▶ BD

  4. If you need external receivers, extend the L3Out and enable PIM on the interface – Tenant ▶ Networking ▶ External Routed Networks

Cisco calls this whole workflow TRM. ACI injects (S,G) or (*,G) into EVPN and handles replication in the spines.
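
The same steps can also be pushed through the APIC REST API. A hedged Python sketch using requests: the APIC address, credentials, and tenant/VRF/BD names are made up, and the class and attribute names (fvBD mcastAllow, pimCtxP) are based on the ACI object model but should be verified against your APIC version before use.

import requests

APIC = "https://apic.example.com"    # illustrative APIC address
session = requests.Session()
session.verify = False               # lab only; use proper certificates in production

# 1. Log in (aaaUser is the standard APIC login payload).
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# 2. Enable multicast routing (PIM) on the BD: fvBD mcastAllow="yes" (assumed attribute).
session.post(f"{APIC}/api/mo/uni/tn-PROD/BD-Video-BD.json",
             json={"fvBD": {"attributes": {"name": "Video-BD", "mcastAllow": "yes"}}})

# 3. Enable PIM on the VRF by creating its PIM context policy (assumed class pimCtxP);
#    RP-policy children are omitted here and would be added per design.
session.post(f"{APIC}/api/mo/uni/tn-PROD/ctx-PROD.json",
             json={"fvCtx": {"attributes": {"name": "PROD"},
                             "children": [{"pimCtxP": {"attributes": {}}}]}})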


L3 CLI health-check


show ip pim vrf PROD route 239.1.1.1

show ip pim vrf PROD rp mapping

The RP address must appear on every Leaf that holds the VRF; RPF interface should be “fabric” or the expected routed link.


4 Check the underlay replication (Spines)

Even if the control plane looks fine, congestion or hardware issues on replication spines can drop multicast.


CLI quick-check


# Any Spine

acidiag mcast if | inc 239.1.1.1

show system internal mcglobal info

A missing interface indicates that spine never programmed the group; usually caused by COOP errors or a spine-leaf overlay split-brain (clear-clock + COOP clear can help).
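
When several spines are in play, it also helps to confirm the group is programmed on every one of them, not just the spine you happen to log into. A small Python sketch over collected acidiag mcast outputs (how you collect them is up to you; the output format below is simplified):

def spines_missing_group(outputs: dict, group: str) -> list:
    """outputs maps spine name -> text of 'acidiag mcast if' from that spine.
    Return the spines where the group is not programmed."""
    return [spine for spine, text in outputs.items() if group not in text]

collected = {
    "Spine201": "239.1.1.1  VNID 382  L101,VPC102",
    "Spine202": "",                       # nothing programmed here
}
missing = spines_missing_group(collected, "239.1.1.1")
print("missing on:", missing or "none - replication state is consistent")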


5 Common break-fix patterns

  • Group never shows on the leaf – likely cause: missing IGMP querier – fix: enable the querier in the BD or on the L3 gateway

  • Group shows, but no traffic – likely cause: underlay head-end replication bug in older code (< 5.2(4)) – fix: upgrade, or move the BD to Encap Flood as a workaround

  • Traffic works inside ACI but not outside – likely cause: external PIM domain has the wrong RP / RPF failure – fix: check show ip pim vrf OUT route, fix the RP or add an MSDP peer

  • Packet bursts then stop – likely cause: IGMP version mismatch (v2 vs v3) or IGMP throttling on the leaf – fix: force the correct version in the IGMP policy

  • Receivers across BDs drop frames – likely cause: BD missing “Enable PIM” – fix: tick the box and commit; verify EVPN RT-6 export

6 End-to-end worked example


Scenario:

Source: 10.1.1.10 (Video-Srv) on Leaf101, EPG Video-Src, BD Video-BD, VRF PROD
Receiver: 10.1.1.50 (Video-Cli) on Leaf102, EPG Video-Cli, same BD/VRF
Group: 239.1.1.1


  1. Receiver join

    Leaf102# show ip igmp snooping groups vrf PROD | inc 239.1.1.1
    Vlan382  239.1.1.1  0100.5e01.0101  port1/3

  2. Source registration

    Leaf101# show endpoint ip 239.1.1.1 detail
    # shows (MAC, VNID, GIPo=225.0.0.37)

  3. EVPN distribution

    Leaf102# show bgp l2vpn evpn route-type 6 ip 239.1.1.1
    * i [2]:[0]:[VNID]:[225.0.0.37]:[239.1.1.1]:[10.1.1.10]

  4. Replication spine check

    Spine201# acidiag mcast if | inc 239.1.1.1
    239.1.1.1  VNID 382  L101,VPC102

  5. Packet capture confirms traffic enters Leaf102 on port1/3 → success.


7 Handy command cheat-sheet


# Layer-2 / IGMP

show ip igmp snooping groups vrf <VRF>

show system internal epm multicast detail


# EVPN / COOP

show coop internal info | inc 239.

show bgp l2vpn evpn route-type 6 ip <G>


# Layer-3 / PIM

show ip pim vrf <VRF> route <G>

show ip pim vrf <VRF> rp mapping

show ip mroute vrf <VRF>


# Spines / Replication

acidiag mcast if

show system internal mcglobal info


Common Issues


  • You see COOP timeouts, repeated duplicate GIPo entries, or spines crash the multicast process.

  • Multicast drops only at fabric load > 70 Gbps (possible ASIC bug—TAC has diagnostics).



