Mukesh Chanderia

ACI VMware DVS

Updated: Oct 29

The APIC integrates with third-party Virtual Machine Managers (VMMs).


There are three virtual switch options with VMware VMM integration:


  • VMware DVS

  • Cisco ACI Virtual Edge

  • Cisco Application Virtual Switch (now End of Life)



Before integrating, verify that your VMware and ACI versions are mutually supported (see the Cisco ACI Virtualization Compatibility Matrix).


VMware Settings You Should Not Change



VMM Domain Integration with ACI


Step 1: Create a VLAN pool for the VMM domain


Fabric > Access Policies > Pools. Right-click VLAN and choose Create VLAN Pool.


Name the pool vCenter_VLANs (this name is referenced in the next step) and set the Allocation Mode to Dynamic Allocation. In the Range fields, enter 100 and 199. Click OK.
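To confirm the pool was created, you can query it from the APIC CLI. A minimal sketch, assuming the pool name vCenter_VLANs used in this lab:

apic1# moquery -c fvnsVlanInstP | egrep 'name|allocMode'

The allocMode attribute should show dynamic for a pool used by a VMM domain.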




Step 2: Create vCenter Domain


Go to Virtual Networking > VMware. Right-click VMware and choose Create vCenter Domain.


In the Virtual Switch Name field, enter vCenter_VMM. In the Virtual Switch field, choose the VMware vSphere Distributed Switch option. From the VLAN Pool drop-down menu, choose vCenter_VLANs.



Note: We have not yet attached an AAEP (Attachable Access Entity Profile); that is done in Steps 9 and 10.


Now scroll down, set the Port Channel Mode to Static Channel - Mode On, and select CDP as the discovery protocol.
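Once you submit the domain, you can verify it from the APIC CLI with the same show command used in the troubleshooting section later in this post:

apic1# show vmware domain name vCenter_VMM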




Step 3: Expand the VMM domain, right-click Controllers, and choose Create vCenter Credential.




Step 4: Set the Credential



Step 5: Launch the vSphere Client (HTML5) and log in to the vSphere web client as administrator@dc.local.


In the vSphere Web Client, go to Networking, expand the data center DC, and verify that no VDS exists.



You should not see a VDS because the VMM domain configuration on the APIC is not yet complete; the VDS has not yet been pushed to vCenter.


Step 6: In the APIC GUI, in the Controllers menu, choose Create vCenter Controller.



Note: The Datacenter name must exactly match the name configured in vSphere.


Step 7: Verify the VDS that the APIC has pushed.



You should see a VDS with the name of the configured vCenter domain (vCenter_VMM) within a folder of the same name. The VDS includes two networks that have been automatically created.


Step 8: Verify that CDP has been enabled on this VDS in both directions (advertise and listen).




Step 9: In the APIC UI, create a new AAEP.



Fabric > Access Policies > Policies > Global, right-click Attachable Access Entity Profiles, and choose Create Attachable Access Entity Profile.






Step 10: Associate the HOST_AAEP with your vPC interface policy group Leaf101..102:1:03_VPCIPG and save the configuration.




Step 11: Associate VMM Domain to App_EPG


Application Profiles > eCommerce_AP > Application EPGs. Right-click the App_EPG and choose Add VMM Domain Association.



From the VMM Domain Profile drop-down menu, select vCenter_VMM and click Submit.


Step 12: In DB_EPG and Web_EPG, use the same method to associate the EPGs with your VMM domain.
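To verify the associations, you can list the EPG-to-VMM-domain relation objects from the APIC CLI. A minimal sketch using the fvRsDomAtt class, which models the EPG-domain association:

apic1# moquery -c fvRsDomAtt | grep vCenter_VMM

One matching tDn should appear per associated EPG.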


Step 13: In the vSphere Client, examine the port groups within the VDS vCenter_VMM.


Each EPG associated with the VMM domain appears as a port group named <Tenant>|<Application Profile>|<EPG>. Examine the port groups by expanding the VDS or in the Networks tab; refresh the browser if needed.



Step 14: Add ESXi Host to the VDS (Optional)


In the vSphere Web Client, go to Networking. Right-click the created VDS and choose Add and Manage Hosts.





Note: There is an alternate way to add the VMware host to the VDS as well (covered later in this post).


Step 15: Select your host to be added to the distributed switch.




Step 16: Don’t add any physical adapters. In this scenario, there is no hardware fabric.




Step 17: Confirm that the host has no physical network adapters attached to the fabric.




Step 18: Do NOT assign any VMkernel network adapters to the distributed switch.



Step 19: Do not migrate any virtual machines or network adapters to the distributed switch.



Step 20: On the (6) Ready to complete page, click Finish.



Step 21: In the vSphere Web Client, right-click your new VDS and choose Add and Manage Hosts again to manage the host you just added.



Step 22: On the (1) Select task page, select Manage host networking and click Next.




Step 23: On the (2) Select hosts page, select your ESXi (10.10.1.1). Click Next.




Step 24: Do not add any physical adapters.




Step 25: Do not assign any VMkernel network adapters to the distributed switch.



Step 26: Perform these assignments:


  • APP_VM to Sales|eCommerce_AP|App_EPG

  • DB_VM to Sales|eCommerce_AP|DB_EPG

  • WEB_VM to Sales|eCommerce_AP|Web_EPG


On the (5) Migrate VM networking page, check the Migrate virtual machine networking check box. Then navigate to the Configure per Virtual Machine tab. Click the double arrow in front of Network Adapter 1 for APP_VM to open a Select Network field. Click Assign to assign the adapter to its EPG-backed port group, in this case Sales|eCommerce_AP|App_EPG. Click the double arrow again to close the Select Network field.




Repeat the procedure for DB_VM and WEB_VM. You must go to the second page of the Virtual machine list to find WEB_VM.


Step 27: On the (6) Ready to complete page, click Finish to migrate the virtual machines.




Alternate Way to Configure the VMware Host and VMs in EPGs in vSphere


Step 1: In the vSphere Client, go to Networking. Right-click the created VDS and choose Add and Manage Hosts.



Step 2: At (1) Select task, keep the default selection, Add hosts, and click Next.



Step 3: At (2) Select hosts, click the + New hosts button. Choose your ESXi 192.168.10.62. Click OK and Next.



Step 4: At (3) Manage physical adapters, choose vmnic4 and click the Assign uplink button. Select uplink1 and click OK.



Step 5: Stay at (3) Manage physical adapters, choose vmnic5, and click the Assign uplink button. Select uplink2 and click OK. Then click Next.



Step 6: At (4) Manage VMkernel adapters, click Next. You will only manage hosts and physical adapters.



Step 7: At (5) Migrate VM networking, click Next. You will manually assign the VMs to port groups later.



Step 8: At (6) Ready to complete, click Finish to add your host with two uplinks to the VDS.





Step 9: In the vSphere Client, remain on your VDS and go to Configure > Topology. Expand the first two uplinks, click ..., and choose View Settings. Examine the neighbor information in the LLDP tab. vmnic4 and vmnic5 should be connected to leaf-a and leaf-b Eth 1/3, respectively.



vmnic4 should be connected to leaf-a Eth 1/3:



vmnic5 should be connected to leaf-b Eth 1/3:




Step 10: Go to VMs and Templates, right-click APP_VM, and choose Edit Settings.



Step 11: In the Virtual Hardware tab, in the Network Adapter 1 row, open the drop-down menu. Choose Browse and select the Sales|eCommerce_AP|App_EPG port group. Click OK twice.



Step 12: Use the same method to assign the WEB_VM to the Web_EPG port group.



Disable Neighbor Discovery and Verify Connectivity


The interface policy group Server_IPG, defined for vPC connection to the ESXi, uses an LLDP policy with LLDP enabled. You will remove the host from the VDS, configure LLDP and CDP policies with LLDP and CDP disabled, and assign them to the interface policy group. When you re-add the host to the VDS (with LLDP and CDP disabled on ACI leafs), you will re-test the VM connectivity.


You will see that all endpoints learned from the VMware host disappear.


Also, in the vSphere Client, within your VDS, go to Configure > Topology. Expand the first two uplinks, click ..., and choose View Settings. Verify that the neighbor information is missing in the CDP and LLDP tabs.



Recall that the Immediate resolution immediacy causes the EPG policies (such as VLANs, contracts, and filters) to be downloaded to the leaf upon hypervisor attachment to the VDS. LLDP, CDP, or OpFlex is needed to resolve the hypervisor-to-leaf attachments. In this case, the resolution of the hypervisor-to-leaf attachment will fail.


On Demand, one of the two alternative settings, causes the Cisco APIC to push the policy to the leaf only when a hypervisor is attached to the VDS and a VM is placed in the port group (EPG). This variant also requires LLDP or CDP and will not remediate the current issue. To resolve the problem, you will use the Pre-provision resolution immediacy.


Pre-provision resolution immediacy causes the policy to be downloaded to a leaf even before a hypervisor is attached to the VDS, pre-provisioning the configuration on the switch. You will enable this option as the resolution for the disabled-LLDP problem.


In the VMM domain association, change the Resolution Immediacy to Pre-provision and see whether that resolves the issue.


Expand the Web_EPG, select Domains (VMs and Bare-Metals), double-click the VMM domain association, change the Resolution Immediacy to Pre-provision and click OK.


This resolves the issue, and the endpoints are learned again.
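With Pre-provision in place, you can confirm on the leaf that the EPG VLANs stay programmed and the endpoints return, even with LLDP/CDP still disabled. Sample leaf commands (names follow this lab):

Leaf101# show vlan extended
(the EPG-to-VLAN mappings remain deployed on the host-facing ports)

Leaf101# show endpoint
(the VM MAC/IP endpoints are learned again)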


Conclusion: with Pre-provision, LLDP/CDP is needed only for visibility, not for policy resolution.




Troubleshooting ACI VMware Issues


Helpful information to know before troubleshooting VMM connectivity issues in ACI:


Shard Leader Role: Only one APIC (shard leader) is responsible for sending configuration and collecting inventory for each VMM Domain.


Event Listening: Multiple APICs listen for vCenter events to ensure no events are missed if the shard leader fails.


APIC Structure: Each VMM Domain has one shard leader and two follower APICs.


Workload Distribution: Different VMM Domains may have the same or different shard leaders to balance the workload.




  1. VMware VMM Domain - vCenter Connectivity Troubleshooting




Go to Virtual Networking > Inventory > VMM Domains > <name of VMM domain (VDS)> > Controllers. You will find the IP address of vCenter in the Address field.


Log in to the APIC CLI:


apic1# show vmware domain name VDS_Site1 vcenter <vcenter-ip>


The output identifies which APIC holds the shard leader role for this vCenter.


a) Identifying the Shard Leader


apic1# show vmware domain name VDS_Site1


b) Verifying Connectivity to vCenter


apic1# ping 10.48.176.69


apic1# nslookup <vcenter-hostname>


c) Check if OOB or INB is used


apic1# bash

admin@apic1:~> route
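The route table shows whether vCenter is reached via the OOB interface (oobmgmt) or an in-band bond subinterface. As an alternative sketch, a single lookup for the vCenter IP from the underlying Linux shell also works (assuming the lab vCenter IP used below):

admin@apic1:~> ip route get 10.48.176.69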


d) Ensure Port 443 is allowed between all APICs and the vCenter, including any firewalls in the path of communication.


vCenter <-> APIC - HTTPS (TCP port 443) - communication


apic2# curl -v -k https://1.1.1.1 (for HTTP, it's simply curl http://1.1.1.1)


* Rebuilt URL to: https://1.1.1.1/
*   Trying 1.1.1.1...
* TCP_NODELAY set
* Connected to 1.1.1.1 port 443 (#0)
...

Verify that the shard leader has an established TCP connection on port 443 using the netstat command.


admin@apic1:~> netstat -tulaen | grep 1.1.1.1

tcp 0 0 1.1.1.1:40806 10.48.176.69:443 ESTABLISHED 600 13062800


2. VMware Inventory Troubleshooting


Purpose of Inventory Sync: Inventory sync events help the APIC stay updated on important changes in vCenter that require policy updates.


Types of Sync Events: There are two types of inventory syncs: full inventory sync and event-based inventory sync.


Full Inventory Sync: This happens every 24 hours by default but can also be triggered manually.


Event-Based Inventory Sync: This occurs during specific events, like when a virtual machine moves (vMotion) between hosts connected to different leaf switches.


APIC Actions During VM Movement: If a VM moves to a different host, the APIC will update the necessary settings on the leaf switches involved.


Importance of Syncing: If the APIC fails to sync properly with vCenter, it can cause issues, especially with on-demand deployment settings.


Error Reporting: If the sync fails or is incomplete, an error message will identify the problem.


Scenario 1: If a virtual machine is moved to a different vCenter or has an invalid configuration (such as being connected to an old or deleted DVS), the vNIC will be reported as having operational issues.


Fault fltCompVNicOperationalIssues


Rule ID: 2842


Explanation:

This fault is raised when the ACI controller fails to update the properties of a vNIC (for instance, when the vNIC points to an old or deleted DVS).


Code: F2842

Message: Operational issues detected for VNic name on VM name in VMM controller: hostOrIp with name name


Resolution:

Remediate the virtual machines indicated in the fault by assigning a valid port group to the affected vNIC.
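To list every VM/vNIC currently affected by this fault, you can query the fault instances from the APIC CLI (a hedged example using moquery's fault-code filter):

apic1# moquery -c faultInst -f 'fault.Inst.code=="F2842"'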


Scenario 2 — vCenter Administrator Modified a VMM Managed Object on vCenter:


It is not supported to modify objects managed by the APIC directly from vCenter. If an unsupported action is taken on vCenter, a fault will be triggered.


Fault fltCompCtrlrUnsupportedOperation

Rule ID: 133


Explanation:

This fault is raised when the deployment of a given configuration fails for a controller.

Code: F0133


Message: Unsupported remote operation on controller: hostOrIp with name name in datacenter rootContName


Resolution: If this scenario is encountered, undo the unsupported change in vCenter and then trigger an inventory sync, as shown below.



VMware VMM Domain - vCenter controller - trigger inventory sync



VMware DVS Version

When creating a new vCenter controller as part of a VMM domain, the default DVS Version setting is "vCenter Default". With this setting, the DVS is created with the same version as the vCenter itself.



VMware VMM Domain - vCenter controller creation


This means that, for example, with a vCenter running 6.5 and ESXi servers running 6.0, the APIC creates a DVS with version 6.5, and the vCenter administrator is therefore unable to add the ESXi 6.0 servers to the ACI DVS.


Symptoms 


APIC managed DVS - vCenter host addition - empty list



APIC managed DVS - vCenter host addition - incompatible hosts



Host Dynamic Discovery and Its Benefits

Unlike manual provisioning, ACI automatically detects where hosts and virtual machines (VMs) are connected.


Efficient Policy Deployment: ACI deploys policies only on the nodes where they are needed, optimizing the use of hardware resources on leaf switches.


Resource Optimization: VLANs, SVIs, and zoning rules are applied only when an endpoint requires them, improving hardware efficiency.


Ease of Use for Admins: ACI automatically provisions VLANs and policies where VMs connect, reducing manual work for network administrators.


Information Gathering: The APIC uses data from various sources to determine where policies need to be applied.


VMware VMM Domain - Deployment Workflow


  • LLDP or CDP Exchange: LLDP or CDP is exchanged between the hypervisor and leaf switches.

  • Hosts Report Adjacency: Hosts send adjacency information to vCenter.

  • vCenter Notifies APIC: vCenter informs the APIC about the adjacency information.

  • APIC Awareness: The APIC learns about the host through inventory sync.

  • APIC Pushes Policy: The APIC applies the necessary policy to the leaf port.

  • Policy Removal: If adjacency information from vCenter is lost, the APIC can remove the policy.


This highlights the critical role of CDP/LLDP in the discovery process, making it essential to ensure they are correctly configured and both sides are using the same protocol.


Use Case for LooseNode / Intermediate Switch

  • In setups with a blade chassis, there might be an intermediate switch between the leaf switches and the hypervisor.

APIC's Role:

  • The APIC needs to connect (or "stitch") the communication path between the leaf switches and the hypervisor.

Protocol Requirements:

  • Different discovery protocols might be needed because the intermediate switch may require different protocols than the host.

Detection and Mapping:

  • ACI can identify the intermediate switch (known as a LooseNode or "Unmanaged Fabric Node") and map out the hypervisors connected through it.

Viewing LooseNodes:

  • You can see the detected LooseNodes in the ACI interface under: Fabric > Inventory > Fabric Membership > Unmanaged Fabric Nodes



With LLDP or CDP discovery enabled, ACI can figure out the topology (network layout) for these LooseNodes. This works because the hypervisor connected through the intermediate switch is managed via VMM integration, and the leaf switch has a direct connection to the intermediate switch.



Resolution Immediacy


Critical Services and VMM-Integrated DVS:

  • When critical services, like management connectivity to vCenter/ESXi, use a VMM-integrated DVS, it's important to use the Pre-provision Resolution Immediacy setting.

Static Configuration:

  • This setting disables dynamic host discovery, meaning policies and VLANs are always statically configured on the host-facing interfaces.

VLAN Deployment:

  • VMM VLANs are permanently applied to all interfaces linked to the AEP (Attached Entity Profile) referenced by the VMM Domain.

Preventing VLAN Removal:

  • This setup ensures that critical VLANs, such as those for management, are not accidentally removed due to issues with discovery protocols.


Here's a summary of the resolution immediacy settings:


  • On-Demand: Policy is applied when a connection is made between the leaf switch and host, and a VM is attached to the port group.

  • Immediate: Policy is applied as soon as a connection is made between the leaf switch and host.

  • Pre-provision: Policy is applied to all ports associated with an AEP that includes the VMM Domain, without needing any connection.
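You can read the immediacy settings configured on each EPG's VMM domain association from the APIC CLI. A sketch, assuming resImedcy and instrImedcy as the attribute names on the fvRsDomAtt object (where 'lazy' corresponds to On-Demand):

apic1# moquery -c fvRsDomAtt | egrep 'dn|resImedcy|instrImedcy'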



Troubleshooting Scenarios 


VM Cannot Resolve ARP for its Default Gateway 


Here, VMM integration is set up and the DVS is added to the hypervisor, but the VM cannot reach its gateway in ACI. To fix the VM's network connectivity, make sure the adjacency is established and the VLANs are properly deployed.


Start by checking if the leaf switch has recognized the host. You can do this by running the show lldp neighbors or show cdp neighbors command on the leaf switch, depending on which protocol is being used.



Leaf101# show lldp neighbors


You can also check the DVS list from the host.


Log in to the ESXi host as root:


Host # esxcli network vswitch dvs vmware list


Host # esxcfg-vswitch -l (a legacy equivalent that also lists the vSwitches/DVS and their uplinks)
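To see what the host itself has heard via CDP/LLDP on a given uplink, you can also query the network hints from the ESXi shell (a hedged example; vmnic4 follows this lab's uplink naming):

Host # vim-cmd hostsvc/net/query_networkhint --pnic-name=vmnic4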


The above information can also be seen in the GUI:


vCenter Web Client - host - vmnic LLDP/CDP adjacency details



Check CDP or LLDP.


If the leaf switch doesn’t detect the LLDP connection from the ESXi host, it might be because the network adapter is generating LLDP packets instead of the ESXi operating system.


Check if the network adapter has LLDP enabled and is taking over all LLDP information.


If so, disable LLDP on the adapter so that it’s managed by the vSwitch policy.


Another possible issue could be a mismatch between the discovery protocols used by the leaf switch and the ESXi hypervisor. Make sure both sides are using the same discovery protocol.


To verify that the CDP/LLDP settings are correctly aligned between ACI and the DVS, go to the APIC UI and navigate to: Virtual Networking > VMM Domains > VMware > Policy > vSwitch Policy. Ensure that either LLDP or CDP is enabled, but not both, as they cannot be used together.



In vCenter go to: Networking > VDS > Configure.


vCenter Web Client UI - VDS properties



Correct the LLDP/CDP settings if needed.


Then validate that the APIC observes the ESXi host's LLDP/CDP neighborship with the leaf switch in the UI under Virtual Networking > VMM Domains > VMware > Policy > Controller > Hypervisor > General.


APIC UI - VMware VMM Domain - Hypervisor details




If this shows the expected values, validate that the VLAN is present on the port toward the host.



S1P1-Leaf101# show vlan encap-id 1035



vCenter/ESXi Management VMK Attached to APIC-Pushed DVS


vCenter and ESXi Management Traffic: When using VMM integrated DVS, take extra care to avoid issues with activating dynamic connections and VLANs.


vCenter Setup:

  • vCenter is usually set up before VMM integration.

  • Use a physical domain and static path to ensure the vCenter VM’s VLAN is always active on the leaf switches, even before VMM integration is complete.

  • Keep this static path in place even after VMM integration to ensure the EPG is always available.


ESXi Hypervisors:

  • According to the Cisco ACI Virtualization Guide, when migrating to the vDS, ensure the EPG for the VMK interface is deployed with the resolution immediacy set to Pre-provision.


  • This ensures the VLAN is always active on the leaf switches, without depending on LLDP/CDP discovery from the ESXi hosts.



Scenario Overview:

  • When vCenter or ESXi management traffic needs to use the VMM-integrated Distributed Virtual Switch (DVS), extra precautions are necessary to prevent issues with activating dynamic connections and required VLANs.


For vCenter:

  • Use a Physical Domain and Static Path:

    • Since vCenter is typically set up before VMM integration, configure a physical domain and a static path.

    • This ensures the vCenter VM encapsulation VLAN is always programmed on the leaf switches.

    • Allows the VLAN to be used even before VMM integration is fully established.


  • Maintain Static Path After VMM Integration:

    • Even after setting up VMM integration, keep the static path in place.

    • Ensures continuous availability of this Endpoint Group (EPG).


For ESXi Hypervisors:

  • Follow Cisco's Virtualization Guide:

    • When migrating to the vDS (virtual Distributed Switch), ensure the EPG for the VMkernel (VMK) interface is deployed with the resolution immediacy set to Pre-provision.


  • Benefits of Pre-provision Setting:

    • Guarantees the VLAN is always programmed on the leaf switches.

    • Eliminates the need to rely on LLDP/CDP discovery of the ESXi hosts.


Host Adjacencies not Discovered Behind LooseNode


Common Causes of LooseNode Discovery Issues:


  1. CDP/LLDP is Not Enabled:

LLDP or CDP protocols must be enabled and actively exchanging information between the intermediate switch, the leaf switches, and the ESXi hosts.

In Cisco UCS environments, enable LLDP/CDP through a network control policy applied to the vNIC.


2. Change in Management IP Address:

If the management IP address of an LLDP/CDP neighbor changes, it can disrupt connectivity.

vCenter may detect the new management IP in the LLDP/CDP information but won't automatically update the APIC.

  • To fix this issue, manually trigger an inventory sync in the APIC.


  3. VMM VLANs Not Added to the Intermediate Switch:


The APIC does not automatically configure VLANs on third-party blade or intermediate switches.


You need to manually add the VMM VLANs to the intermediate switch.


The Cisco UCS Manager (UCSM) integration app, called ExternalSwitch, is available from release 4.1(1) to assist with this.


Ensure that VLANs are configured and properly trunked:

  • Uplinks: Connect to the ACI leaf nodes.

  • Downlinks: Connect to the hosts.



F606391 - Missing Adjacencies for the Physical Adapter on the Host 


This fault means that CDP/LLDP adjacencies are missing for the host's physical adapter.
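To enumerate the adapters affected by this fault, query the fault instances from the APIC CLI (same moquery pattern as in the inventory section):

apic1# moquery -c faultInst -f 'fault.Inst.code=="F606391"'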



Hypervisor Uplink Load Balancing 


Connecting Hypervisors with Multiple Uplinks:

  • When you connect hypervisors like ESXi to an ACI fabric, they usually have multiple uplinks.

  • It's recommended to connect each ESXi host to at least two leaf switches.

  • This setup helps reduce the impact of failures or during upgrades.


Optimizing Uplink Usage with Load Balancing:

  • VMware vCenter allows you to configure different load balancing algorithms.

  • These algorithms optimize how virtual machine (VM) traffic uses the hypervisor's uplinks.


Importance of Consistent Configuration:

  • It's crucial that all hypervisors and the ACI fabric are set up with the same load balancing algorithm.

  • If they aren't aligned, it can cause intermittent traffic drops.

  • Mismatched configurations can also lead to endpoints moving unexpectedly within the ACI fabric.


This can be seen in an ACI fabric as excessive alerts, such as fault F3083:


ACI has detected multiple MACs using the same IP address 172.16.202.237.
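You can watch the IP flap between MAC addresses directly on the leaf (a sketch using the IP from the fault above):

Leaf101# show endpoint ip 172.16.202.237
(run the command repeatedly; if the MAC or interface keeps changing, the teaming configuration is mismatched)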


Rack Server


There are different methods for connecting an ESXi host to an ACI fabric; they fall into two main categories: switch-independent and switch-dependent load balancing algorithms.


  • Switch-Independent Load Balancing Algorithms:

    • These allow connections without requiring any special configurations on the switches.

    • The hypervisor manages load balancing internally, so the switches do not need to be aware of or participate in the load balancing process.


  • Switch-Dependent Load Balancing Algorithms:

    • These require specific configurations to be applied to the switches involved.

    • The switches play an active role in the load balancing process, necessitating coordination between the hypervisor and the switch settings.



Teaming and ACI vSwitch Policy

| VMware Teaming and Failover Mode | ACI vSwitch Policy | Description | ACI Access Policy Group - Port Channel Required |
| --- | --- | --- | --- |
| Route based on the originating virtual port | MAC Pinning | Select an uplink based on the virtual port IDs on the switch. After the virtual switch selects an uplink for a virtual machine or a VMkernel adapter, it always forwards traffic through the same uplink for this virtual machine or VMkernel adapter. | No |
| Route based on Source MAC hash | NA | Select an uplink based on a hash of the source MAC address. | NA |
| Explicit Failover Order | Use Explicit Failover Mode | From the list of active adapters, always use the highest order uplink that passes failover detection criteria. No actual load balancing is performed with this option. | No |
| Link Aggregation (LAG) - IP Hash Based | Static Channel - Mode On | Select an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data at those fields to compute the hash. IP-based teaming requires that on the ACI side a port channel/vPC is configured with 'mode on'. | Yes (channel mode set to 'on') |
| Link Aggregation (LAG) - LACP | LACP Active / Passive | Select an uplink based on a selected hash (20 different hash options available). LACP-based teaming requires that on the ACI side a port channel/vPC is configured with LACP enabled. Make sure to create an Enhanced LAG Policy in ACI and apply it to the vSwitch Policy. | Yes (channel mode set to 'LACP Active/Passive') |
| Route based on Physical NIC Load (LBT) | MAC Pinning - Physical-NIC-load | Available for distributed port groups or distributed ports. Select an uplink based on the current load of the physical network adapters connected to the port group or port. If an uplink remains busy at 75 percent or higher for 30 seconds, the host vSwitch moves a part of the virtual machine traffic to a physical adapter that has free capacity. | No |



Teaming and ACI vSwitch Policy - Details

  • VMware Teaming and Failover Modes:

    1. Route Based on the Originating Virtual Port (MAC Pinning):

      • Description:

        • Selects an uplink based on the virtual port IDs on the switch.

        • Once an uplink is chosen for a VM or VMKernel adapter, all traffic for that VM or adapter always uses the same uplink.

      • ACI Requirement: Not required.

    2. Route Based on Source MAC Hash:

      • Description:

        • Selects an uplink based on a hash of the source MAC address.

      • ACI Requirement: Not applicable.

    3. Explicit Failover Order:

      • Description:

        • Uses the highest priority uplink that is active and passes failover checks.

        • No load balancing is performed.

      • ACI Requirement: Not required.

    4. Link Aggregation (LAG) - IP Hash Based:

      • Description:

        • Selects an uplink based on a hash of the source and destination IP addresses of each packet.

        • For non-IP packets, uses available data fields to compute the hash.

      • ACI Requirement:

        • Yes, the channel mode must be set to 'on' in ACI.

    5. Link Aggregation (LAG) - LACP:

      • Description:

        • Selects an uplink based on a chosen hash method (20 different options available).

        • Requires LACP (Link Aggregation Control Protocol) to be enabled on the ACI side.

        • An Enhanced Lag Policy must be created in ACI and applied to the vSwitch Policy.

      • ACI Requirement:

        • Yes, the channel mode must be set to 'LACP Active/Passive'.

    6. Route Based on Physical NIC Load (LBT - Load-Based Teaming):

      • Description:

        • Available for distributed port groups or distributed ports.

        • Selects an uplink based on the current load of the physical network adapters connected to the port group or port.

        • If an uplink is busy at 75% or higher for 30 seconds, the host vSwitch moves some of the VM traffic to a less busy adapter.

      • ACI Requirement: Not specifically required, but helps in balancing traffic loads.

  • ACI vSwitch Policy Descriptions:

    1. ACI Access Policy Group - Port Channel Required:

      • Description:

        • For setups where a port-channel or Virtual Port Channel (VPC) is configured with 'mode on'.

        • Ensures that VLANs are consistently applied across all ports in the channel.

    2. Link Aggregation (LAG) - IP Hash Based:

      • Description:

        • Uses IP hash for selecting uplinks.

        • Requires static channel mode set to 'on' in ACI.

    3. Link Aggregation (LAG) - LACP:

      • Description:

        • Uses LACP for dynamic link aggregation.

        • Requires creating an Enhanced Lag Policy in ACI and applying it to the vSwitch Policy.

        • Ensures uplinks are managed with LACP Active/Passive settings.




  • Key Points to Ensure Proper Configuration:

    1. Consistency:

      • All hypervisors and the ACI fabric must use the same load balancing algorithm to ensure proper connectivity and traffic flow.

    2. Avoiding Traffic Issues:

      • Mismatched configurations can lead to intermittent traffic drops and unexpected endpoint movements within the ACI fabric.

    3. Optimizing Uplinks:

      • Proper load balancing ensures efficient use of uplinks and minimizes the impact of failures or maintenance activities.


By following these guidelines and ensuring that both VMware and ACI configurations are aligned, you can achieve reliable and efficient network connectivity for your hypervisors within the ACI fabric.


Cisco UCS B-Series Use Case


  • Connection Setup:

    • Cisco UCS B-Series servers connect to UCS Fabric Interconnects (FIs) within their chassis.

    • These FIs do not share a unified dataplane, which can cause differences in load-balancing methods between ACI leaf switches and vSwitches.


  • Key Points to Remember:

    • Port-Channels:

      • Each UCS Fabric Interconnect (FI) connects to ACI leaf switches using a port-channel.

    • FI Interconnections:

      • UCS FIs are directly connected to each other only for heartbeat (monitoring) purposes, not for handling data traffic.

    • vNIC Assignment:

      • Each blade server's virtual NIC (vNIC) is either:

        • Pinned to a specific UCS FI, or

        • Uses a path to an FI through UCS Fabric Failover (Active-Standby) mode.

    • Load Balancing Issues:

      • Using IP-hash algorithms on the ESXi host vSwitch can cause MAC address flapping on the UCS FIs, leading to network instability.





  • MAC Pinning Configuration:

    • Setup in ACI:

      • When MAC Pinning is configured in the Port-Channel Policy as part of the vSwitch Policy in ACI, it is reflected as "Route based on the originating virtual port" in the teaming settings of the port groups on the Virtual Distributed Switch (VDS).

    • How to View Configuration:

      • To verify this setup, navigate through the following path in VMware vCenter:



VMware vCenter - ACI VDS - Port Group - Load Balancing setting


This structured approach ensures that Cisco UCS B-Series servers are properly integrated with the ACI fabric, optimizing network performance and stability.










