I recently deployed a new Nexus 93180YC-EX switch into my home lab to replace my aging 9372PX. Sure, the old switch was fine for a home lab, but I wanted to get up to some 25GbE speeds! I’ve got various equipment connected to that old Nexus, including two Nutanix clusters and a single VMware cluster, plus various other things; nothing too difficult to move at all.
Migrating from the 9372PX to the 93180YC-EX was fairly simple; the most cumbersome part was moving the FEX from the old switch to the new one. Then I started the code upgrades, as the switch was on an older v7 release of NX-OS, and the recommended release for this model was 10.3(6). So off I went. The next morning, I woke up, got some coffee, and headed to my office to get ready for a demo, only to notice that my primary Nutanix cluster was offline while everything else was fine. CIMC showed that the host was up, but it wasn’t pingable. Ok, let’s troubleshoot.
Background
My primary Nutanix cluster consists of three (3) Cisco UCS M5SX nodes, each with a VIC 1457, while my secondary Nutanix cluster is an NX-1065-G5. In both cases, each node has dual 10Gb uplinks to the Nexus, and nothing had changed other than swapping out the switch.
I noticed that the switch ports for the UCS servers were showing an odd status, with Eth1/11 reporting: Ethernet1/11 is down (dcxNoACKin100PDUs), while all the ports on the secondary Nutanix cluster showed a normal connected status. Again, nothing had changed in the connectivity for these nodes, and the port configs were exactly as they were on the previous switch. Hmmm, interesting… Why is this only impacting the UCS nodes, and none of the other servers?
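For anyone hunting the same symptom, the quickest way to spot it is the interface status itself. This is roughly what it looked like on my switch (hostname and abbreviated output are illustrative):

```
N93180# show interface Ethernet1/11
Ethernet1/11 is down (dcxNoACKin100PDUs)
```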
I shut/no shut the ports for the UCS servers, and miraculously the ports went back to connected, and I was able to access the cluster again. Success! For approximately 50 minutes… Then I lost connectivity again…
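The quick-and-dirty fix was just bouncing the affected ports from the switch CLI (Eth1/11 shown as an example; hostname is a placeholder):

```
N93180# configure terminal
N93180(config)# interface Ethernet1/11
N93180(config-if)# shutdown
N93180(config-if)# no shutdown
```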

Troubleshooting
As I went through my troubleshooting, knowing that shutting the port down and re-enabling it seemed to fix things, I started to wonder whether I had just been lucky this whole time, or whether it was a switch issue – after all, this was a brand-new’ish switch I had just received. The switch checked out, so I moved on to cabling. Same SFPs, same fiber; that all checked out too. I took the SFP and fiber from the working NX nodes and swapped them over to the UCS nodes, and everything worked up until about the 50-minute mark. Ok, so what gives?
Since all troubleshooting activities seem to start, end, or somehow circle back to the Google magic, I dropped the log entry from the switch into the search and, lo and behold, got a few hits. Some didn’t align with my issue, but then I came across one that referenced DCBX, which related to my error message. For background, DCBX is short for Data Center Bridging Exchange Protocol, and per Cisco:
DCBX is a discovery and capability exchange protocol that is used by devices enabled for Data Center Ethernet to exchange configuration information. The following parameters of the Data Center Ethernet features can be exchanged:
- Priority groups in ETS
- PFC
- Congestion notification
- Applications
- Logical link down
- Network interface virtualization
Moving on from what DCBX is to why it was impacting me took a bit longer. I saw references to FCoE, which I am definitely not running in my lab, so mark that one off. Then I went to the Cisco Bug Search Tool, which usually saves my bacon on very odd things, and this time was no different. I came across CSCwn90781, which gave me the info I needed. On the prior 9372 switch, I wasn’t running 10.x NX-OS code, which, if you caught it in my intro, was the recommended code for this new switch. Well now, this gives me something. This switch went from 7.x to 9.x over 2 days with no issues, and then when 10.x landed, boom…
Symptom: After performing an upgrade to 10.x code, occasionally an interface that is connected to a device (in the case of this bug, a UCS server MLOM adapter) that is configured to negotiate DCBXP CIN may be incorrectly brought up, even though CIN mode is not supported on N9K. Since the link comes up and we do not support CIN, DCBX ACK fails and after roughly 50 minutes (or 30 seconds per LLDP packet x 100 retries) the link will go into err-disabled state with the error: %ETHPORT-5-IF_DOWN_ERROR_DISABLED: Interface Ethernet1/X is down (Error disabled. Reason:DCX-No ACK in 100 PDUs)
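Note the math in the bug text: 30 seconds per LLDP PDU times 100 retries is 3,000 seconds, or the roughly 50 minutes I kept running into. Once those 100 PDUs are exhausted, the port lands in err-disabled, which you can confirm on the switch with something like (hostname is a placeholder):

```
N93180# show interface status err-disabled
N93180# show logging logfile | include DCX
```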
Great, so now we know why it’s happening; now let’s figure out how to change it. The workarounds provided two options: the first was to disable LLDP on the interface if DCBX isn’t needed, and the second was to change the mode. Well, I want LLDP enabled on the port so I can leverage that integration with other systems and with the Nutanix Prism network visibility, so that was out. I hopped into CIMC on the hosts and noticed that Enable LLDP was enabled on the VIC. Interesting…
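For completeness, the first workaround, disabling LLDP on just the affected interfaces, would look something like this on NX-OS (I didn’t go this route, since I wanted to keep LLDP on the switch side):

```
N93180(config)# interface Ethernet1/11
N93180(config-if)# no lldp transmit
N93180(config-if)# no lldp receive
```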

Another post had a workaround stating that IF LLDP was needed on the VIC, you could enable FIP mode so that DCBX could correctly negotiate. Ok, so let’s see how that works out for me, using a test host…
After checking Enable FIP Mode, power cycling the host, and waiting the 50 minutes, I noticed that the port remained stable with connectivity; however, the results were visibly not the normal LLDP output. But it was working!
Ok, so FIP mode helped, but I didn’t like how the LLDP neighbors were displayed, so that option was out. I also didn’t want to disable LLDP at the port level, so that option was out as well. Then I got to thinking: why do I need LLDP enabled at the VIC level on these hosts at all, when AHV was participating natively in LLDP, as highlighted by the Network Visualization capabilities? So I tested another scenario by unchecking Enable LLDP on the VIC, power cycling the host, and waiting the 50 minutes to see the results. I also unchecked Enable FIP Mode and Enable LLDP on the test host, so all 3 hosts were configured identically.
Results
After changing all 3 nodes, bouncing the ports, and walking away for some lunch and a bit of fresh air, I came back to see how it was going. Great news: by disabling LLDP at the VIC level, I’ve now gone 2 days with zero stability issues on the UCS nodes with the new switch, and we’re still seeing LLDP details about the hosts!
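If you want to sanity-check the end state yourself, the switch side tells the story: the ports stay connected past the 50-minute mark, and the AHV hosts still show up as LLDP neighbors (hostname is a placeholder for your switch):

```
N93180# show interface Ethernet1/11 | include line protocol
N93180# show lldp neighbors
```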