I ran into an issue recently during installations at two locations, both of which were using Cisco C220 servers with the VIC 1227 installed, connecting to Nexus 9372PX switches.

Each C220 is configured with two 2-port 10GbE adapters, one being a VIC 1227 and the other an Intel X520 – topology shown below. Each server is running the latest build of vSphere 6, with the latest enic and fnic drivers installed.

Occasionally, I found that random ports would show as down in the vSphere Client, and doing a shut and no shut on the ports made no difference.

Looking at each 9K, I'd see messages like the ones below indicating that the link was flapping and eventually being shut down. I did the normal troubleshooting of reseating the cables, swapping cables, etc., all with no change in results.

2016 Aug 17 18:08:33 RCSO-9K-B %ETHPORT-5-IF_DOWN_ERROR_DISABLED: Interface Ethernet1/6 is down (Error disabled. Reason:Too many link flaps)
2016 Aug 17 18:11:43 RCSO-9K-B %ETHPORT-5-IF_DOWN_ERROR_DISABLED: Interface Ethernet1/9 is down (Error disabled. Reason:Too many link flaps)
2016 Aug 17 18:13:33 RCSO-9K-B %ETHPORT-5-IF_ERRDIS_RECOVERY: Interface Ethernet1/6 is being recovered from error disabled state (Last Reason:Too many link flaps)

Cursory searches on Google didn't turn up anything beyond more troubleshooting steps. I noticed that the ports having issues were all located on the VIC 1227, so I checked the CIMC to make sure there wasn't any misconfiguration on the ports, MAC address conflicts, etc.

On a whim, I ended up checking the Cisco Bug Tracker for something related to the 9Ks, and jackpot! Cisco bug CSCuy79306 details the issue where C-Series servers with a VIC 1225 or 1227 directly connected to a 9K exhibit link flaps.

The workaround provided in the bug report that worked for me was to set the link debounce timer to 3 seconds (the max is 5 seconds) by issuing this command on all of the trunk ports connected to the vSphere hosts:

link debounce time 3000
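
In context, the configuration on each affected interface looks something like the sketch below. The interface names are just examples taken from the log output above; your port numbers will differ, and you'd repeat this for every trunk port facing a vSphere host:

RCSO-9K-B# configure terminal
RCSO-9K-B(config)# interface Ethernet1/6, Ethernet1/9
RCSO-9K-B(config-if-range)# link debounce time 3000
RCSO-9K-B(config-if-range)# end
RCSO-9K-B# copy running-config startup-config

You can confirm the setting with show running-config interface Ethernet1/6. The debounce timer simply delays how quickly the switch reports a link-down event, so brief flaps within that 3-second window no longer trip the "too many link flaps" error-disable logic.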

This was an odd one to come across; I've done plenty of C-Series installs with 9Ks, but not too many with the VIC 1227 (or 1225) installed. Thankfully, the Cisco Bug Tracker helped me out yet again, and we were able to put this issue to bed!
