Deploying Nutanix and Mellanox - Part 1
As I wrote in the last post that started our journey with Nutanix and Mellanox, we will be testing AHV DR replication for one of our partners while evaluating the Mellanox SX switch platform as a lower-cost 10/40GbE switch.
The NX-1050 block was pre-configured at another location, so all of its network subnets will be recreated in this lab. The NX-3050 block is net new and will be configured onsite.
This post will cover the initial setup of the lab testing environment and topology.
Lab Setup
Here is the setup we'll be using for the lab testing:
- 1 Nutanix NX-1050 block with 4 nodes, each with 256GB RAM
- 1 Nutanix NX-3050 block with 4 nodes, each with 256GB RAM
- 1 Mellanox SX1012 half-width 40GbE switch
- 1 Cisco Catalyst 3850 switch with 10GbE module (core switching)
- 1 Cisco Catalyst 3560 switch (IPMI)
The lab topology is shown below.
Lab Topology
The dedicated IPMI port on the Nutanix blocks is only 100Mb, which meant it couldn't be plugged into the SX1012. I could have used the shared 1Gb port for both IPMI and VM traffic, but Mellanox only sent me two (2) QSFP-to-SFP+ modules, which really limited my ability to use the shared 1Gb ports on the SX1012 as well.
Lab Networking Configuration
For the Mellanox SX1012, I used the MGMT port to connect to my management network so I could configure the switch. All of the network interfaces use LACP, which I configured in MLNX-OS.
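To give a rough idea, here is what enabling LACP and bundling a pair of ports into a port-channel looks like from the MLNX-OS CLI (the port-channel and interface numbers are just examples, not my exact lab values):

lacp
interface port-channel 1
exit
interface ethernet 1/1 channel-group 1 mode active
interface ethernet 1/2 channel-group 1 mode active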
As you can see, I set up VLAN 1001 and VLAN 1002 for the two Nutanix clusters. I'll be placing the NX-1050 on VLAN 1001 and the NX-3050 on VLAN 1002.
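Creating the VLANs and an SVI for each is quick from the CLI as well; it looks something like the following, with placeholder IP addresses for illustration:

vlan 1001
exit
vlan 1002
exit
interface vlan 1001
ip address 10.0.1.1 /24
exit
interface vlan 1002
ip address 10.0.2.1 /24
exit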
I had to configure some of the QSFP+ ports for breakout, due to the cabling and the limited number of adapters.
To configure a QSFP+ port for breakout, from the Mellanox CLI, type the following command:
interface ethernet 1/x module-type qsfp-split-4 force
Once the command has been run, you can see from the image below that the port is now split into four. Once split, the interfaces are referred to as E1/1/1, E1/1/2, E1/1/3, and E1/1/4, for example.
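If you prefer the CLI over the GUI, the interface status output should list the split ports as well:

show interfaces status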
One thing that took me a little while to get used to on the Mellanox platform was the difference between the Trunk and Hybrid terminology. I was setting up the interfaces as Trunk ports for all of the VLANs, but noticed that I still didn't have connectivity even after providing the allowed VLANs. When setting an interface as a Trunk in the GUI, there isn't an option for a Native VLAN (and I needed a native, untagged VLAN for the existing Nutanix block), and setting an Access VLAN and then selecting ALL actually changed the Access VLAN field value to N/A.
Based on this Mellanox note, I found that when you set a port as a Trunk, ALL traffic must be tagged, and the Access VLAN option really does nothing. Mellanox has another port type called Hybrid, which allows you to have an untagged VLAN and tagged VLANs on the same port. Once I got all of the ports set as Hybrid, traffic to the Nutanix blocks worked correctly.
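For reference, here is roughly what the Hybrid configuration looks like from the CLI on one of the split ports, with VLAN 1001 as the untagged VLAN and VLAN 1002 tagged (the interface and VLAN choices here are examples):

interface ethernet 1/1/1
switchport mode hybrid
switchport access vlan 1001
switchport hybrid allowed-vlan add 1002
exit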
So now we've got our ports configured on the Mellanox switch, as well as the VLANs and SVIs.
Stay tuned for Part 2, where we jump into Nutanix networking to accommodate 10GbE!
Nutanix and Mellanox Series: