As I wrote in the last post that started our journey with Nutanix and Mellanox, we will be testing AHV DR replication for one of our partners while evaluating the Mellanox SX switch platform as a lower-cost 10/40GbE option.
The NX-1050 block was pre-configured at another location, so all of its network subnets will be recreated in this lab. The NX-3050 block is net new and will be configured onsite.
This post will serve as the initial setup of the lab testing environment and topology.
Lab Setup
Here is the setup we’ll be using for the lab testing:
- 1 Nutanix NX-1050 block with 4 nodes, each with 256GB RAM
- 1 Nutanix NX-3050 block with 4 nodes, each with 256GB RAM
- 1 Mellanox SX1012 half-width 40GbE switch
- 1 Cisco Catalyst 3850 switch with 10GbE module (core switching)
- 1 Cisco Catalyst 3560 switch (IPMI)
The topology of the lab is shown below.
The dedicated IPMI port on the Nutanix blocks is only 100Mb, which meant it couldn’t be plugged into the SX1012. I could have used the shared 1Gb port for IPMI and VM traffic, but I was limited on QSFP to SFP+ modules: Mellanox only sent me two, which restricted my ability to land those 1Gb connections on the SX1012 as well.
I decided to use a separate switch for dedicated IPMI and/or 1Gb Shared connections, giving some flexibility in connectivity.
Lab Networking Configuration
Since this will remain a somewhat isolated network outside of the core lab network, the Mellanox SX1012 will provide all Layer 3 functions for the VLANs contained within the Nutanix environment. Setting up Layer 3 interfaces isn’t available in the Mellanox Web GUI in the current version (3.6.1002), so all L3 configuration is done via the CLI.
Doing the L3 addressing is pretty much the same as on Cisco: create your VLAN, create your VLAN interface, and assign an IP address.
##
## L3 configuration
##
vrf definition management
ip routing vrf default
interface ethernet 1/12 no switchport force
interface loopback 0
interface vlan 100
interface vlan 101
interface vlan 112
interface vlan 113
interface vlan 911
interface vlan 912
interface vlan 915
interface vlan 920
interface ethernet 1/12 ip address 172.255.255.2 255.255.255.252
interface loopback 0 ip address 10.254.254.5 255.255.255.255
interface vlan 100 ip address 10.0.100.1 255.255.255.0
interface vlan 101 ip address 10.0.101.1 255.255.255.0
interface vlan 112 ip address 10.0.112.1 255.255.255.0
interface vlan 113 ip address 10.0.113.1 255.255.255.0
interface vlan 911 ip address 10.0.11.1 255.255.255.0
interface vlan 912 ip address 10.0.12.1 255.255.255.0
interface vlan 915 ip address 10.0.15.1 255.255.255.0
interface vlan 920 ip address 10.0.20.1 255.255.255.0
ip route vrf default 0.0.0.0 0.0.0.0 172.255.255.1
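One note on the dump above: the VLAN creation step itself isn’t shown. On MLNX-OS that’s just a one-liner per VLAN, run before the corresponding interface vlan is defined; a quick sketch for the VLANs used in this lab would look like this:
##
## VLAN creation (sketch)
##
vlan 100
vlan 101
vlan 112
vlan 113
vlan 911
vlan 912
vlan 915
vlan 920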
The default VRF will have its default route pointing up to the LabCore3850 switch via a 40GbE QSFP to 10GbE cable on interface E1/12, using a /30 network as the transit. Note from the image above that a 1Gb trunk connects the SX1012 to the LabMGMT3560 switch, which is dedicated to the IPMI and/or 1Gb connections.
On the SX1012, we’ll be using 40GbE QSFP to 10GbE breakout cables to take a single QSFP port down to both Nutanix blocks. Ports 1, 3, 8, and 10 will have the QSFP end plugged in, so for the breakout cables to work, we’ll need to modify each of those ports so that it presents the split ports out to the Nutanix blocks.
Modifying the interface is done by running the command below (another Mellanox quirk: it isn’t available via the Web GUI) on each interface.
interface ethernet 1/x module-type qsfp-split-4 force
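In this lab, that works out to running it against each of the four QSFP ports that have a breakout cable attached (ports 1, 3, 8, and 10 from above):
interface ethernet 1/1 module-type qsfp-split-4 force
interface ethernet 1/3 module-type qsfp-split-4 force
interface ethernet 1/8 module-type qsfp-split-4 force
interface ethernet 1/10 module-type qsfp-split-4 force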
Once the command has been run, you can see from the image below that the port is now split into four. Once split, the interfaces are referred to as E1/1/1, E1/1/2, E1/1/3, and E1/1/4, for example.
One thing that took me a little while to get used to on the Mellanox platform was the difference between the Trunk and Hybrid port terminology. I had set the interfaces up as Trunk ports, but even after providing the allowed VLANs I still didn’t have connectivity. When setting an interface as a Trunk in the GUI, there is no option for a Native VLAN (which I needed, since I had to set the native VLAN for the existing Nutanix block), and setting an Access VLAN and then selecting ALL simply changed the Access VLAN field to N/A.
Based on this Mellanox note, I found that when you set a port as Trunk, ALL traffic must be tagged and the Access VLAN option effectively does nothing. Mellanox has another port type called Hybrid, which allows an untagged VLAN and tagged VLANs on the same port. Once I set all of the ports to Hybrid, traffic to the Nutanix blocks worked correctly.
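For reference, here is a minimal sketch of what a Hybrid port looks like from the CLI. The interface and VLAN numbers are purely illustrative (VLAN 911 is assumed here to be the untagged VLAN for the Nutanix hosts), and the exact keywords may vary slightly between MLNX-OS versions:
##
## Hybrid port sketch - interface and VLAN numbers are for illustration only
##
interface ethernet 1/1/1 switchport mode hybrid
interface ethernet 1/1/1 switchport access vlan 911
interface ethernet 1/1/1 switchport hybrid allowed-vlan add 100
interface ethernet 1/1/1 switchport hybrid allowed-vlan add 101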
So now we’ve got our ports configured on the Mellanox switch, as well as the VLANs and SVIs.
Stay tuned for Part 2, where we jump into Nutanix networking to accommodate the 10GbE networking!
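If you want to sanity-check things from the CLI before moving on, a few generic MLNX-OS show commands cover it (output omitted here, and exact syntax can vary a little by version):
show vlan
show ip route
show running-config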
Nutanix and Mellanox Series:
- Nutanix and Mellanox – A Journey
- Part 1: Lab Setup and Networking
- Part 2: Nutanix Network Configuration
- Part 3: Acropolis Data Protection Configuration
- Part 4: Prism Central Deployment and Configuration