Welcome back to my short series on our journey of testing Nutanix and Mellanox. Following up on Part 1 of the Nutanix and Mellanox Series, I’m going to dive deeper into the Nutanix network configuration for use with the Mellanox SX1012.
Nutanix Network Configuration
Following the Nutanix AHV Best Practices Guide for networking (image below), we want to make sure that the CVM and VMs are using the 10GbE networking (Nutanix also recommends leaving the 1Gb connections unplugged if not needed).
If you read Part 1 of this series, you’ll recall that the dedicated IPMI port is 100Mb/s only and the Mellanox switch does not support 100Mb/s on its ports, so I’m going to use the shared 1Gb port for IPMI and to simulate User VM traffic. To get as close to Nutanix best practices as I can, I am going to follow these steps (watch the video by Jason Burns of Nutanix here to see what we’re doing – in fact, check out his entire Light Board Series!):
- Create Second Bridge for 1Gb interfaces
- Update Bridge0 to use ONLY 10GbE interfaces
- Update Bridge1 to use ONLY 1Gb interfaces
- Change Bond configuration to use balance-slb for load balancing
- Check MAC Address learning to validate the configuration
With AHV, all interfaces are placed in the same bond by default, using an active-backup configuration where only a single NIC is active at a given time. That means you can’t guarantee that CVM/VM traffic will use only the 10GbE adapters out of the box, which is why we want to split the 10GbE interfaces from the 1Gb interfaces – a pretty simple concept.
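Before making any changes, it’s worth confirming the default layout. Running the following from a CVM is a quick sanity check; on a fresh install it should show a single bridge br0 with one bond containing all four interfaces:
allssh manage_ovs show_uplinks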
In addition to the separate bonds, we want to make sure we’re actually taking advantage of both of our 10GbE links upstream. That’s why the active/backup bonding configuration isn’t optimal: we’re wasting 50% of our potential upstream bandwidth. AHV bonds support both balance-slb and LACP; in our case we’re going to stick with balance-slb, with a possible update in the near future.
Running Commands against Nutanix CVM and AHV
As we can see, the allssh command runs our command and outputs the results from each of the CVMs. We can also use allssh to run commands against the AHV hosts directly, as shown below. Notice the command uses root@192.168.5.1, which, based on our image above, is the direct connection between eth1 (CVM) and vnet1 (AHV).
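For example (hostname is just a harmless placeholder command here), the first line below runs on every CVM, while the second hops from each CVM to its local AHV host:
allssh hostname
allssh "ssh root@192.168.5.1 hostname"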
So let’s get started with the reconfiguration of the 2nd host!
Creating Second Bridge for 1Gb Interfaces
To create the second bridge, we can either use the allssh command via the CVM or log in directly to each host – the easier method is via the CVM, of course.
allssh "ssh root@192.168.5.1 'ovs-vsctl add-br br-1gb'"
Next, run the manage_ovs show_uplinks command to see what actually happened. Notice we now have our second bridge named br-1gb configured on each host, but there aren’t any interfaces associated with it.
Modify Bridges to Separate Interfaces
Now that we’ve created our second bridge specifically for the 1Gb interfaces, we need to modify br0 to use only the 10GbE interfaces. As we can see from the last image, eth0, eth1, eth2, and eth3 are all associated with the bridge br0. Our goal is to have the 10GbE interfaces associated with the first bridge and the 1Gb interfaces associated with the second.
We can use the manage_ovs show_interfaces command to see the interfaces and link speeds. manage_ovs has a parameter named interfaces, which allows you to specify 10g or 1g. First, we want to force ONLY the 10GbE interfaces to be associated with bridge br0. To do this, run:
allssh manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks
Next, associate the 1Gb interfaces with the new bridge:
allssh manage_ovs --bridge_name br-1gb --bond_name bond1 --interfaces 1g update_uplinks
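To double-check that each interface landed where we expect before touching the bond mode, re-run the listings from earlier:
allssh manage_ovs show_uplinks
allssh manage_ovs show_interfaces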
AHV Bond Configuration
To see the current bond configuration, run:
allssh "ssh root@192.168.5.1 'ovs-appctl bond/show bond0'"
As expected, the bond is still in active-backup mode. To change bond0 to use balance-slb, run:
allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond0 bond_mode=balance-slb'"
Validating our bond configuration is as simple as running the bond/show bond0 command again. Repeat the previous command for bond1 to change the load balancing model there as well, as shown below.
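For completeness, the bond1 equivalents follow the same pattern (assuming bond1 is the name we gave the 1Gb bond above):
allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond1 bond_mode=balance-slb'"
allssh "ssh root@192.168.5.1 'ovs-appctl bond/show bond1'"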
To set the rebalance interval on the bonds, run the following command on a CVM for each bond:
allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond0 other_config:bond-rebalance-interval=60000'"
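To confirm the interval took effect, bond/show reports the time until the next rebalance (the exact output format depends on the OVS version):
allssh "ssh root@192.168.5.1 'ovs-appctl bond/show bond0'"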
Validating Interface Configuration via MAC Addresses
To validate which physical interfaces are carrying which traffic, we can generate some traffic and then check where each MAC address was learned (see the commands after this list):
- Ping the IPMI address, 10.0.101.11
- Ping the AHV address, 10.0.101.21
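A minimal sketch of the MAC check, assuming OVS on the AHV host and MLNX-OS on the switch (the switch command name may vary by software version):
ovs-appctl fdb/show br0
show mac-address-table
The first command, run on an AHV host, lists the MAC addresses OVS has learned per bridge port; the second, run on the Mellanox switch, shows which physical switch port each MAC was learned on.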
Nutanix and Mellanox Series:
- Nutanix and Mellanox – A Journey
- Part 1: Lab Setup and Networking
- Part 2: Nutanix Network Configuration
- Part 3: Acropolis Data Protection Configuration
- Part 4: Prism Central Deployment and Configuration