I’m a fan of NSX.

Ever since I deployed it for the first time and got it working, I’ve realized both the power and the ease of use it provides.

I’ve had VMware NSX deployed in my lab for a while now, but I wanted to migrate my vSphere environment over to utilizing NSX fully for all VMs, minus vCenter, the PSC, etc.

At the time, I never put much thought into how I deployed NSX – I just got it installed, working, and done. Since I’m starting the process of rebuilding my lab (again…), I decided to document the process of getting it installed.

This will be Part 1 of my documentation of deploying NSX in my home lab.

Lab Background

  • My lab currently consists of three ESXi 6.0 hosts, running on home-built white boxes. Each host has a single quad-core Intel Xeon E3 CPU.
  • Each host has 32GB of memory and an SSD for PernixData FVP, and all storage lives on a Synology DS1513+ array.
  • I’ve got two clusters, one for Compute and one for Management: two hosts in Compute, one host in Management.
  • Lab networking currently consists of an ASA 5515-X at the edge, with Catalyst 3750 and 3650 switches handling all connectivity, using OSPF for dynamic routing – more to come on this later.

Deploying NSX

Nothing secret here – I’ve followed the standard process for getting NSX deployed, with a few tweaks to make it manageable in my own network. First off, with the most recent release of NSX (6.2), the memory requirement for NSX Manager was increased from 8/12GB of RAM to 16GB. In a production environment that’s not a big deal, but in a lab, memory is at a premium. NSX Manager was deployed into my management cluster, and I’ve found that after lowering its memory down to 8GB it still runs just fine; it just balks at you in the console. Once deployed, I configured NSX Manager to use my vCenter PSC for SSO, and registered it with vCenter itself.
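If you’d rather validate the vCenter registration from a script than from the UI, NSX Manager exposes it over its REST API. Here’s a minimal Python sketch – the hostname and credentials are placeholders for my lab, and the vcconfig endpoint is from the NSX 6.2 API guide:

```python
# Query NSX Manager's vCenter registration over the REST API.
# Hostname and credentials are lab placeholders -- adjust for your environment.
import requests

NSX_MGR = "https://nsxmanager.lab.local"   # hypothetical lab FQDN
AUTH = ("admin", "VMware1!")               # placeholder credentials

# Lab appliances typically run self-signed certs, hence verify=False.
resp = requests.get(f"{NSX_MGR}/api/2.0/services/vcconfig",
                    auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)   # XML describing the registered vCenter and its status
```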


I also configured NTP so that time is consistent across the environment.

Preparing ESXi for NSX

One of the best features of NSX, in my opinion (and there are many), is the Distributed Firewall that you get once the hosts have been prepared for NSX. The DFW doesn’t require the NSX controllers to be in place, VXLAN to be configured, or any other NSX component. It’s just a .vib that gets loaded on the hosts, and NSX Manager pushes the firewall rules down to them.
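Since the DFW is driven entirely by NSX Manager, you can also read the full ruleset back over the REST API. A quick sketch, using the same placeholder hostname and credentials as above:

```python
# Read the full Distributed Firewall configuration back from NSX Manager.
import requests

NSX_MGR = "https://nsxmanager.lab.local"   # placeholder lab FQDN
AUTH = ("admin", "VMware1!")               # placeholder credentials

resp = requests.get(f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config",
                    auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)   # XML: firewall sections, rules, and applied-to scopes
```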


This has been the first step whenever I’ve deployed NSX; it’s an easy task to knock out, with no host reboot required. Once the hosts have been prepared, firewall rules can be configured as necessary.

As an example, I’ve blocked pings at the cluster level (not on individual hosts). This functionality is fantastic: any host you might add to the cluster requires no additional modification to the DFW – it inherits the rules immediately.
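The same rule can be created programmatically. The sketch below is an approximation: the section ID, cluster moref, and the exact rule XML schema are assumptions you’d want to verify against the NSX API guide for your version. Note the If-Match header – the DFW API uses optimistic locking, so you have to echo back the ETag from a GET of the section:

```python
# Approximate sketch: add a deny-ICMP rule to an existing DFW section.
# SECTION and CLUSTER are hypothetical values; verify the rule schema
# against the NSX API guide for your version before using.
import requests

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")
SECTION = "1002"        # hypothetical layer 3 section ID
CLUSTER = "domain-c26"  # hypothetical compute cluster moref

# The DFW API uses optimistic locking: GET the section first and echo
# its ETag back in the If-Match header.
sec = requests.get(f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config/"
                   f"layer3sections/{SECTION}", auth=AUTH, verify=False)
sec.raise_for_status()

rule = f"""<rule disabled="false" logged="false">
  <name>Block ICMP to Compute</name>
  <action>deny</action>
  <appliedToList>
    <appliedTo><value>{CLUSTER}</value><type>ClusterComputeResource</type></appliedTo>
  </appliedToList>
  <services>
    <service><protocolName>ICMP</protocolName></service>
  </services>
</rule>"""

resp = requests.post(f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config/"
                     f"layer3sections/{SECTION}/rules",
                     auth=AUTH, verify=False, data=rule,
                     headers={"If-Match": sec.headers["ETag"],
                              "Content-Type": "application/xml"})
resp.raise_for_status()
```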


Deploying Controllers

The NSX Controllers are the “control plane” for NSX, managing all information about the network, virtual machines, etc. Each controller exists as a VM, and VMware recommends deploying a minimum of three for high availability.

Deploying the controllers is done entirely through the Web Client, and is completed in a few simple steps:

  1. Create master controller
  2. Deploy additional controllers

The controller does not require many configuration items, but it does require that an IP pool be set up prior to adding a controller (you may create an IP pool at the time of creating the first controller, though).
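For the API-inclined, the controller deployment can be scripted as well. Every ID in the sketch below (IP pool, resource pool, datastore, port group) is a hypothetical lab value – look up your own morefs first:

```python
# Sketch: deploy the first NSX controller via the REST API. Every ID
# below is a hypothetical lab value -- look up your own morefs first.
import requests

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")

spec = """<controllerSpec>
  <name>nsx-controller-1</name>
  <ipPoolId>ipaddresspool-1</ipPoolId>        <!-- pre-created IP pool -->
  <resourcePoolId>domain-c7</resourcePoolId>  <!-- management cluster -->
  <datastoreId>datastore-10</datastoreId>
  <networkId>dvportgroup-20</networkId>
  <password>VMware1!VMware1!</password>
</controllerSpec>"""

resp = requests.post(f"{NSX_MGR}/api/2.0/vdn/controller",
                     auth=AUTH, verify=False, data=spec,
                     headers={"Content-Type": "application/xml"})
resp.raise_for_status()
print(resp.text)   # job ID that can be polled for deployment progress
```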

Note that when the controllers are deployed, they do have some interesting resource requirements: 4 vCPUs and 4GB of RAM each.

Once the controllers are deployed, they will appear under the Controller nodes section.

Once each of the controllers has been deployed, we can check the cluster status and validate our cluster:
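The same check can be done over the API by listing the controllers and reading their status fields. The element names here are my reading of the 6.2 API responses, so treat them as assumptions:

```python
# List the deployed controllers and their status to validate the cluster.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")

resp = requests.get(f"{NSX_MGR}/api/2.0/vdn/controller",
                    auth=AUTH, verify=False)
resp.raise_for_status()

# Element names (id, ipAddress, status) are assumptions from the 6.2 docs.
for ctrl in ET.fromstring(resp.content).findall("controller"):
    print(ctrl.findtext("id"), ctrl.findtext("ipAddress"), ctrl.findtext("status"))
```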

Finalize Host Preparation
After we’ve deployed our controllers, we can move on to the VXLAN configuration of the hosts, which is the final step in host prep.

The VXLAN configuration is where we configure the VTEP, or VXLAN Tunnel Endpoint. The VTEP takes a data packet, encapsulates it with VXLAN header information, and sends it on its way to the receiving VTEP. This allows us to limit the underlying physical network changes required to support overlay technologies such as NSX.

On each cluster where we will be utilizing NSX, we need to configure each host with a VTEP. This is done on the Host Preparation tab of the NSX section in the vSphere Web Client.
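Host prep’s VXLAN step can also be driven through the nwfabric API. A hedged sketch – the cluster moref, DVS ID, and VTEP IP pool ID are lab placeholders, and the XML shape follows my reading of the NSX 6.2 API guide:

```python
# Sketch: configure VXLAN (VTEP creation) on a cluster via the nwfabric
# API. Morefs and the IP pool ID are lab placeholders; the XML shape
# follows my reading of the NSX 6.2 API guide.
import requests

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")

body = """<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.vxlan</featureId>
  <resourceConfig>
    <resourceId>domain-c26</resourceId>             <!-- compute cluster -->
    <configSpec class="clusterMappingSpec">
      <switch><objectId>dvs-30</objectId></switch>  <!-- cluster's DVS -->
      <vlanId>0</vlanId>
      <vmknicCount>1</vmknicCount>
      <ipPoolId>ipaddresspool-2</ipPoolId>          <!-- VTEP IP pool -->
    </configSpec>
  </resourceConfig>
</nwFabricFeatureConfig>"""

resp = requests.post(f"{NSX_MGR}/api/2.0/nwfabric/configure",
                     auth=AUTH, verify=False, data=body,
                     headers={"Content-Type": "application/xml"})
resp.raise_for_status()
```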

Logical Network Preparation
The next step in the NSX deployment is Logical Network Preparation. Here, we have three tabs to work through. First is VXLAN Transport, which should have already been taken care of for us during host prep. Second is Segment ID, which allows you to identify which VXLAN network segments will be managed by the NSX Manager. In my lab, I’ve used the values 6000-6500; VMware recommends starting at 5000 or above.
Finally, there’s the Transport Zone, which defines the span of the Logical Switch (VMware’s words, not mine). In most cases a single Transport Zone is all that’s required. The replication mode you choose determines what the physical network must support: Unicast requires no additional physical network changes, while Multicast (PIM/IGMP on the first-hop switch) and Hybrid (IGMP but not PIM on the first-hop switch) both require physical changes.
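Both of these items can be configured over the API too. The sketch below creates the 6000-6500 segment range I used and a single unicast transport zone; the cluster morefs are placeholders from my lab:

```python
# Sketch: create the 6000-6500 segment ID range and a single unicast
# transport zone. Cluster morefs are placeholders from my lab.
import requests

NSX_MGR = "https://nsxmanager.lab.local"
AUTH = ("admin", "VMware1!")
HDRS = {"Content-Type": "application/xml"}

segments = """<segmentRange>
  <name>Lab Segment Pool</name>
  <begin>6000</begin>
  <end>6500</end>
</segmentRange>"""
requests.post(f"{NSX_MGR}/api/2.0/vdn/config/segments", auth=AUTH,
              verify=False, headers=HDRS, data=segments).raise_for_status()

# Unicast mode keeps the physical network out of the replication path,
# so no PIM/IGMP changes are needed on the switches.
tz = """<vdnScope>
  <name>Lab Transport Zone</name>
  <clusters>
    <cluster><cluster><objectId>domain-c26</objectId></cluster></cluster>
    <cluster><cluster><objectId>domain-c7</objectId></cluster></cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""
requests.post(f"{NSX_MGR}/api/2.0/vdn/scopes", auth=AUTH,
              verify=False, headers=HDRS, data=tz).raise_for_status()
```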

So at this point, we’ve got our initial configuration items in place to start utilizing VMware NSX. The hosts are prepared and the VXLAN configuration is complete, but we’re still not really using NSX just yet.

In Part 2, I’ll cover the following:

  1. Creating Logical Switches – what provides our virtual Layer 2 functionality
  2. Deploying our Edge devices – the Edge Services Gateway for NAT, firewall, VPN, and load balancing; and the Logical Router, which provides our Layer 3 routing and bridging functionality
  3. Getting Traffic into and out of NSX networks

Thanks for reading, any feedback is always welcome!
