<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Tech Tips on Thoughts and Ramblings by Mike</title><link>https://mikedent.io/tags/tech-tips/</link><description>Recent content in Tech Tips on Thoughts and Ramblings by Mike</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>Mike Dent</copyright><lastBuildDate>Sun, 03 May 2026 09:00:00 -0400</lastBuildDate><atom:link href="https://mikedent.io/tags/tech-tips/index.xml" rel="self" type="application/rss+xml"/><item><title>Updating SSL Certificates for CML</title><link>https://mikedent.io/post/2026/5/updating-ssl-certificates-for-cml/</link><pubDate>Sun, 03 May 2026 09:00:00 -0400</pubDate><guid>https://mikedent.io/post/2026/5/updating-ssl-certificates-for-cml/</guid><description>
&lt;p&gt;If you have a Cisco Modeling Labs appliance in your lab or running on a piece of dedicated hardware, you have probably noticed two browser warnings every time you log in. CML ships with self-signed certs on both the main web UI and the Cockpit management UI, and your browser will complain about both. Cisco publishes an &lt;a href="https://developer.cisco.com/docs/modeling-labs/installing-ssl-certificate/"&gt;official guide for installing an SSL certificate on CML&lt;/a&gt;, and it is a solid starting point, but in my own runs it did not get me 100% of the way to the outcome I wanted. The procedure focuses on the nginx side, leaves Cockpit's quirks largely unaddressed, and does not cover renewal, rollback, or any pre- and post-checks. The helper script in this post fills those gaps so a single command handles the install, the renewal six months from now, and a rollback if something goes sideways. This post walks through what the script does, how to use it, and how to keep things tidy when your wildcard cert renews.&lt;/p&gt;
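&lt;p&gt;As a taste of the pre- and post-checks, something like this quick PowerShell probe (a minimal sketch, not the script itself; the hostname is a placeholder, with 443 assumed for the web UI and 9090 for Cockpit) confirms which certificate each UI is actually serving:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged sketch -- hostname is a placeholder; the { $true } callback accepts
# the existing self-signed cert so the probe works before and after the swap
$server = 'cml.lab.local'
foreach ($port in 443, 9090) {
    $tcp  = [System.Net.Sockets.TcpClient]::new($server, $port)
    $ssl  = [System.Net.Security.SslStream]::new($tcp.GetStream(), $false, { $true })
    $ssl.AuthenticateAsClient($server)
    $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate)
    '{0}: issuer={1} expires={2}' -f $port, $cert.Issuer, $cert.NotAfter
    $ssl.Dispose(); $tcp.Dispose()
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run it before the install to record the self-signed baseline, and again afterwards to confirm both UIs picked up the new cert and its expiry date.&lt;/p&gt;
</description></item><item><title>Creating AHV-Ready Windows ISOs with Embedded VirtIO Drivers</title><link>https://mikedent.io/post/2025/11/ahv-ready-windows-iso-virtio-drivers/</link><pubDate>Tue, 25 Nov 2025 11:02:40 -0500</pubDate><guid>https://mikedent.io/post/2025/11/ahv-ready-windows-iso-virtio-drivers/</guid><description>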
&lt;p&gt;Deploying Windows VMs on Nutanix AHV requires VirtIO drivers that aren't included in the standard Windows installation media. This means manually loading drivers during setup or mounting driver ISOs after installation. I've built a PowerShell tool that simplifies this process by injecting the VirtIO drivers directly into your Windows ISO, complete with both a GUI for ease of use and a CLI for automation.&lt;/p&gt;
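&lt;p&gt;Under the hood, the heart of the idea is the built-in DISM cmdlets. Here is a minimal hedged sketch of the injection loop (paths are examples, not the tool's actual parameters):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged sketch -- paths are placeholders; the real tool wraps this with a GUI/CLI
$wim   = 'C:\Temp\iso\sources\install.wim'
$mount = 'C:\Temp\wim'
New-Item -ItemType Directory -Path $mount -Force | Out-Null
# Inject the Nutanix VirtIO driver folder into every edition inside install.wim
foreach ($img in Get-WindowsImage -ImagePath $wim) {
    Mount-WindowsImage -ImagePath $wim -Index $img.ImageIndex -Path $mount | Out-Null
    Add-WindowsDriver -Path $mount -Driver 'C:\Temp\virtio' -Recurse | Out-Null
    Dismount-WindowsImage -Path $mount -Save | Out-Null
}
# The same loop against sources\boot.wim lets Windows Setup itself see VirtIO disks&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Issues with DCBX and LLDP on NX-OS 10.x</title><link>https://mikedent.io/post/2025/03/nexus-lldp-issues/</link><pubDate>Fri, 14 Mar 2025 08:00:00 -0400</pubDate><guid>https://mikedent.io/post/2025/03/nexus-lldp-issues/</guid><description>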
&lt;p&gt;I recently deployed a new Nexus 93180YC-EX switch into my home lab to replace the aging 9372PX. Sure, for a home lab the old switch was fine, but I wanted to get up to some 25GbE speeds! I've got various equipment connected to that old Nexus, with 2 Nutanix clusters and a single VMware cluster, plus various other things; nothing too difficult to move at all.&lt;/p&gt;
&lt;p&gt;Migrating from the 9372PX to the 93180YC-EX was fairly simple; the most cumbersome part was migrating the FEX from the old switch to the new switch. Then I started the code upgrades, as the switch was on an older v7 release of NX-OS and the recommended release for this model was 10.3(6). So off I went. The next morning, I woke up, got some coffee, headed to my office to get ready for a demo, and noticed that my primary Nutanix cluster was offline, but everything else was fine. CIMC showed that the host was up, but it wasn't pingable. Ok, let's troubleshoot.&lt;/p&gt;
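&lt;p&gt;To preview where the troubleshooting ends up (the title gives it away), these are the kinds of NX-OS commands involved. Treat this as a hedged illustration: interface numbers are placeholders, and disabling the DCBXP TLV is shown only as an example of the knob in question:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;! Hedged example -- interface numbers are placeholders
show lldp neighbors interface ethernet 1/1 detail
show system internal dcbx info interface ethernet 1/1
! if DCBX negotiation over LLDP is the culprit, stop advertising the TLV
configure terminal
  no lldp tlv-select dcbxp&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Understanding DNS Changes in Nutanix DR Recovery Plans</title><link>https://mikedent.io/post/2024/10/understanding-dns-changes-in-recovery-plans/</link><pubDate>Fri, 25 Oct 2024 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2024/10/understanding-dns-changes-in-recovery-plans/</guid><description>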
&lt;p&gt;&lt;strong&gt;Updated 10.31.24! I modified the script: instead of using vm_recovery.bat to do everything, it now calls a PowerShell script, which gives some extra flexibility in validating the current IP and DNS settings.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the best features of the Nutanix DR Recovery Plan is the ability to automate IP address changes based on failover criteria. Whether using Async, NearSync or Sync replication, you can create a recovery plan that automates the failover of specific VMs, or use a category to capture a group of VMs. I always try to use a category when possible, to remove any possibility of missing VMs that I want to fail over. The recovery plan allows me to set a power-on sequence for the VMs that are part of the recovery plan, as well as modify the network association and IP address to match the recovery site.&lt;/p&gt;
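&lt;p&gt;To give a flavor of what the PowerShell validation in the updated script does, here is a minimal hedged sketch (the addresses are placeholders, and this is not the actual script from the post):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged sketch -- expected values are placeholders for the recovery site
$expectedIp  = '10.2.10.25'
$expectedDns = '10.2.10.10', '10.2.10.11'
$nic = Get-NetIPConfiguration | Where-Object { $_.IPv4Address } | Select-Object -First 1
$ip  = ($nic.IPv4Address | Select-Object -First 1).IPAddress
if ($ip -ne $expectedIp) {
    Write-Warning "IP is $ip, expected $expectedIp"
}
# Correct the DNS servers only if they differ from what the recovery site needs
$current = (Get-DnsClientServerAddress -InterfaceIndex $nic.InterfaceIndex -AddressFamily IPv4).ServerAddresses
if (($current -join ',') -ne ($expectedDns -join ',')) {
    Set-DnsClientServerAddress -InterfaceIndex $nic.InterfaceIndex -ServerAddresses $expectedDns
}&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Weekly Tech Tip: Check your FEC!</title><link>https://mikedent.io/post/2024/07/check-your-fec/</link><pubDate>Fri, 26 Jul 2024 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2024/07/check-your-fec/</guid><description>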
&lt;h1 id="connectivity-issues-between-cohesity-c5016-nodes-and-nexus-93180yc-fx3h-switches"&gt;Connectivity issues between Cohesity C5016 Nodes and Nexus 93180YC-FX3H Switches&lt;/h1&gt;
&lt;p&gt;Very recently, I was deploying a new Cohesity C5016 appliance with 25Gb NICs, connecting up to a pair of Nexus 93180YC-FX3H switches. When using the 9Ks in a vPC pair, my personal preference is to configure the Cohesity nodes with LACP to get the most bandwidth possible (regardless of whether it's 10Gb or 25Gb connectivity). Nothing super creative there, and I've done this dozens of times in the past with no issue, on both the Cohesity appliances and Nexus 9Ks. But this time, it was different...&lt;/p&gt;
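&lt;p&gt;The title gives away the punch line, so as a hedged illustration (the port number is a placeholder), this is the kind of NX-OS interface setting in play when both ends of a 25Gb link need to agree on forward error correction:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;! Hedged example -- port number is a placeholder
interface ethernet 1/10
  fec rs-fec
! confirm the setting took
show running-config interface ethernet 1/10&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On the Linux side of the link, &lt;code&gt;ethtool --show-fec&lt;/code&gt; against the relevant interface shows what the NIC actually negotiated, which makes mismatches easy to spot.&lt;/p&gt;
</description></item><item><title>Weekly Tech Tip: Nutanix Centralized Local Password Management</title><link>https://mikedent.io/post/2024/07/weekly-tech-tip-nutanix-centralized-local-password-management/</link><pubDate>Fri, 12 Jul 2024 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2024/07/weekly-tech-tip-nutanix-centralized-local-password-management/</guid><description>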
&lt;h1 id="simplify-and-secure-centralized-password-management-with-prism-central"&gt;&lt;strong&gt;Simplify and Secure: Centralized Password Management with Prism Central&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;In the ever-evolving landscape of IT security, managing passwords across various platforms can be a daunting task. However, Nutanix has released a Centralized Local Password Management feature, ensuring a more secure and standardized approach for organizations. We'll dive into how this feature can simplify your password management strategy and bolster your security posture.&lt;/p&gt;
&lt;h3 id="the-challenge-of-distributed-password-management"&gt;&lt;strong&gt;The Challenge of Distributed Password Management&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Before we explore the solution, it’s essential to understand the problem. Managing local account passwords across multiple systems and platforms can be chaotic. Without a centralized management system, organizations often face:&lt;/p&gt;</description></item><item><title>Securing Local Administrator Passwords</title><link>https://mikedent.io/post/2024/02/securing-local-administrator-passwords/</link><pubDate>Sat, 03 Feb 2024 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2024/02/securing-local-administrator-passwords/</guid><description>
&lt;p&gt;In today's rapidly evolving digital landscape, maintaining a robust security posture is imperative for businesses and organizations of all sizes. One essential aspect of this security posture is effective password management, which can often be overlooked.&lt;/p&gt;
&lt;p&gt;A question I quite often ask my customers scares me with the responses I get.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Me:&lt;/em&gt;&lt;/strong&gt; How are you managing the local administrator passwords within your environment?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Response:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Option 1: We use a consistent local admin password across all of our servers for easy access.&lt;/p&gt;</description></item><item><title>Mastering Maintenance Mode Operations: Part 2 - vSphere</title><link>https://mikedent.io/post/2024/01/mastering-maintenance-mode-operations-in-nutanix-a-guide-for-ahv-and-esxi-part-2/</link><pubDate>Sat, 27 Jan 2024 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2024/01/mastering-maintenance-mode-operations-in-nutanix-a-guide-for-ahv-and-esxi-part-2/</guid><description>
&lt;p&gt;Welcome back to my series on maintenance operations with Nutanix! In &lt;a href="https://mikedent.io/post/2023/12/mastering-maintenance-mode-operations-in-nutanix-a-guide-for-ahv-esxi/"&gt;Part 1&lt;/a&gt;, I reviewed some scenarios on using maintenance mode with Nutanix AHV, using both Maintenance Mode functions within Prism Element and the CLI.&lt;/p&gt;
&lt;p&gt;As a quick recap from Part 1, you can use either the GUI functionality in AOS 6.x+ to place a host in maintenance mode or the CLI commands, for a variety of reasons.&lt;/p&gt;
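&lt;p&gt;For the vSphere side covered in this part, the basic flow looks something like this hedged PowerCLI sketch (the vCenter and host names are placeholders, not from the post):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged PowerCLI sketch -- names are placeholders. On Nutanix/ESXi, shut down
# the CVM on the host first, per the Nutanix documentation, since it cannot vMotion.
Connect-VIServer vcenter.lab.local
Get-VMHost esxi01.lab.local | Set-VMHost -State Maintenance
# ...perform the maintenance work, then bring the host back
Get-VMHost esxi01.lab.local | Set-VMHost -State Connected&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Mastering Maintenance Mode Operations in Nutanix: A Guide for AHV and ESXi</title><link>https://mikedent.io/post/2023/12/mastering-maintenance-mode-operations-in-nutanix-a-guide-for-ahv-esxi/</link><pubDate>Sat, 23 Dec 2023 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2023/12/mastering-maintenance-mode-operations-in-nutanix-a-guide-for-ahv-esxi/</guid><description>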
&lt;p&gt;As in years past, as the year comes to a close, I take some time to clean up the As-Built documentation templates I maintain for deployments; as part of this activity, I always come across sections that are outdated based on current feature sets, or areas that I think could use some additional content.&lt;/p&gt;
&lt;p&gt;This week, I noticed that my documentation about placing hosts into maintenance mode with Nutanix, whether running AHV or ESXi, is a bit outdated and could use some touching up.&lt;/p&gt;</description></item><item><title>FMCv 7.2 Upgrade Gotchas on AHV</title><link>https://mikedent.io/post/2022/07/fmcv-7-2-upgrade-gotchas/</link><pubDate>Fri, 01 Jul 2022 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2022/07/fmcv-7-2-upgrade-gotchas/</guid><description>
&lt;p&gt;Posting this more as a note to myself, as a reminder to read the release notes a bit more carefully! After recently going thru an upgrade of the Firepower Management Center (FMCv) from 7.0.x to 7.2, specifically on the Nutanix AHV platform, I ran into a bug where the VM would not boot after the upgrade.&lt;/p&gt;
&lt;p&gt;While the upgrade completed, the VM stalled at boot and then finally booted. However, there was no network access and I couldn't log in via the console, which was odd. I thought to myself, well, here's a scenario where I'm glad I know I've got a backup!&lt;/p&gt;
</description></item><item><title>Zerto Linux ZVM - Finally Available!</title><link>https://mikedent.io/post/2022/06/zerto-linux-zvm-finally/</link><pubDate>Wed, 08 Jun 2022 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2022/06/zerto-linux-zvm-finally/</guid><description>
&lt;p&gt;Finally, Zerto has released a Linux-based appliance for the Zerto Virtual Manager role. Finally!&lt;/p&gt;
&lt;p&gt;Now to be clear, I don't mean to use the term &lt;strong&gt;finally&lt;/strong&gt; in a negative sense with Zerto; it's more a sense of happiness that it's now available with the latest release of Zerto 9.5.&lt;/p&gt;
&lt;p&gt;Since there are still some limitations around the Linux version of the ZVM, I went ahead and deployed it into a greenfield environment that will have a limited scope of replication to begin with, giving us some time to see the evolution of the appliance over the next few quarters.&lt;/p&gt;
</description></item><item><title>Nutanix CE Refresh with Internal SSD Upgrade</title><link>https://mikedent.io/post/2020/02/nutanix-ce-refresh-with-internal-ssd/</link><pubDate>Mon, 17 Feb 2020 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2020/02/nutanix-ce-refresh-with-internal-ssd/</guid><description>
&lt;p&gt;It’s been a while since I’ve gotten any fresh content on this blog, hopefully I’ll get some content ideas to keep a regular cadence of updates going.&lt;/p&gt;
&lt;p&gt;While I was updating the cabling on the garage lab, I realized it had been a while since I had done anything on my CE lab from a version perspective; in fact, the last update I had done was March of 2019. So I figured now was as good a time as any to go ahead and upgrade the CE cluster.&lt;/p&gt;
</description></item><item><title>Nutanix Frame Deployment: Part 1 - The Setup</title><link>https://mikedent.io/post/2019/05/nutanix-frame-deployment-part-1-the-setup/</link><pubDate>Tue, 21 May 2019 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2019/05/nutanix-frame-deployment-part-1-the-setup/</guid><description>
&lt;p&gt;&lt;strong&gt;Updated 5.22.19&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Coming back from the Nutanix .Next conference two weeks ago, the biggest announcement that really got me excited was the ability for Nutanix Frame to run in AHV environments. AHV joins AWS (where Frame started), Azure and Google Cloud as an additional environment, currently in early release.&lt;/p&gt;
&lt;p&gt;I’ll be going thru a multi-part series around Frame and configuration use cases. So stay tuned!&lt;/p&gt;
&lt;p&gt;During the Frame Deep Dive session at .Next, I loved the &lt;strong&gt;What If&lt;/strong&gt; question that kept coming up around VDI. What if VDI could be simple yet effective? Secure and resilient? Scalable and user friendly? I’d like to rephrase that question from &lt;strong&gt;What If&lt;/strong&gt; to &lt;strong&gt;Why not?&lt;/strong&gt; With Frame, I think we have a solution that can meet the majority of use cases, while still providing fast setup and powerful desktops for end users. As the graphic below shows, it’s as simple as doing steps 1-5.&lt;/p&gt;
</description></item><item><title>Accelerate Migration to Nutanix AHV with Move</title><link>https://mikedent.io/post/2019/03/accelerate-the-migration-to-nutanix-ahv-with-move/</link><pubDate>Fri, 15 Mar 2019 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2019/03/accelerate-the-migration-to-nutanix-ahv-with-move/</guid><description>
&lt;p&gt;If you haven’t taken a close look at the hypervisor from Nutanix, AHV, you might be missing out on something very valuable – something you already have access to as a Nutanix customer. AHV addresses the majority of the use cases people require with virtualization, and it does so very well, with a simple deployment, simple management and POWERFUL features when Prism Central is added (and still powerful when it’s not).&lt;/p&gt;
</description></item><item><title>Installing Nutanix CE 5.6 with ISO Installer</title><link>https://mikedent.io/post/2018/06/installing-nutanix-ce-version-2018-05-01-with-iso-installer/</link><pubDate>Fri, 22 Jun 2018 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2018/06/installing-nutanix-ce-version-2018-05-01-with-iso-installer/</guid><description>
&lt;p&gt;Nutanix CE Version 5.6 is out, and it’s hot!!!&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;picture&gt;
&lt;img
loading="lazy"
decoding="async"
alt=""
class="image_figure image_internal image_unprocessed"
src="https://mikedent.io/post/2018/06/installing-nutanix-ce-version-2018-05-01-with-iso-installer/post/2018/06/installing-nutanix-ce-version-2018-05-01-with-iso-installer/images/nu-community.webp"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;br&gt;
With the release of Nutanix Community Edition version 5.6, Nutanix has also provided a new installation mechanism as an alternative to the previous dd imaging method, now allowing for a .iso installer.&lt;/p&gt;
&lt;h3 id="previous-ce-installs"&gt;Previous CE Installs&lt;/h3&gt;
&lt;p&gt;Previous Nutanix CE installs required the use of the dd utility to take the .img file and write it out to either a USB drive or another boot drive.&lt;/p&gt;
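&lt;p&gt;For reference, that older imaging step looked roughly like this (a hedged example; the image filename and target device are placeholders, so double-check the device before writing):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged example -- filename and /dev/sdX are placeholders; dd will happily
# overwrite the wrong disk, so verify the device first
dd if=ce-2018.05.01-stable.img of=/dev/sdX bs=4M status=progress
sync&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Nutanix Deployment with NVIDIA M60 GPU</title><link>https://mikedent.io/post/2018/01/nutanix-deployment-with-nvidia-m60-gpu/</link><pubDate>Wed, 03 Jan 2018 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2018/01/nutanix-deployment-with-nvidia-m60-gpu/</guid><description>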
&lt;p&gt;I recently had the opportunity to deploy 12 Nutanix nodes for a customer across 2 sites (Primary and DR), 6 of which were 3055-G5 nodes with dual NVIDIA M60 GPU cards installed and dedicated to running the Horizon View desktop VMs for this customer. This was my first experience doing a Nutanix deployment using the NVIDIA GPU cards with VMware, and thankfully there is plenty of documentation out there on the process.&lt;/p&gt;</description></item><item><title>ESXi Services Disabled in NCC Health Check</title><link>https://mikedent.io/post/2017/08/esxi-services-disabled-in-ncc-health-check/</link><pubDate>Thu, 10 Aug 2017 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2017/08/esxi-services-disabled-in-ncc-health-check/</guid><description>
&lt;p&gt;This week I had the pleasure of deploying 2 more Nutanix blocks on behalf of one of our partners, who is now starting to highly recommend Nutanix for their customer deployments of critical systems.&lt;/p&gt;
&lt;p&gt;The installation was pretty vanilla: 3 NX-1065-G5 nodes at the Primary site and matching at the DR site. For the VMware components, we went with the vCenter 6.5 appliance (I love the stability and speed of the 6.5 appliance, by the way), and for the ESXi hosts we went with 6.5 (build 4887370).&lt;/p&gt;
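&lt;p&gt;The issue in this post surfaced in an NCC health check (run from a CVM with &lt;code&gt;ncc health_checks run_all&lt;/code&gt;). To eyeball the same ESXi services from PowerCLI, a hedged one-liner along these lines works (the host name is a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged PowerCLI check -- host name is a placeholder; lists services not running
Get-VMHost esxi01.lab.local | Get-VMHostService | Where-Object { -not $_.Running }&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Zerto Replication SQL Server Tuning: Lessons Learned</title><link>https://mikedent.io/post/2017/07/zerto-replication-sql-server-tuning-lessons-learned/</link><pubDate>Mon, 31 Jul 2017 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2017/07/zerto-replication-sql-server-tuning-lessons-learned/</guid><description>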
&lt;p&gt;&lt;strong&gt;3.5 hours down to 6.5 minutes…&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Recently I went thru a project to get Zerto Replication up and running for an Emergency Dispatch customer who was moving away from RecoverPoint and SRM in an effort to simplify and consolidate their DR runbooks.&lt;/p&gt;
&lt;p&gt;As part of this project, we created multiple VPGs to match up with their software solutions, protecting around 5TB of total VM space. The smaller VPGs consisted of small groupings of VMs, most of which ranged between 250 and 500GB of provisioned storage. The 5th VPG was a large one, consisting of a heavily utilized production SQL Server and a reporting SQL Server, with around 3.2TB of provisioned storage.&lt;/p&gt;
</description></item><item><title>Deploying and Configuring Prism Central on AHV</title><link>https://mikedent.io/post/2016/09/deploying-and-configuring-prism-central-on-ahv/</link><pubDate>Thu, 22 Sep 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/09/deploying-and-configuring-prism-central-on-ahv/</guid><description>
&lt;p&gt;Continuing our journey with testing out Nutanix AHV functionality for one of our partners, one of the things we wanted to get deployed was Prism Central. Prism Central is very similar to VMware’s vCenter; Nutanix defines it as software that “provides centralized infrastructure management, one-click simplicity and intelligence for everyday operations.”&lt;/p&gt;
&lt;p&gt;The deployment and configuration of Prism Central differs slightly between ESXi/Hyper-V and AHV, but post deployment the configuration is similar. Deploying Prism Central when using ESXi is pretty simple – just download the &lt;em&gt;.ova&lt;/em&gt; file and deploy onto the host, while for Hyper-V and AHV you need to create a VM and clone the disks for the VM. Regardless of which platform you’re deploying Prism Central onto, it’s a very simple process to get up and running.&lt;/p&gt;
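&lt;p&gt;For the ESXi path, the .ova deployment can even be scripted; here is a hedged PowerCLI sketch (the file, vCenter and host names are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged PowerCLI sketch -- names and paths are placeholders
Connect-VIServer vcenter.lab.local
Import-VApp -Source 'C:\Downloads\prism-central.ova' -VMHost (Get-VMHost esxi01.lab.local) -Name PrismCentral&lt;/code&gt;&lt;/pre&gt;
</description></item><item><title>Acropolis Data Protection Configuration Guide</title><link>https://mikedent.io/post/2016/09/acropolis-data-protection-configuration/</link><pubDate>Wed, 21 Sep 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/09/acropolis-data-protection-configuration/</guid><description>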
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mikedent.io/post/2016/09/deploying-and-configuring-prism-central-on-ahv/"&gt;Part 4: Prism Central Deployment and Configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Welcome back to my series on our journey of testing Nutanix and Mellanox. Parts 1 &amp;amp; 2 of the series focused on Nutanix AHV networking and integrating with Mellanox, so we’re going to shift in Part 3 and look at the AHV configuration for getting Data Protection going and performing a failover test.&lt;/p&gt;
</description></item><item><title>Deploying Nutanix and Mellanox - Part 2</title><link>https://mikedent.io/post/2016/09/deploying-nutanix-and-mellanox-part-2/</link><pubDate>Wed, 21 Sep 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/09/deploying-nutanix-and-mellanox-part-2/</guid><description>
&lt;p&gt;Welcome back to my short series on our journey of testing Nutanix and Mellanox.  Following up on Part 1 of the Nutanix and Mellanox Series, I’m going to dive deeper into the Nutanix network configuration for use with the Mellanox SX1012.&lt;/p&gt;</description></item><item><title>Deploying Nutanix and Mellanox - Part 1</title><link>https://mikedent.io/post/2016/09/deploying-nutanix-and-mellanox-part-1/</link><pubDate>Mon, 19 Sep 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/09/deploying-nutanix-and-mellanox-part-1/</guid><description>
&lt;p&gt;As I wrote about in the last post that started our journey with Nutanix and Mellanox, we will be testing AHV DR replication for one of our partners while evaluating the use of the Mellanox SX switch platform as a lower-cost 10/40GbE switch.&lt;/p&gt;
&lt;p&gt;The NX-1050 Block was pre-configured at another location, so all network subnets will be recreated in this lab. The NX-3050 block is net new, and that will be configured onsite.&lt;/p&gt;</description></item><item><title>EMC Unity Install: Quick Setup Guide</title><link>https://mikedent.io/post/2016/08/emc-unity-install/</link><pubDate>Mon, 22 Aug 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/08/emc-unity-install/</guid><description>
&lt;p&gt;I was fortunate to do my first EMC Unity install today (Unity 300 specifically), and Unity follows the path of the VNXe installation sequence, pretty easy. This blog post is about as short as the Unity install is 🙂&lt;/p&gt;
&lt;p&gt;Initializing the Unity array uses the &lt;em&gt;same&lt;/em&gt; Connection Utility that the VNXe uses, though the Unity Connection Utility does not support the VNXe, and vice versa for the latest VNXe Connection Utility; so you can’t have both installed on the same machine.&lt;/p&gt;
</description></item><item><title>Deploying NSX in a Home Lab - Part 2</title><link>https://mikedent.io/post/2016/04/deploying-nsx-in-a-home-lab-part-2/</link><pubDate>Fri, 01 Apr 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/04/deploying-nsx-in-a-home-lab-part-2/</guid><description>
&lt;p&gt;It’s been over 6 months since I last had NSX working in my home lab, and with a rebuild I decided it was time to wrap up Part 2 of my NSX in a home lab blog post.&lt;/p&gt;
&lt;p&gt;In &lt;a href="http://34.207.103.27/2015/10/22/deploying-nsx-in-a-home-lab-part-1/"&gt;Part 1&lt;/a&gt; of my Deploying NSX series, we covered the prep of NSX in the environment, including deploying the NSX Manager appliance, deploying NSX Controllers and vSphere host preparation. In this part of the series, we’ll cover the creation of Logical Switches and our NSX Edge, which consist of our Edge Services Gateway (Providing DHCP, Firewall, VPN, NAT, Routing and Load Balancing capabilities). Part 3 will cover the deployment of the Logical Router, which provides our routing and bridging for the existing networks, as well as configuring routing to get traffic into and out of our new NSX environment.&lt;/p&gt;</description></item><item><title>Deploying NSX in a Home Lab - Part 3</title><link>https://mikedent.io/post/2016/04/deploying-nsx-in-a-home-lab-part-3/</link><pubDate>Fri, 01 Apr 2016 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2016/04/deploying-nsx-in-a-home-lab-part-3/</guid><description>
&lt;p&gt;Onto the Logical Router….&lt;/p&gt;
&lt;p&gt;In &lt;a href="http://34.207.103.27/2015/10/22/deploying-nsx-in-a-home-lab-part-1/"&gt;Part 1&lt;/a&gt; of my Deploying NSX series, we covered the prep of NSX in the environment, including deploying the NSX Manager appliance, deploying NSX Controllers and vSphere host preparation. In &lt;a href="http://34.207.103.27/2016/04/01/deploying-nsx-in-a-home-lab-part-2"&gt;Part 2&lt;/a&gt; this part of the series, we covered the creation of Logical Switches and our NSX Edge, which consist of our Edge Services Gateway (Providing DHCP, Firewall, VPN, NAT, Routing and Load Balancing capabilities). In our 3rd part in the series, we’ll cover the deployment of the Logical Router, which provides our routing and bridging for the existing networks, as well as configuring routing to get traffic into and out of our new NSX environment.&lt;/p&gt;</description></item><item><title>Trick to Horizon View EUC Access Point Deployment</title><link>https://mikedent.io/post/2015/11/trick-to-horizon-view-euc-access-point/</link><pubDate>Sat, 14 Nov 2015 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2015/11/trick-to-horizon-view-euc-access-point/</guid><description>
&lt;h3 id="finally"&gt;Finally!!&lt;/h3&gt;
&lt;p&gt;I’ve been trying to get the Horizon View EUC Access Point deployed in my home lab for a while now. No matter how I did it, I could just not get the Access Point to work correctly.&lt;/p&gt;
&lt;p&gt;I love the idea of the Access Point, being a simple ‘if it breaks, redeploy it’ method, but it really was making me wonder just how ready this was. Turned out, it was all on me…&lt;/p&gt;</description></item><item><title>Installing ManageEngine OpUtils on CentOS 7</title><link>https://mikedent.io/post/2015/11/installing-manageengine-oputils-on-centos-7/</link><pubDate>Fri, 13 Nov 2015 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2015/11/installing-manageengine-oputils-on-centos-7/</guid><description>
&lt;p&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;br&gt;
I’ve been using ManageEngine’s &lt;a href="https://www.manageengine.com/products/oputils/"&gt;OpUtils&lt;/a&gt; product for a few years now for IP Address Management (IPAM). While it has a lot of other great features, I’ve really liked the way they do IPAM. Yes, Microsoft has IPAM now built into Windows, but I’ve never liked the setup of the Windows IPAM configuration, and the lack of a good Web UI for IPAM made me like it even less.&lt;/p&gt;
&lt;p&gt;OpUtils provides a subset of the OpManager Suite from ManageEngine, and subsequently integrates into OpManager. OpUtils 8 runs on both Windows and Linux platforms, and I've always run it on Windows: a) because it's easier to set up and get going, and b) because it offers the ability to pull OS-level information once you set up domain credentials thru WMI.&lt;/p&gt;
</description></item><item><title>Deploying NSX in a Home Lab - Part 1</title><link>https://mikedent.io/post/2015/10/deploying-nsx-in-a-home-lab-part-1/</link><pubDate>Thu, 22 Oct 2015 00:00:00 +0000</pubDate><guid>https://mikedent.io/post/2015/10/deploying-nsx-in-a-home-lab-part-1/</guid><description>
&lt;h3 id="im-a-fan-of-nsx"&gt;&lt;strong&gt;I’m a fan of NSX.&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Ever since I deployed it for the first time and got it working, I’ve realized the power AND ease of what it provides.&lt;/p&gt;
&lt;p&gt;I’ve had &lt;a href="https://www.vmware.com/products/nsx"&gt;VMware NSX&lt;/a&gt; deployed in my lab for a while now, but I wanted to migrate my vSphere environment over to utilizing NSX fully for all VMs, minus vCenter, the PSC, etc.&lt;/p&gt;
&lt;p&gt;At the time, I never put much thought into how I deployed NSX; I just got it installed, working and done. I decided, since I’m starting the process of rebuilding my lab (again…), to document the process of getting it installed.&lt;/p&gt;</description></item></channel></rss>