Purely Nutanix: The FlashArray Integration Expands Customer Choice

Overview

When Nutanix and Pure Storage announced their partnership at .NEXT 2025 in May, it turned heads across the industry. Two companies that had previously competed were now working together to deliver something customers have been asking for: the operational simplicity of Nutanix Cloud Infrastructure combined with the raw performance of Pure Storage FlashArray.

Some might question the motives behind this partnership. Is it a defensive play? A response to market pressures? I see it differently. This integration continues both companies' commitment to putting customers first. Nutanix has always been about giving customers choice and simplifying infrastructure. Pure has built its reputation on performance and customer experience. Bringing those philosophies together isn't a contradiction; it's a natural evolution.

More importantly, this advances Nutanix's strategy to support all workloads. By pairing Nutanix Cloud Infrastructure with FlashArray's performance, Nutanix can now credibly target high-performance workloads that might previously have required dedicated infrastructure: Epic for healthcare, high-transaction databases, and other mission-critical deployments where sub-millisecond latency isn't optional.

Now that the solution is generally available, it's worth taking a closer look at what this integration actually delivers, what's required to deploy it, and why it matters.

What the Integration Looks Like

At its core, this is Nutanix AHV running on Pure Storage FlashArray over NVMe/TCP. But the integration goes deeper than just connectivity.

Every AOS vDisk maps directly to a FlashArray volume, thin-provisioned and presented dynamically. This means a VM with multiple virtual disks has multiple corresponding FlashArray volumes, enabling per-disk granularity for data services. You can apply snapshots, quality-of-service controls, and replication policies at the individual virtual disk level without impacting other disks on the same VM or neighboring workloads.
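To make that mapping concrete, here's a minimal sketch of how you might model it for inventory or planning purposes. The class and field names are mine, not AOS or Purity identifiers; the point is simply that every vDisk gets its own thin-provisioned FlashArray volume.

```python
from dataclasses import dataclass

@dataclass
class VDisk:
    # Illustrative model only; these names are not AOS or Purity API identifiers.
    name: str
    size_gb: int

@dataclass
class VirtualMachine:
    name: str
    vdisks: list

def flasharray_volumes_for(vm):
    """Each AOS vDisk maps 1:1 to its own thin-provisioned FlashArray volume."""
    return [
        {
            "volume": f"{vm.name}-{d.name}",
            "provisioned_gb": d.size_gb,  # thin-provisioned: space is consumed on write
        }
        for d in vm.vdisks
    ]

vm = VirtualMachine("sql01", [VDisk("os", 100), VDisk("data", 2048), VDisk("log", 512)])
for volume in flasharray_volumes_for(vm):
    print(volume)  # three vDisks on one VM -> three FlashArray volumes
```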

The entire solution is managed through Nutanix Prism, providing a unified interface for both Day 1 deployment and Day 2 operations. You're not bouncing between consoles. Prism handles VM-centric operations while FlashArray handles the heavy lifting underneath.

Platform Requirements: HPE and Cisco Only

Here's where things get interesting. Unlike the broad hardware compatibility list you might expect from Nutanix, this integration has a tightly curated set of supported compute platforms.

For the initial release, you're looking at HPE and Cisco servers exclusively:

  • HPE: Gen10+ and Gen11 DL-series rack servers, supporting both Intel (Ice Lake, Sapphire Rapids, Emerald Rapids) and AMD (Milan, Genoa) processors
  • Cisco: M6 and M7 HCI C-series nodes, Intel processors only

Notably absent is AMD support on the Cisco side. Something to watch for in future releases.

I do find it interesting that the Cisco B200 M5 blade also made the compatibility list. These Cascade Lake-based blades are getting long in the tooth; Cisco announced end-of-sale back in 2022. My guess is this was included to support customers with existing blade infrastructure who want to leverage the disaggregated architecture without a complete hardware refresh. Smart move for adoption, though I wouldn't recommend it for new deployments.

Software Stack Requirements:

  • AOS: 7.5
  • AHV: 11.0
  • Foundation: 5.10
  • Foundation Central (Cisco): 1.10
  • Foundation Central (HPE): 2.0
  • Prism Central: 7.5
  • Purity//FA: 6.10.3
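
If you maintain a pre-flight checklist, a small sketch like this can encode those minimums. The version numbers come from the list above; the comparison logic is my own and assumes simple dotted numeric version strings.

```python
# Minimum software versions for the integration (from the list above).
MINIMUMS = {
    "AOS": "7.5",
    "AHV": "11.0",
    "Foundation": "5.10",
    "Foundation Central (Cisco)": "1.10",
    "Foundation Central (HPE)": "2.0",
    "Prism Central": "7.5",
    "Purity//FA": "6.10.3",
}

def parse(version):
    """Assumes plain dotted numeric versions, e.g. '6.10.3'."""
    return tuple(int(part) for part in version.split("."))

def check_stack(installed):
    """Return the components that are missing or below the documented minimums."""
    failures = []
    for component, minimum in MINIMUMS.items():
        current = installed.get(component)
        if current is None or parse(current) < parse(minimum):
            failures.append(f"{component}: need >= {minimum}, found {current}")
    return failures

# Hypothetical inventory pulled from your own records, not from any API.
print(check_stack({"AOS": "7.5", "AHV": "11.0", "Purity//FA": "6.10.2"}))
```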

Prism Central Implications: This is an important one. You need to deploy a separate Prism Central instance in a highly available configuration to manage Nutanix compute clusters in this setup. Do not use an existing Prism Central instance that already manages other Nutanix hyperconverged infrastructure (HCI) clusters. This is a hard requirement, not a recommendation. Plan your Prism Central deployment accordingly.

Supported Topologies

Before diving into cluster sizing, it's worth understanding the supported topology constraints:

  • A Nutanix Cloud Infrastructure (NCI) compute cluster can connect to only one external storage entity (one Pure Storage FlashArray)
  • However, a single FlashArray can serve multiple NCI compute clusters
  • Pure Storage FlashArray supports data-at-rest encryption for storage consumed by NCI compute clusters

This one-to-one (cluster to array) but one-to-many (array to clusters) relationship is important for capacity planning. If you need to isolate workloads across different FlashArrays, you'll need a separate NCI cluster for each. You can't, for example, run server and VDI workloads on a single NCI cluster and point each workload at a different FlashArray.
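
If you track your intended topology in planning data, a sketch like the following can catch violations of that rule before deployment. The input format and names are illustrative, not tied to any Nutanix or Pure tooling.

```python
from collections import defaultdict

def validate_topology(mappings):
    """mappings: (nci_cluster, flasharray) pairs from your design notes.

    One NCI compute cluster may use only one FlashArray; a single FlashArray
    may serve many NCI clusters.
    """
    arrays_per_cluster = defaultdict(set)
    for cluster, array in mappings:
        arrays_per_cluster[cluster].add(array)

    return [
        f"{cluster} targets {len(arrays)} FlashArrays ({', '.join(sorted(arrays))}); only one is supported"
        for cluster, arrays in arrays_per_cluster.items()
        if len(arrays) > 1
    ]

design = [
    ("nci-server", "fa-prod-1"),
    ("nci-vdi", "fa-prod-1"),    # many clusters on one array: fine
    ("nci-mixed", "fa-prod-1"),
    ("nci-mixed", "fa-prod-2"),  # one cluster targeting two arrays: not supported
]
print(validate_topology(design))
```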

Cluster Sizing Constraints

The sizing requirements reflect the disaggregated nature of this deployment:

  • Production environments: Minimum 5 compute nodes
  • Non-production environments: Minimum 3 compute nodes
  • One- and two-node clusters: Explicitly not supported

The 5-node minimum for production is interesting. Traditional Nutanix HCI clusters can run production workloads on 3 nodes, but here the bar is higher. My guess is this relates to the distributed nature of AOS services when running without local storage. With no local SSDs for metadata and caching, you need more compute redundancy to maintain cluster quorum and service availability during node failures or maintenance events. It's a reasonable trade-off for the architecture, but it does increase the entry point for smaller deployments.

Each compute node requires at least 1 TB of provisioned capacity on the FlashArray for Nutanix internal usage. The solution scales up to 32 nodes per cluster with support for up to 5,000 vDisks. Given the one-to-one mapping between vDisks and FlashArray volumes, that's 5,000 volumes per cluster. Keep that in mind when planning VM density.
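
Here's a sketch of those sizing rules as a quick planning check. The thresholds come straight from this section; the function itself is just an illustration.

```python
def check_cluster_sizing(nodes, vdisks, capacity_tb_per_node, production=True):
    """Apply the documented sizing constraints to a planned cluster."""
    issues = []
    minimum_nodes = 5 if production else 3
    if nodes < minimum_nodes:
        issues.append(f"{nodes} nodes is below the minimum of {minimum_nodes} for "
                      f"{'production' if production else 'non-production'}")
    if nodes > 32:
        issues.append(f"{nodes} nodes exceeds the 32-node cluster maximum")
    if vdisks > 5000:
        issues.append(f"{vdisks} vDisks exceeds the 5,000 vDisk (and volume) ceiling")
    if capacity_tb_per_node < 1:
        issues.append("each node needs at least 1 TB provisioned on the FlashArray "
                      "for Nutanix internal usage")
    return issues

print(check_cluster_sizing(nodes=4, vdisks=5200, capacity_tb_per_node=1.0))
```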

Network Considerations

NVMe/TCP is the only supported storage protocol. If you were hoping to leverage existing Fibre Channel or iSCSI infrastructure, this isn't the integration for you. Neither FC nor iSCSI connectivity is supported. This keeps the architecture clean and optimized for NVMe performance, but it does mean you need:

  • Minimum 10 GbE end-to-end connectivity (25 GbE or 100 GbE recommended)
  • Jumbo frames (MTU 9000) recommended for optimal performance
  • A dedicated virtual switch (vs1) for storage traffic, separate from management (vs0)
  • LACP on the switch ports for maximum performance and reliability

The FlashArray side requires specific NIC configurations depending on your hardware generation. FlashArray//X R4 can use onboard 25 GbE NICs, while //X R3 and //XL models need additional NICs for NVMe/TCP support.
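
If you want to sanity-check the fabric before deployment, a sketch along these lines captures the requirements above. The link speeds, MTU, and vs0/vs1 split come from this section; the interface-facts input format is simply an assumption about how you might collect that data.

```python
def check_storage_fabric(interfaces):
    """interfaces: e.g. [{"name": "eth2", "speed_gbe": 25, "mtu": 9000, "switch": "vs1"}]

    Checks the documented guidance: at least 10 GbE end to end, jumbo frames
    recommended, and storage traffic on a dedicated virtual switch (vs1).
    """
    warnings = []
    for nic in interfaces:
        if nic["speed_gbe"] < 10:
            warnings.append(f"{nic['name']}: {nic['speed_gbe']} GbE is below the 10 GbE minimum")
        elif nic["speed_gbe"] < 25:
            warnings.append(f"{nic['name']}: meets the minimum, but 25 or 100 GbE is recommended")
        if nic["mtu"] < 9000:
            warnings.append(f"{nic['name']}: MTU {nic['mtu']}; jumbo frames (9000) are recommended")
        if nic["switch"] != "vs1":
            warnings.append(f"{nic['name']}: storage traffic should use a dedicated vs1, not {nic['switch']}")
    return warnings

print(check_storage_fabric([
    {"name": "eth2", "speed_gbe": 10, "mtu": 1500, "switch": "vs0"},
    {"name": "eth3", "speed_gbe": 25, "mtu": 9000, "switch": "vs1"},
]))
```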

Lifecycle Management

One area worth understanding upfront is how updates work in this disaggregated model. The short answer: two separate management planes.

Nutanix Compute Cluster: Use Life Cycle Manager (LCM) to update software and firmware on the compute nodes. LCM handles these updates without downtime, maintaining the one-click upgrade experience Nutanix customers expect.

Pure Storage FlashArray: Pure manages the FlashArray independently, including firmware and software updates. You cannot use Prism Central to manage FlashArray updates. Pure's own tools and support handle that side of the house.

This split makes sense given the architecture, but it does mean you're coordinating maintenance windows and change management across two systems. Not a dealbreaker, but something to factor into your operational planning.

Disaster Recovery: What's Supported (and What's Not)

DR is where you need to pay close attention to the current limitations of this integration. The good news: Nutanix Disaster Recovery works. The caveat: not all of it.

What's Supported

  • Prism Central-based DR with asynchronous replication: This is the path forward. You can protect VMs and replicate them to a recovery site using the familiar Nutanix DR workflows.
  • DR between NCI compute clusters and traditional HCI clusters: You can replicate from a Pure-backed compute cluster to a standard Nutanix HCI cluster (and vice versa), as long as both are running AOS 7.5 or later. This gives you flexibility in how you architect primary and recovery sites.
  • Third-party backup integration: HYCU, Commvault, Veeam, Rubrik, and Cohesity are all supported for backup and restore operations.

What's Not Supported

  • Protection Domains from Prism Element: The legacy PE-based DR approach doesn't work here. You'll need Prism Central.
  • NearSync and Synchronous replication: If you need RPOs measured in seconds rather than minutes, this integration isn't there yet.
  • AHV Metro Availability: No stretch clustering across sites.
  • Cross-hypervisor DR (CHDR): You can't replicate between ESXi and AHV environments with this setup.
  • Multicloud Snapshot Technology (MST): No replicating to cloud object stores or restoring snapshots from them.
  • Instant restore from third-party backups: Traditional restore works; instant mount does not.
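
If you're mapping an existing DR design onto this architecture, a small pre-flight sketch like the one below can flag requirements that fall outside the supported list. The feature keys are my shorthand, not Nutanix product identifiers.

```python
# Shorthand flags for what the integration currently supports (per the lists above).
SUPPORTED = {
    "pc_async_replication": True,          # Prism Central-based async DR
    "replicate_to_traditional_hci": True,  # both sides on AOS 7.5 or later
    "third_party_backup": True,            # HYCU, Commvault, Veeam, Rubrik, Cohesity
    "pe_protection_domains": False,
    "nearsync": False,
    "sync_replication": False,
    "metro_availability": False,
    "cross_hypervisor_dr": False,
    "multicloud_snapshot_technology": False,
    "instant_restore_from_backup": False,
}

def dr_gaps(required_features):
    """Return the required DR features this integration does not (yet) cover."""
    return [f for f in required_features if not SUPPORTED.get(f, False)]

# Example: a design that assumes NearSync and metro stretch clustering.
print(dr_gaps(["pc_async_replication", "nearsync", "metro_availability"]))
# -> ['nearsync', 'metro_availability']
```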

On the Pure Storage Side

FlashArray brings its own data protection capabilities to the table. ActiveDR for asynchronous replication is available today, with ActiveCluster (synchronous replication with automatic failover) on the roadmap. SafeMode provides immutable snapshots for ransomware protection, and the per-vDisk volume architecture means you can apply replication policies with precision.

The Bottom Line on DR

If you're planning to use this integration for production workloads, make sure your DR requirements align with what's currently supported. Async replication through Prism Central covers many use cases, but if you need sub-minute RPOs or metro availability, you'll need to wait for future releases or architect around those limitations.

Additional Limitations

Beyond the DR constraints, there are several other limitations worth knowing before you commit to this architecture:

  • No cross-container live migration: You can't live migrate VMs between storage containers.
  • No on-demand cross-cluster live migration (OD-CCLM): Live migration between clusters isn't supported.
  • No Repair Host Boot Disk functionality: The Prism Element option to repair host boot disks (proactive or graceful replacement) isn't available.
  • No NCP layered products: NDB (Nutanix Database Service), NUS (Nutanix Unified Storage), NDK (Nutanix Data Services for Kubernetes), and NKP (Nutanix Kubernetes Platform) are not supported on this architecture.
  • No RDMA or iSER: These protocols aren't supported for external storage service segmentation.
  • No volume overwrites using Pure Storage volume copies: You can't overwrite volumes using FlashArray volume copy operations.

These limitations reflect the current maturity of the integration. I would fully expect some of these to be addressed in future releases as Nutanix and Pure continue to develop the joint solution, and it will be exciting to see what else comes from this partnership.

Why This Matters

Performance density: FlashArray's disaggregated NVMe architecture delivers sub-millisecond latency across all workloads. DirectFlash Modules scale up to 150 TB per drive, with 300 TB modules on the roadmap. For data-intensive workloads, including AI deployments, this kind of performance density is hard to match.

Six-nines availability: FlashArray delivers 99.9999% uptime, even during non-disruptive, in-place upgrades. Combined with Pure's Evergreen model (no re-licensing or equipment replacement during upgrades), this addresses one of the biggest pain points in enterprise storage.

Built-in resilience: The integration leverages Nutanix Flow for micro-segmentation and disaster recovery orchestration alongside FlashArray capabilities like data-at-rest encryption and SafeMode ransomware protection. The per-vDisk volume architecture means snapshots and replication can be applied with surgical precision. You could protect a database's data disk differently than its log disk, for example.
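
As a concrete illustration of that last point, here's a sketch of per-disk protection intent for a database VM. The schedule values are arbitrary and the structure is mine, not a Nutanix or Pure policy format.

```python
# Because each vDisk is its own FlashArray volume, protection can differ per disk.
db_vm_protection = {
    "sql01-os":   {"snapshot_every_minutes": 1440, "retain": 7},   # daily is enough for the OS disk
    "sql01-data": {"snapshot_every_minutes": 60,   "retain": 48},  # tighter schedule for data
    "sql01-log":  {"snapshot_every_minutes": 15,   "retain": 96},  # tightest for transaction logs
}

for volume, policy in db_vm_protection.items():
    print(f"{volume}: snapshot every {policy['snapshot_every_minutes']} min, "
          f"keep {policy['retain']}")
```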

Independent scaling: Because compute and storage are cleanly separated, you can scale each tier independently based on your workload requirements. Need more compute? Add AHV nodes. Need more storage capacity or performance? Scale FlashArray. No more buying storage you don't need just to get more CPU.

The Ecosystem

This isn't just Nutanix and Pure going it alone. FlashArray is supported by server hardware partners including Cisco, Dell, HPE, Lenovo, and Supermicro. That said, the Nutanix integration currently validates only HPE and Cisco platforms. Cisco's FlashStack integration brings validated designs, a joint support model, and deep integration with Cisco Intersight for unified visibility across both Pure Storage and Nutanix clusters.

The solution can scale to 10 storage arrays and 20 controllers, providing plenty of headroom for growth.

Who Should Care

If you're running business-critical, data-intensive workloads and need predictable performance with enterprise-grade resilience, this integration deserves attention. Healthcare organizations running Epic ODB systems are an obvious fit, but the use cases extend to any environment where latency and availability are non-negotiable.

For organizations evaluating VMware alternatives, this provides another option in the Nutanix ecosystem. You get the familiar Nutanix operational model with the flexibility to choose your storage architecture based on workload requirements.

The Bottom Line

The Nutanix and Pure Storage partnership represents a shift in how we think about hyperconverged and disaggregated infrastructure. Rather than treating them as competing philosophies, this integration acknowledges that different workloads have different requirements.

Sometimes you want the simplicity of HCI with local storage. Sometimes you need the performance and scalability of enterprise flash arrays. Now you can have both under a single management plane.

That's a meaningful choice customers didn't have before.


For more details, see the official announcements from Nutanix and Pure Storage.


Have thoughts on the Nutanix and Pure Storage integration? I'd love to hear from you. Connect with me on LinkedIn or drop me a note at mike@mikedent.io.