Original URL: https://www.theregister.com/2012/05/30/nutanix/

New compute-'n'-storage cluster-box like 'iPhone for the data centre'

Nutanix promises SAN-free storage

By Chris Mellor

Posted in Storage, 30th May 2012 16:59 GMT

Nutanix's Complete Cluster product collapses separate compute and storage into a single hybrid flash-and-disk box that can scale out to a cluster of 1,000-plus nodes. Nutanix says it is SAN storage without the SAN and NFS without the "N". We say it's a compute+storage cluster-box aimed squarely at the heart of every converged systems vendor.

Let's step down from Planet Hype for a moment and say the box is pretty much as El Reg reported back in October last year. Is this a server or a storage array? It's both: a set of servers with directly attached storage that is virtualised into a sharable pool. Our interest here is in the storage side.

The enclosure, or block, is a 2U unit containing four Nutanix nodes, with three nodes being the minimum starting point. From there you add nodes singly, in pairs, or however many more you need, buying new enclosures as you require them. Each node has two X86 sockets populated with six-core Xeon 5650 processors, meaning 12 cores per node and 48 per block. A block holds up to 768GB of RAM, 1.3TB of Fusion-io ioDrive PCIe flash, and a mid-layer of up to 1.2TB of Intel SSD storage.

We understand that the four nodes share a bottom tier of 20TB of SATA disk drives, with data automatically tiered inside the node. Hot data is stored in the Fusion-io cards and automatically replicated across the cluster into other nodes' Fusion-io cards. Networked storage arrays can sit behind the nodes. There are up to four 10GbitE ports and eight 1GbitE ones.
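To make the tiering idea concrete, here's a minimal Python sketch of a watermark-style placement policy. The thresholds, names and hit-count heat metric are our own illustration, not Nutanix's actual code:

```python
# A sketch of watermark-style tier placement, assuming a simple
# hit-count heat metric. Thresholds and names are illustrative,
# not Nutanix's actual implementation.
from dataclasses import dataclass, field
import time

HOT_HITS, WARM_HITS = 10, 3   # illustrative promotion thresholds

@dataclass
class Extent:
    extent_id: int
    hits: int = 0
    last_access: float = field(default_factory=time.time)

def choose_tier(extent: Extent) -> str:
    """Place an extent by recent access count: PCIe flash for hot
    data, SSD for warm, SATA for cold."""
    if extent.hits >= HOT_HITS:
        return "pcie_flash"
    if extent.hits >= WARM_HITS:
        return "ssd"
    return "sata"

def flash_replicas(extent: Extent, peer_nodes: list[str]) -> list[str]:
    """Hot data is also copied into another node's flash, so a node
    failure never holds the only copy."""
    if choose_tier(extent) == "pcie_flash" and peer_nodes:
        return peer_nodes[:1]   # one remote flash replica
    return []
```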

The processors run ESXi, and storage control is a system-level application running in a virtual machine alongside the end-user apps in their own VMs. In other words, each node hosts a virtual storage controller.

At a briefing, CEO Dheeraj Pandey said the virtual storage controller "uses PCIe pass-through direct from the Fusion-io card into the VM, and avoids going through the ESX kernel." It is not what Fusion-io calls cut-through, as: "we use POSIX APIs and not the cut-through APIs from Fusion-io."
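For illustration only, here's what that distinction amounts to: with pass-through, the flash card appears as an ordinary device inside the controller VM, reachable with plain POSIX calls. The device path below is an assumption, not Nutanix's code:

```python
# With pass-through, the flash card appears as an ordinary block
# device inside the controller VM, so plain POSIX calls suffice.
# The device path below is an assumption for illustration.
import os

def read_extent(offset: int, length: int) -> bytes:
    """Read from the passed-through flash device with POSIX I/O,
    never touching a vendor cut-through API."""
    fd = os.open("/dev/fioa", os.O_RDONLY)   # hypothetical device node
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)
```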

Nutanix says: "Local storage from all nodes is virtualised into a unified pool by Nutanix Scale-out Converged Storage (SOCS). A VM can write data anywhere in the cluster and is not limited by the storage local to the node where it is running. In effect, SOCS acts like an advanced SAN that uses local SSDs and disks from all the nodes to store its data."

Pandey says: "We basically have a single data store for the whole cluster. It's hugely easier to manage. There is no concept of a LUN or fan-out structures; all storage is in the pool."
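Here's a minimal sketch of that pooling idea: every node donates its local media to one cluster-wide store, and a write can land on any suitable device with no LUN carving. Class and method names are our own; real placement logic would weigh locality and balance:

```python
# A sketch of pooling every node's local media into one cluster-wide
# store with no LUN carving. Class and method names are our own.
import random

class StoragePool:
    def __init__(self):
        self.devices = []   # (node, device, tier) triples

    def add_node(self, node: str, devices: list[tuple[str, str]]) -> None:
        """A joining node donates its local flash and disk to the pool."""
        for dev, tier in devices:
            self.devices.append((node, dev, tier))

    def place_write(self, tier: str) -> tuple[str, str]:
        """No LUNs or fan-out: a write can land on any device of the
        right tier anywhere in the cluster. Random choice keeps the
        sketch short."""
        candidates = [(n, d) for n, d, t in self.devices if t == tier]
        return random.choice(candidates)

pool = StoragePool()
pool.add_node("node-1", [("fio0", "pcie_flash"), ("sata0", "sata")])
pool.add_node("node-2", [("fio0", "pcie_flash"), ("sata0", "sata")])
print(pool.place_write("pcie_flash"))   # may land on either node
```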

SOCS provides storage to the VMs in the form of vDisks – virtual disks – which can span one or several cluster nodes. Each vDisk is mounted locally on a VM and, as a VM is vMotioned from node to node, its vDisk moves with it.
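A minimal sketch of that ownership model, with hypothetical names: the vDisk's data can already span nodes, but its mount point follows the VM on migration:

```python
# A sketch of vDisk ownership following the VM: data may span nodes,
# but the mount point moves on vMotion. Names are hypothetical.
class VDisk:
    def __init__(self, name: str, extent_nodes: set[str]):
        self.name = name
        self.extent_nodes = extent_nodes   # data may already span nodes
        self.mounted_on = None             # node serving the vDisk

    def mount(self, node: str) -> None:
        self.mounted_on = node

def vmotion(vdisk: VDisk, src: str, dst: str) -> None:
    """Migrate the VM: remount its vDisk on the destination. Nothing
    is bulk-copied up front; remote extents stay readable over the
    network and hot ones can be re-tiered to local flash later."""
    assert vdisk.mounted_on == src
    vdisk.mount(dst)

disk = VDisk("vm42-boot", extent_nodes={"node-1", "node-3"})
disk.mount("node-1")
vmotion(disk, "node-1", "node-2")
print(disk.mounted_on)   # node-2
```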

The design is a shared-nothing cluster in which the nodes share no memory or storage and act as independent server/storage entities. Such a cluster can scale almost without limit simply by adding nodes. In a shared-nothing database implementation, the database is partitioned, or sharded, with each shard being stored and processed on a different node.
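A minimal sketch of shared-nothing sharding, using simple hash placement; production systems prefer consistent hashing so a new node moves only a fraction of the keys, but the point – each node owns its shard outright – is the same:

```python
# A sketch of shared-nothing sharding: each key hashes to exactly one
# owning node, so nodes never coordinate over shared data. Simple
# modulo placement keeps the idea visible.
import hashlib

NODES = ["node-1", "node-2", "node-3", "node-4"]

def shard_for(key: str, nodes: list[str]) -> str:
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

print(shard_for("customer:1001", NODES))   # always the same owner
```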

Ordinary clusters, such as Isilon's and 3PAR's, have the nodes sharing storage and memory, with cache coherency traffic and a need for more and more co-ordination as the node count rises. Pandey said EMC's XtremIO technology will only scale to an 8-node cluster.

He said the Complete Cluster product "gives Fusion-io all the enterprise features it's missing." But he also said: "The Intel Ramsdale PCIe SSD is coming. We don't want to be beholden to any one PCIe vendor."

So it seems Nutanix is bypassing host I/O processing but not using Fusion-io software to do so. It may be that incoming PCIe flash card vendors, such as Intel with its huge market heft, could start negating Fusion-io's first-mover advantage.

Pandey added: "We don't tie ourselves to the VMware architecture, exposing a standard NFS and iSCSI target, with a virtual switch in the hypervisor. We'll support other hypervisors, Hyper-V first, [as a] good-enough solution for the mid-market in the next 12 to 18 months."

The cost per VM instance is $466, which cannot be compared directly with per-VM costs from GreenBytes ($12) or Tintri ($23), because the $466 includes compute and data management as well as storage. The data management includes Flex Clones, which "does 1,000:1 compression for VDI images. It's our own deduplication but it's not exposed to the end-user."
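The 1,000:1 claim is less magic than it sounds when the desktops are clones of one golden image. A toy Python sketch of content-addressed blocks, our own illustration rather than the Flex Clones mechanism, shows the arithmetic:

```python
# Why near-identical VDI clones dedupe so well: content-addressed
# blocks mean 1,000 desktops cloned from one golden image share a
# single physical copy. Our own illustration, not Flex Clones itself.
import hashlib

store: dict[str, bytes] = {}   # fingerprint -> physical block

def write_block(data: bytes) -> str:
    fp = hashlib.sha256(data).hexdigest()
    store.setdefault(fp, data)   # identical data is stored once
    return fp

golden = [write_block(b"os-block-%d" % i) for i in range(4)]
clones = [list(golden) for _ in range(1000)]   # 1,000 desktop images

logical = len(clones) * len(golden)   # blocks the VMs think they have
physical = len(store)                 # blocks actually stored
print(f"{logical}:{physical}")        # 4000:4, i.e. 1,000:1
```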

What about compression? "We have it on our roadmap and will deliver it by the end of 2012."

The company is announcing an EMEA channel structure. SDG has been signed as the first distributor and Kelway is the first enterprise VAR. Alan Campbell has been appointed EMEA director, with Rob Tribe as his pre-sales manager. Nutanix says it started shipping product in November last year and has 30 to 35 customers with about 100 blocks in use. One customer has 10 blocks.

What we have here is a converged compute-and-storage cluster appliance, an "iPhone for the data centre" as Pandey put it, with the storage side being a hybrid flash-and-disk architecture presented as a virtual storage and compute appliance.

Where Tintri, Tegile and others were founded to develop hybrid flash-and-disk storage appliances, Nutanix has gone one better by adding compute and clustering to the mix, taking on every server and storage vendor punting their wares into the virtual server and desktop markets. It's ambitious, to say the least, but customers seem to like it, buying block after block.

Is this the future of storage in a virtual server/desktop world: virtualised flash-and-trash storage in a vast shared-nothing cluster design? If it eases virtual server and VDI storage pain better than anything else, it might well be just that. ®