NextIO teams with Fusion-io for 5TB flash SAN
1.7 million IOPS box on the way too
NextIO has inked a deal with Fusion-io to produce a 5TB flash SAN to share flash storage between connected servers.
NextIO started out as a producer of switches that virtualise PCIe connections, giving a bunch of servers access to Fibre Channel, Ethernet, InfiniBand and so on through a single switched PCIe box. It then produced a shared flash array using a 3U chassis and Texas Memory Systems single-level cell (SLC) RamSan-10 and RamSan-20 PCIe-connected flash cards. This was last November.
Now it's done the same with Fusion-io to create a 5TB box on the one hand and a 1.7 million IOPS box on the other. NextIO has announced a vSTOR S100 series of products. These come in two varieties, an E class and an S class - a bit like Mercedes cars - each with five models.
The E-class models are the E09, E18, E40, E45 and E50, with the numbers indicating capacity in hundreds of gigabytes, such that 09 equals 900GB and 18 means 1.8TB, and so on through 4TB, 4.5TB and 5TB. The S-class models are the S13, S26, S58, S64 and S70, with the S13 offering 1.3TB and the S70 topping out at 7TB.
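The naming scheme appears simple enough to decode mechanically - a minimal sketch, assuming (as NextIO doesn't spell out) that the digits are capacity in hundreds of gigabytes:

```python
# Hypothetical decoder for vSTOR model names; the hundreds-of-GB mapping
# is our inference from the announced line-up, not NextIO's published scheme.

def model_capacity_gb(model: str) -> int:
    """Decode a model name like 'E09' or 'S70' to capacity in gigabytes."""
    return int(model[1:]) * 100

for name in ["E09", "E18", "E40", "E45", "E50", "S13", "S26", "S58", "S64", "S70"]:
    print(f"{name}: {model_capacity_gb(name) / 1000:.1f}TB")
```

Running it against the ten announced models reproduces the capacities quoted above, from 900GB up to 7TB.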
The company doesn't say, but it looks as if the S100-E50 is the 5TB Fusion-io product while the S100-E40 - meaning 4TB maximum capacity - is the 1.7 million IOPS speedster.
It's a simple assumption that the E-class models use Fusion-io PCIe flash cards while the S-class models use Texas Memory Systems PCIe flash. NextIO doesn't say, and the two suppliers' flash could appear in both classes of products, possibly mixed and matched in one product.
The product will compete with the PCIe-connected version of Violin Memory's 1010 Memory Appliance. Looking at the sequential read and write rates, something odd happens as the capacity of a class increases: first the rate goes up, but then it goes down.
The E-class models are as follows, with the sequential read rate numbers in brackets: E09 (2.2GB/sec), E18 (4.4GB/sec), E40 (5.5GB/sec), E45 (4.4GB/sec) and E50 (3.3GB/sec). Our guess is that the higher-capacity models, with slower read rates than lower-capacity models, use two-bit multi-level cell (MLC) flash whereas the lower-capacity models use the faster single-level cell (SLC) flash.
This logic would explain why the number of supported hosts exhibits the same pattern: E09 (1-2), E18 (1-4), E40 (1-5), E45 (1-4) and E50 (1-3). The slower MLC models simply can't support as many attached servers.
The same pattern is seen with the S class models.
The flash cards in the vSTOR can be dynamically reallocated between servers by NextIO's vConnect PCIe switching technology. NextIO says its approach of using PCIe-format flash cards prevents flash card supplier lock-in. It also says that its flash SAN (our term; NextIO is not using it) can prevent over-provisioning of servers and over-provisioning of smaller flash tier 0 storage in storage arrays.
Fusion-io used to have ioSAN technology, based on a vanilla server fitted with a set of ioDrive or ioDuo cards and connected over Ethernet or some other link to a group of servers. With NextIO now OEM'ing Fusion-io's cards to produce a near-enough equivalent product, the ioSAN seems dead in the water.
We can imagine that future versions of the product will double capacity and perhaps go further, with three-bit MLC chips decreasing the cost per terabyte quite substantially.
NextIO reckons ideal application areas for the vSTOR products are databases, Microsoft Exchange, high-performance computing, game hosting and video on demand. The only pricing info we have is that the entry-level vSTOR with TMS flash starts at $19,500. This might be the E09, whose 900GB of capacity is a perfect match for two 450GB RamSan-20 cards.
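If that pairing is right - and NextIO doesn't confirm it - the entry price works out as a back-of-envelope cost per gigabyte:

```python
# Rough cost arithmetic, assuming the $19,500 entry price really is the
# 900GB E09 model (our guess, not a confirmed pairing).
price_usd = 19_500
capacity_gb = 900  # two 450GB RamSan-20 cards

usd_per_gb = price_usd / capacity_gb
print(f"${usd_per_gb:.2f} per GB")  # roughly $21.67/GB
```

That figure would presumably fall as the mooted three-bit MLC variants arrive.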
NextIO wasn't available to answer questions when we prepared this story. ®