Fusion-io touts cheap-as-chips flash to Apple, Facebook and chums

Want a retail discount? You'll need to buy in bulk

Fusion-io ioScale 3.2TB card

Fusion-io is pushing the idea of all-flash servers for HPC and other large-scale data centre applications that need 100 or more commodity servers, and has brought its 1,000+ server hyperscale flash card technology downmarket.

These ioScale PCIe cards come in 410GB, 825GB, 1.65TB and 3.2TB capacities, roughly doubling at each step, and use a much-simplified version of Fusion's workstation ioFX card technology. But you'll need to buy at least 100 units at a time. At that minimum quantity, the largest 3.2TB card will cost you $3.89 per gigabyte, with discounts for larger volumes, and once the 2Ynm NAND versions are out this price could halve.
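As a rough sanity check on that list rate, the per-card prices work out as below. This is a sketch only: it assumes decimal units (1TB = 1,000GB) and applies the $3.89/GB figure, quoted for the 3.2TB card, flatly across the range; actual negotiated volume pricing isn't public.

```python
# Back-of-envelope card prices at the quoted $3.89/GB list rate.
# Assumes decimal units (1TB = 1,000GB) and no volume discount;
# extending the rate to the smaller cards is our assumption.
PRICE_PER_GB = 3.89

for capacity_gb in (410, 825, 1650, 3200):
    price = capacity_gb * PRICE_PER_GB
    print(f"{capacity_gb}GB card: ~${price:,.0f}")
```

At that rate the 3.2TB card lands around $12,450 before any discount, which is why the 100-unit minimum order matters.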

The cards have a single controller and use commercial grade 2Xnm (29-25nm) 2-bit multi-level cell (MLC) flash, not enterprise grade (eMLC). The controller includes FlashBack protection and self-healing functions to cope with problems and avoid a service engineer's attention.

Fusion-io 1.65TB ioScale card

1.65TB half-height, half-length ioScale card

The base performance numbers are in the table below.

ioScale performance

ioScale performance numbers

What stands out is that the random write performance is much better than the random read performance, a reversal of the usual pattern. A lot of writing is going on in the applications envisaged for this mighty little sucker.

Hyperscale idea

Fusion-io chairman, CEO and co-founder David Flynn expounded on the hyperscale idea, saying that OEM partners like Cisco, Dell, HP and NetApp, and tier 2 OEMs like SuperMicro, have been selling Fusion-io flash cards into the enterprise market: "We have seen these OEM products bleed over into the workstation market and introduced the ioFX."

Simultaneously, Fusion has been working the hyperscale web customer market with operators of humungous data centres like those of Facebook and Apple. For these guys, with thousands of standardised servers, the service unit is the server; if it fails, it's just switched off. If a rack of them fails, the rack gets switched off. They want as much simplicity, robustness and reliable performance as possible, and want to stop service engineers from poking about in their data centres mending server components. The refresh cycles are two to three years, so the servers don't have to last the five years or so you'd need in an enterprise data centre.

The customers here want Fusion-io (FIO) kit but don't want to wait for OEM certification cycles, so FIO has been supplying them directly.

Fusion sees this kind of computing spreading downmarket, from the 1,000+ unit hyperscale to the 100+ unit web scale, and has built the ioScale for it, basing it on simplified ioFX technology. The OEM supply model can still be used, but with the OEMs reselling the ioScale rather than delaying matters with their usual certification cycle; it's a different go-to-market model.

Fusion-io ioScale 3.2TB card

Fusion-io 3.2TB full-height, half-length ioScale card

Back in the hyperscale market, Flynn sees it moving to all-flash servers and ejecting spinning rust, because disks are far too slow and unreliable. Get rid of disk and data centres don't need such tight humidity and temperature limits, so overall costs go down, even though a TB of flash costs more than a TB of disk.

Flynn said: "With ioScale we'll make it easier for other folks to move to hyperscale-type computing and go all-flash."


What about the competition? Flynn, talking only about SSDs, says that to match a 3.2TB ioScale card and its performance you'd need eight SSDs and a RAID controller, meaning nine potential failure points versus the ioScale's single one. He says that in this hyperscale and quasi-hyperscale computing: "Customers don't want to service anything... the server is the service unit." One customer found that the RAID controller in such configurations failed more often than the ioScale card.
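Flynn's nine-versus-one argument can be put in numbers. A minimal sketch, assuming component failures are independent and using an illustrative annual failure rate of our own choosing (the 2% figure is made up, not a Fusion-io number):

```python
# Sketch of the failure-point argument: eight SSDs plus one RAID
# controller give nine independent things that can break, versus a
# single ioScale card. The 2% annual failure rate is an illustrative
# assumption, not a vendor figure.
def prob_any_failure(rates):
    """P(at least one component fails in a year), assuming independence."""
    p_all_survive = 1.0
    for r in rates:
        p_all_survive *= (1.0 - r)
    return 1.0 - p_all_survive

ssd_array = [0.02] * 8 + [0.02]   # eight SSDs + one RAID controller
single_card = [0.02]              # one ioScale card

print(round(prob_any_failure(ssd_array), 3))    # 0.166 - nine chances to fail
print(round(prob_any_failure(single_card), 3))  # 0.02  - one chance to fail
```

With nine components, the chance of some part of the storage subsystem failing in a year is roughly eight times that of the single card, whatever the underlying per-component rate.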

Also, individual SSDs periodically slow down while doing internal garbage collection and allied processes. With eight of them, such latency blips can be common, whereas with the ioScale - you guessed it - latency is consistent.

Flynn hammers on this SSD nail again and again: the ioScale has 10 times the endurance of SSDs (though no numbers are supplied), four times the performance, one ninth of the failure susceptibility and consistent latency. SSDs? Pah!

We'll have to see how other PCIe flash card vendors, like LSI, Micron, OCZ, STEC and many others, respond to ioScale. They'll probably bring out bulked-up, high-capacity cards of their own.

NAND process shrink effects

Flynn said that at some undetermined point in the future Fusion-io will start using 2Ynm flash - 24-20nm NAND - which will make denser and cheaper flash storage possible. He's anticipating a doubling of the ioScale card's capacity, up to 6.4TB. Have four such cards, two per server, in 1.5U of rack space and you'll have a 25.6TB flash store.
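The 2Ynm arithmetic is easy to check. A back-of-envelope projection from the article's claims - per-card capacity doubling from 3.2TB and cost per gigabyte roughly halving from the $3.89/GB launch price - not announced specs:

```python
# Projected 2Ynm-generation numbers, extrapolated from the claims
# above; these are our extrapolations, not announced specs.
next_gen_card_tb = 3.2 * 2         # capacity doubles to 6.4TB per card
cards = 4                          # two per server, two servers in 1.5U

total_tb = next_gen_card_tb * cards
print(total_tb)                    # 25.6TB of flash in 1.5U of rack

projected_dollars_per_gb = 3.89 / 2
print(projected_dollars_per_gb)    # roughly $1.95/GB
```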

He anticipates a 6.4TB ioScale card using 2Ynm process flash would have half the per-terabyte cost of the just-announced ioScale, which comes in at $3.89/GB, with volume sales getting discounted pricing. And then there's TLC flash waiting in the wings from 2014 onwards. It's just going to get better - or much, much worse if you're a performance disk manufacturer. ®
