Pillar embraces Intel SSDs
Chipzilla's June debut
Marking Intel's first appearance in the enterprise storage array solid state disk (SSD) market, Pillar Data is launching SSD enclosures for its Axiom arrays in June.
Axiom is Pillar's array line with different levels of storage service available from an application-aware quality of service (QoS) system. The arrays use storage bricks as drive enclosures. June will see the arrival of an SSD brick with 12 64GB single-level cell Intel X25-E SATA-interface SSDs, providing 768GB of capacity, accessed through dual RAID controllers. Axiom 600, 600MC and 500 arrays can have up to four of these SSD bricks per controller, for a total of 3.072TB.
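The quoted capacities are simple arithmetic, and check out (a quick sanity check, using the decimal gigabytes vendors quote):

```python
# Back-of-envelope check of Pillar's quoted SSD brick capacities
# (decimal units, as storage vendors typically quote them).
DRIVES_PER_BRICK = 12
DRIVE_CAPACITY_GB = 64            # Intel X25-E, single-level cell
MAX_BRICKS_PER_CONTROLLER = 4

brick_gb = DRIVES_PER_BRICK * DRIVE_CAPACITY_GB          # per-brick capacity
total_tb = MAX_BRICKS_PER_CONTROLLER * brick_gb / 1000   # per-controller max

print(brick_gb, total_tb)  # 768 3.072
```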
Previously STEC SSDs have been used in storage arrays - from EMC and HDS, for example. Pillar CEO Mike Workman says STEC's SSD technology "is good stuff (but) it's expensive and Pillar wants to maximise cost-effectiveness per dollar." He says the advantage of SSDs lies not in sheer bandwidth but in microsecond-class latency. The millisecond latency and bandwidth of SATA spindles are fine, and more cost-effective, for video streaming; you would reserve SSDs for applications needing blistering latency.
There are five bands in the Pillar QoS system, with SSDs occupying the premium band and levels of disk technology in the high, medium, low and archive/WORM bands. The QoS guarantees high priority filesystems and LUNs get SSD IO service from the shared Axiom storage pool. It also ensures that low priority I/O requests cannot "steal" SSD resources from the system. None of this required any change to the Axiom array's architecture and application-aware storage templates will be able to accommodate the SSD bricks.
Read IOPS per SSD brick are 16 times those of a SATA brick and 5 times those of a Fibre Channel drive brick. Write IOPS are 12 times those of a SATA brick and 4 times those of an FC brick.
According to Pillar, small blocks and random read applications/operations benefit most from an SSD power infusion, with examples being:
- Internet Retail: “List all of the DVDs starring Al Pacino”
- Search Engine: “Show me all references to Mike Workman’s blog”
- Business Intelligence: “List all customers in 94070 zip code”
- Database indexing operations.
Workman says he agrees in theory with people who say you can solve most latency and IOPS limitations by throwing disk spindles at them and striping across all spindles - as 3PAR does, for example. But, he argues, this is a preposterous idea in many instances. Data centres often don't have the space required for hundreds of disk spindles, and it makes more sense to add an SSD performance tier to an array, providing the low latency needed by specific applications far more space-efficiently.
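Pillar's own multipliers make the space argument easy to quantify. A rough sketch, assuming (hypothetically) 12 drives per mechanical brick as well and linear IOPS scaling when striping across bricks:

```python
# Back-of-envelope on the space argument, using the article's read-IOPS
# multipliers (SSD brick = 16x a SATA brick, 5x an FC brick). Assumes,
# hypothetically, 12 drives per mechanical brick and linear IOPS
# scaling across striped bricks -- an optimistic assumption.
SSD_VS_SATA, SSD_VS_FC = 16, 5
DRIVES_PER_BRICK = 12  # assumed for the mechanical bricks too

sata_spindles = SSD_VS_SATA * DRIVES_PER_BRICK  # spindles to match one SSD brick
fc_spindles = SSD_VS_FC * DRIVES_PER_BRICK

print(sata_spindles, fc_spindles)  # 192 60
```

On those assumptions, matching a single 12-drive SSD brick's read IOPS takes on the order of 192 SATA or 60 FC spindles - which is the space point in a nutshell.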
There is a power argument in favour of SSDs too. Comparing striping across a hundred drives versus a single shelf of SSDs, Workman said: "I think our solution is about 20 times more power efficient than using disk."
According to Pillar's world-wide marketing VP, Bob Maness, the total cost of ownership of an SSD brick can be less than that of four Fibre Channel disk bricks over time.
Pillar thinks it can drive down the dollars per IOP of SSDs to half that of traditional Fibre Channel drives. It also cites an 85 per cent reduction in power and cooling costs versus Fibre Channel drives and reckons that an SSD has a 42 per cent better mean time between failures (MTBF) rating than a 450GB Fibre Channel drive.
Workman said that SSD bricks were the first iteration of SSD technology use in Pillar and that level 2 cache will probably follow, the idea being that you apply SSD technology wherever it best solves cost per IOPS problems.
Beta testing of Pillar's SSD bricks will run from April through May with general availability in June. Pricing will be announced then. ®
Indeed the STEC will greatly outperform the Intel, but there are a couple of important points. Firstly, there is the not insignificant one of cost - it's debatable just how many real workloads actually require the number of IOPS that the STEC can handle. However, there's also an important technical point. There's a big difference between putting an SSD into a server as a stand-alone drive and putting it into a storage array. In the second instance, the storage array is very likely to be incapable of supporting the total number of IOPS that several STECs can deliver (you simply run out of the array's internal capacity). That would make all those theoretical IOPS unusable.
The second point is that an array with appropriate software can mask some of the limitations of lower-cost SSDs. The most obvious one is "roll-up write optimisation", where the array stores up multiple writes in NVRAM for staging out. Doing this properly is complex, but achievable.
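A toy sketch of the idea - assuming nothing about any particular vendor's implementation - is that writes to the same logical block get coalesced in a staging buffer (standing in for battery-backed NVRAM), so only the last version of each block ever reaches flash:

```python
# Toy illustration of "roll-up write optimisation": multiple writes to
# the same logical block are coalesced in a staging buffer (playing the
# role of battery-backed NVRAM), so only one physical write per distinct
# dirty block reaches the SSD at flush time. A sketch of the idea only;
# a real array must also handle ordering, crash consistency and cache
# size limits.
class WriteCoalescer:
    def __init__(self):
        self.staging = {}        # block number -> latest staged data
        self.flash_writes = 0    # physical writes actually issued

    def write(self, block, data):
        # A later write to the same block simply overwrites the staged copy.
        self.staging[block] = data

    def flush(self):
        # Destage: one physical write per distinct dirty block.
        for block, _data in sorted(self.staging.items()):
            self.flash_writes += 1   # stand-in for the real SSD write
        self.staging.clear()

cache = WriteCoalescer()
for i in range(100):
    cache.write(block=7, data=f"version {i}")  # 100 logical writes, one block
cache.write(block=8, data="other block")
cache.flush()
print(cache.flash_writes)  # 2 physical writes for 101 logical ones
```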
So choosing Intel over STEC within an array could well make sense on price/performance grounds. Put the higher-performance SSD into an array and you will not see anything like its full benefit anyway; put a STEC drive into a server and you are faced with the limitation of limited connectivity.
Latency and random IO is the key
He's right: it's the random IO rate and latency that are the key benefits of SSDs. Bandwidth is easy - you can always buy more width. IO rates you can increase by buying more mechs. But no amount of purchasing mechanical disk drives is ever going to affect the latency of data access, apart from massively over-provisioning storage so that you only make tiny head movements. Yet even if you reach the point of having so many disk drives that you use only one track on each of them, you still have a 2ms average latency on a 15K rpm disk. Using twin-head towers you could get down to 1ms.
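The 2ms floor follows directly from the rotation speed (a quick check - average rotational latency is half a revolution):

```python
# Average rotational latency of a spinning disk is half a revolution:
# at 15,000 rpm one revolution takes 60/15000 s = 4 ms, so the average
# wait is ~2 ms -- a floor no amount of striping across spindles removes.
def avg_rotational_latency_ms(rpm):
    rev_time_ms = 60_000 / rpm   # one full revolution, in milliseconds
    return rev_time_ms / 2       # on average the head waits half a turn

print(avg_rotational_latency_ms(15_000))  # 2.0
print(avg_rotational_latency_ms(7_200))   # ~4.17 (a typical SATA spindle)
```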
With SSDs you are in a totally different ball park.
Smaller and smaller steps...
Greater and greater leaps...
And well-packed. Now we're getting somewhere...
(Paris, cos she's well-packed and likes embracing things, although she'd rather be coming somewhere...)