
Flash array start-up SolidFire begins the hard sell

'Virtualising performance'... with IOPS to spare


Flash array startup SolidFire says it virtualises performance. What does it mean?

At a press briefing in Arista's offices (yes, Arista), we learnt that SolidFire offers an all-flash array with in-line deduplication, compression and thin provisioning to effectively increase the capacity and lower the cost/GB of its product versus traditional SAN arrays.

Marketing veep Jay Prassl says SolidFire offers SAN block access at cloud scale, and guarantees storage IOPS performance to thousands of volumes in one infrastructure. You can dial performance up or down separately from capacity, and specify three IOPS numbers per application: minimum, maximum and burst mode.

An IOPS here is any read or write access to the array. The maximum IOPS number is the rate the customer is normally delivered – say 500. The minimum – say 100 – is the floor the rate can drop to if the array is very busy and getting overloaded. For time spent running below the maximum, the customer banks credits which can be "spent" on burst IOPS: a level above the so-called maximum, which isn't literally a maximum at all. A better term would be standard level.
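The credit-and-burst scheme described above resembles a token bucket. Here is a minimal sketch of that idea – the class and method names are our own invention, not SolidFire's API – assuming credits accrue one-for-one when demand sits below the standard level and are spent to exceed it, up to the burst cap:

```python
# Hypothetical sketch of SolidFire-style per-volume QoS (names invented
# for illustration): a volume has minimum, maximum (standard) and burst
# IOPS settings. Time spent below the maximum banks credits that can be
# spent later to burst above it.
class VolumeQoS:
    def __init__(self, min_iops, max_iops, burst_iops):
        self.min_iops = min_iops      # floor when the array is overloaded
        self.max_iops = max_iops      # standard delivered level
        self.burst_iops = burst_iops  # ceiling while credits last
        self.credits = 0              # unused IOPS banked for bursting

    def allowed_iops(self, demanded_iops):
        """Return the IOPS granted for one interval of demand."""
        if demanded_iops <= self.max_iops:
            # Under the standard level: bank the unused headroom.
            self.credits += self.max_iops - demanded_iops
            return demanded_iops
        # Over the standard level: spend credits, capped at burst.
        extra = min(demanded_iops, self.burst_iops) - self.max_iops
        spend = min(extra, self.credits)
        self.credits -= spend
        return self.max_iops + spend
```

With the article's example numbers (min 100, max 500), a volume demanding 300 IOPS banks 200 credits, which it can then spend to run at 700 IOPS for an interval before falling back to 500.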

These performance SLAs (Service Level Agreements) are based on SolidFire using Intel SSDs. A look at these reveals an apparent performance gap, with IOPS left on the table.

A SolidFire node has ten 300GB SSDs. A quick look at reference material suggests this is an Intel 320, which Prassl confirms. An Intel 320 can do 23,000 write IOPS and 39,500 read IOPS, so ten of them should manage between 230,000 and 395,000 IOPS. Yet a SolidFire node delivers only 50,000 IOPS. It's as if 180,000 to 345,000 IOPS have gone missing. Why is that?

CEO Dave Wright said: "We're running mixed read/write workloads [and] our replication means the write IOPS are doubled. We set aside an IOPS allowance for rebuilds in background plus other internal stuff. So the 50,000 IOPS is the delivered IOPS to customers." In other words, the SolidFire SSDs are running faster than the 50,000 IOPS made available for customers' I/O, and this is a conservative number anyway, meaning SolidFire has performance headroom, which is reassuring.
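Wright's explanation can be roughed out numerically. This is our own back-of-the-envelope arithmetic, not SolidFire's published accounting, assuming only the figures given above and that replication doubles every customer write:

```python
# Back-of-the-envelope check of the "missing" IOPS, using the article's
# figures: ten Intel 320 SSDs per node, 23,000 write / 39,500 read IOPS
# each, and replication doubling every customer write.
ssds = 10
write_iops_per_ssd = 23_000
read_iops_per_ssd = 39_500

raw_write = ssds * write_iops_per_ssd   # 230,000 raw write IOPS
raw_read = ssds * read_iops_per_ssd     # 395,000 raw read IOPS

# Each customer write costs two physical writes, so the deliverable
# write rate is at most half the raw write rate.
deliverable_writes = raw_write // 2     # 115,000

# What's left after the 50,000 IOPS customer allowance is headroom for
# rebuilds and "other internal stuff".
write_headroom = deliverable_writes - 50_000   # 65,000
```

Even on the pessimistic all-write case, halving for replication still leaves well over 50,000 IOPS, which squares with Wright calling it a conservative number.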

SolidFire offers guaranteed IOPS levels, something that can be readily metered and billed by cloud service providers, and so the array is instrumented like a car with a speedometer; pretty nifty.

It seems to us that there is an opportunity for SolidFire to use faster single-level cell flash if it wanted to push the performance envelope higher. But it gets cost advantages from using multi-level cell NAND. It tries to sequentialise writes and so reduce write amplification and extend the flash's working life. Prassl confirmed that SolidFire engineers are looking with interest at 3-bit multi-level cell flash which, if the working life were satisfactory, would enable them to raise capacity and/or lower cost.

He pointed out that SolidFire deduplication is global, working across all volumes, whereas "NetApp ASIS only dedupes on a per-volume basis in an array and not across volumes in an array." Alex McDonald from NetApp's office of the CTO confirmed this but said NetApp can have many, many LUNS in a volume.
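The global-versus-per-volume distinction is easy to illustrate. The sketch below is not SolidFire's (or NetApp's) implementation – just a toy model assuming blocks are keyed by content hash in a single store shared across all volumes, so identical blocks written to different volumes are stored once:

```python
# Toy illustration of global deduplication: one content-addressed store
# shared across all volumes. A per-volume scheme would instead keep a
# separate hash table per volume, missing duplicates across volumes.
import hashlib

class GlobalDedupStore:
    def __init__(self):
        self.blocks = {}     # content hash -> block data (stored once)
        self.refcount = {}   # content hash -> number of references

    def write(self, volume, lba, data):
        """Store a block; duplicates across any volume share one copy."""
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:
            self.blocks[key] = data
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return key  # the volume's LBA map would point at this key
```

Writing the same block to two different volumes leaves a single stored copy with a reference count of two, which is the space saving a per-volume scheme forgoes.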

Prassl wouldn't supply cost/GB numbers for SolidFire but said its prices would be the same as or less than traditional SAN arrays from mainstream SAN vendors like EMC.

He said remote replication was likely coming in Q3 of 2012. General availability of SolidFire is scheduled for the second 2012 quarter, with the product currently being tested in an early access program. There is some 500TB of capacity under evaluation in this program.

SolidFire is focused pretty exclusively on cloud service providers and has good, capable software for them. TMS, Violin and others could perhaps blow it out of the water in performance-per-node terms, but they lack cloud service provider-focused software, and that, Prassl said, is crucial for SolidFire's customers.

It seems to El Reg that SolidFire could possibly store bulk (nearline-ish) data on disk and tier it to flash. If the write levels get too high then SSD quality could be upgraded to enterprise-grade MLC. But this is a start-up close to product GA, and it is focusing like a laser on its markets and getting reliable and robust product out. Extending its capabilities is for the future. We speculate by the way, that SolidFire and Arista will co-operate to offer an Arista low-latency switch and SolidFire array bundle to cloud service providers. ®
