Flash array start-up SolidFire begins the hard sell

'Virtualising performance'... with IOPS to spare

Flash array startup SolidFire says it virtualises performance. What does it mean?

At a press briefing in Arista's offices (yes, Arista), we learnt that SolidFire offers an all-flash array with in-line deduplication, compression and thin provisioning to effectively increase the capacity and lower the cost/GB of its product versus traditional SAN arrays.

Marketing veep Jay Prassl says SolidFire offers SAN block access at cloud scale, and guarantees storage IOPS performance to thousands of volumes in one infrastructure. You can dial performance up or down separately from capacity, and specify three IOPS numbers per application: minimum, maximum and burst mode.

The maximum number is the sustained rate the customer is delivered – say 500 – an IOPS being any read or write access to the array. The minimum – say 100 – is how far the rate can fall if the array is very busy and getting overloaded. Time spent below the maximum builds up credits, which can be "spent" on burst IOPS: a level above the so-called maximum. Since it is not literally a maximum, "standard level" would be a better term.
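The credit mechanism Prassl describes resembles a token-bucket rate limiter. A minimal sketch of such a scheme – our own illustration, with made-up numbers, not SolidFire's actual implementation:

```python
class QosVolume:
    """Toy model of per-volume IOPS QoS with min/max/burst levels.

    Hypothetical illustration of the credit scheme described above;
    not SolidFire's code. Rates are IOPS granted per one-second tick.
    """

    def __init__(self, min_iops=100, max_iops=500, burst_iops=1000):
        self.min_iops = min_iops      # floor when the array is overloaded
        self.max_iops = max_iops      # sustained ("standard") level
        self.burst_iops = burst_iops  # short-term ceiling, paid for by credits
        self.credits = 0              # unused IOPS banked while running slow

    def allowance(self, demand):
        """Return the IOPS granted for one second of the given demand."""
        if demand <= self.max_iops:
            # Running below the standard level banks credits for later bursts.
            self.credits += self.max_iops - demand
            return demand
        # Spend banked credits to burst above the standard level,
        # but never beyond the burst ceiling.
        spend = min(demand - self.max_iops,
                    self.credits,
                    self.burst_iops - self.max_iops)
        self.credits -= spend
        return self.max_iops + spend
```

A volume idling at 200 IOPS for one second banks 300 credits; a subsequent demand of 900 IOPS is then granted 800 (the 500 standard level plus the 300 banked).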

These performance SLAs (Service Level Agreements) are based on SolidFire using Intel SSDs. A look at these reveals an apparent performance gap, with IOPS left on the table.

A SolidFire node has ten 300GB SSDs. A quick look at reference material suggests these are Intel 320s, which Prassl confirms. An Intel 320 can do 23,000 write IOPS and 39,500 read IOPS, so ten of them should manage between 230,000 and 395,000 IOPS. Yet a SolidFire node delivers only 50,000 IOPS. It's as if 180,000 to 345,000 IOPS have gone missing. Why is that?

CEO Dave Wright said: "We're running mixed read/write workloads [and] our replication means the write IOPS are doubled. We set aside an IOPS allowance for rebuilds in background plus other internal stuff. So the 50,000 IOPS is the delivered IOPS to customers." In other words, the SolidFire SSDs are running faster than the 50,000 IOPS made available for customers' I/O – a conservative number anyway – meaning SolidFire has performance headroom, which is reassuring.
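Wright's accounting can be checked with back-of-envelope arithmetic. The per-drive figures come from the article; the workload mix and the housekeeping reserve below are our own assumed numbers, purely for illustration:

```python
# Intel 320 figures quoted in the article; mix and reserve are our guesses.
READ_IOPS, WRITE_IOPS, N_SSDS = 39_500, 23_000, 10

raw_read = N_SSDS * READ_IOPS    # 395,000: all-read best case
raw_write = N_SSDS * WRITE_IOPS  # 230,000: all-write worst case

# Assume a 50/50 read/write mix where replication doubles each write:
# every customer write consumes two drive writes, so weight writes
# accordingly (harmonic mean of the per-op service rates).
read_frac = 0.5
mixed_per_drive = 1 / (read_frac / READ_IOPS +
                       (1 - read_frac) * 2 / WRITE_IOPS)
per_node_mixed = N_SSDS * mixed_per_drive   # roughly 178,000 IOPS

# Reserve half (again, our guess) for rebuilds and internal housekeeping.
delivered = per_node_mixed * 0.5            # roughly 89,000 IOPS
```

Even with these pessimistic assumptions the node lands well above 50,000 IOPS, which is consistent with Wright's claim that the published figure is conservative.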

SolidFire offers guaranteed IOPS levels, something that can be readily metered and billed for by cloud service providers, and so the array is instrumented like a car's speedometer; pretty nifty.

It seems to us that there is an opportunity for SolidFire to use faster single-level cell flash if it wanted to push the performance envelope higher. But it gets cost advantages from using multi-level cell NAND. It tries to sequentialise writes, reducing write amplification and extending the flash's working life. Prassl confirmed that SolidFire engineers are looking with interest at 3-bit multi-level cell flash which, if the working life proved satisfactory, would enable them to raise capacity and/or lower cost.

He pointed out that SolidFire deduplication is global, working across all volumes, whereas "NetApp ASIS only dedupes on a per-volume basis in an array and not across volumes in an array." Alex McDonald from NetApp's office of the CTO confirmed this but said NetApp can have many, many LUNS in a volume.
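Global deduplication of this kind can be pictured as a single content-addressed block store shared by every volume, with each volume holding only pointers. A schematic sketch – our own toy model, not SolidFire's or NetApp's code:

```python
import hashlib

class GlobalDedupStore:
    """Toy content-addressed store: identical blocks written to ANY
    volume are kept once, keyed by their content hash. Illustrative
    only; real arrays add reference counting, compression, etc."""

    def __init__(self):
        self.blocks = {}    # content hash -> block payload (stored once)
        self.volumes = {}   # volume name -> {logical block addr: hash}

    def write(self, volume, lba, data):
        key = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(key, data)   # payload stored once, globally
        self.volumes.setdefault(volume, {})[lba] = key

    def read(self, volume, lba):
        return self.blocks[self.volumes[volume][lba]]

    def unique_blocks(self):
        return len(self.blocks)
```

Writing the same 4KB block to two different volumes stores one copy; a per-volume scheme of the sort attributed to NetApp ASIS above would keep two.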

Prassl wouldn't supply cost/GB numbers for SolidFire but said its prices would be the same as or less than traditional SAN arrays from mainstream SAN vendors like EMC.

He said remote replication was likely coming in Q3 of 2012. General availability of SolidFire is scheduled for the second 2012 quarter, with the product currently being tested in an early access program. There is some 500TB of capacity under evaluation in this program.

SolidFire is focused pretty exclusively on cloud service providers and has good, capable software for them. TMS, Violin and others could perhaps blow it out of the water in performance-per-node terms, but they have no cloud service provider-focused software, and that, Prassl said, is crucial for SolidFire's customers.

It seems to El Reg that SolidFire could possibly store bulk (nearline-ish) data on disk and tier it to flash. If write levels got too high, the SSD quality could be upgraded to enterprise-grade MLC. But this is a start-up close to product GA, and it is focusing like a laser on its markets and getting reliable, robust product out. Extending its capabilities is for the future. We speculate, by the way, that SolidFire and Arista will co-operate to offer an Arista low-latency switch and SolidFire array bundle to cloud service providers. ®
