Flash array start-up SolidFire begins the hard sell

'Virtualising performance'... with IOPS to spare

Flash array startup SolidFire says it virtualises performance. What does it mean?

At a press briefing in Arista's offices (yes, Arista), we learnt that SolidFire offers an all-flash array with in-line deduplication, compression and thin provisioning to effectively increase the capacity and lower the cost/GB of its product versus traditional SAN arrays.

Marketing veep Jay Prassl says SolidFire offers SAN block access at cloud scale, and guarantees storage IOPS performance to thousands of volumes in one infrastructure. You can dial performance up or down separately from capacity, and specify three IOPS numbers per application: minimum, maximum and burst mode.

The maximum IOPS number is what the customer is normally delivered – say 500 – an IOPS being any read or write access to the array. The minimum, say 100, is how low the rate can drop if the array is very busy and getting overloaded. For time spent below the maximum, the customer builds up credits which can be "spent" on burst IOPS above that maximum. Since burst mode can exceed it, the "maximum" is not literally a maximum; a better term would be the standard level.
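The credit mechanism described above can be sketched in a few lines of Python. This is purely illustrative – the class and method names are our invention, not SolidFire's API, and real QoS enforcement happens inside the array, not in customer code:

```python
# Illustrative sketch of SolidFire-style per-volume QoS (all names hypothetical).
# Running below the standard ("maximum") IOPS level earns credits, which can
# later be spent to burst above it, up to the burst ceiling.

class VolumeQos:
    def __init__(self, minimum, maximum, burst):
        self.minimum = minimum   # floor when the array is overloaded
        self.maximum = maximum   # standard delivered level
        self.burst = burst       # ceiling reachable by spending credits
        self.credits = 0         # banked IOPS from running under the standard level

    def allowed_iops(self, requested, overloaded=False):
        """Return the IOPS granted for one second of activity on this volume."""
        if overloaded:
            # A busy array can throttle the volume down to its minimum
            return min(requested, self.minimum)
        if requested <= self.maximum:
            # Running under the standard level banks the shortfall as credit
            self.credits += self.maximum - requested
            return requested
        # Bursting: spend credits to exceed the standard level, up to the ceiling
        extra = min(requested - self.maximum,
                    self.burst - self.maximum,
                    self.credits)
        self.credits -= extra
        return self.maximum + extra
```

With the article's example numbers, a volume set to 100/500/1,000 that runs at 300 IOPS for a second banks 200 credits, which it can then spend to be granted 700 IOPS in a later busy second.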

These performance SLAs (Service Level Agreements) are based on SolidFire using Intel SSDs. A look at these reveals an apparent performance gap, with IOPS left on the table.

A SolidFire node has ten 300GB SSDs. A quick look at reference material suggests these are Intel 320s, which Prassl confirms. An Intel 320 can do 23,000 write IOPS and 39,500 read IOPS, so ten of them should manage between 230,000 and 395,000 IOPS. Yet a SolidFire node delivers only 50,000 IOPS. It's as if 190,000 to 345,000 IOPS have gone missing. Why is that?

CEO Dave Wright said: "We're running mixed read/write workloads [and] our replication means the write IOPS are doubled. We set aside an IOPS allowance for rebuilds in background plus other internal stuff. So the 50,000 IOPS is the delivered IOPS to customers." In other words, the SolidFire SSDs are running faster than the 50,000 IOPS made available for customers' I/O, and this is a conservative number anyway, meaning SolidFire has performance headroom, which is reassuring.
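Wright's accounting can be roughed out with some back-of-envelope arithmetic. The 70/30 read/write mix below is our illustrative assumption, not a SolidFire figure; the point is simply that replication-doubled writes still leave the drives well short of their raw capability:

```python
# Back-of-envelope for the "missing" IOPS: replication turns each customer
# write into two back-end writes, and further back-end IOPS are reserved for
# rebuilds and housekeeping. The workload mix here is an assumption.

raw_write_iops = 10 * 23_000   # 230,000: ten Intel 320s at the write spec
raw_read_iops  = 10 * 39_500   # 395,000: ten Intel 320s at the read spec

customer_iops = 50_000              # what SolidFire actually delivers per node
read_mix, write_mix = 0.7, 0.3      # assumed 70/30 read/write mix

backend_reads  = customer_iops * read_mix        # 35,000 back-end reads
backend_writes = customer_iops * write_mix * 2   # 30,000: writes doubled by replication

# Fraction of raw drive capability consumed by customer I/O
utilisation = (backend_reads / raw_read_iops) + (backend_writes / raw_write_iops)
print(f"{utilisation:.0%} of raw back-end capability used")
```

On these assumptions the customer-facing 50,000 IOPS consumes only about a fifth of what the drives can do, which is consistent with Wright's claim that the figure is conservative and leaves headroom for rebuilds and internal work.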

SolidFire offers guaranteed IOPS levels, something cloud service providers can readily meter and bill for, and so the array is instrumented like a car's speedometer; pretty nifty.

It seems to us that there is an opportunity for SolidFire to use faster single-level cell (SLC) flash if it wanted to push the performance envelope higher, but it gets cost advantages from using multi-level cell (MLC) NAND. It tries to sequentialise writes, reducing write amplification and extending the flash's working life. Prassl confirmed that SolidFire engineers are looking with interest at 3-bit multi-level cell flash which, if its working life proved satisfactory, would let them raise capacity and/or lower cost.

He pointed out that SolidFire deduplication is global, working across all volumes, whereas "NetApp ASIS only dedupes on a per-volume basis in an array and not across volumes in an array." Alex McDonald from NetApp's office of the CTO confirmed this, but said NetApp can have many, many LUNs in a volume.

Prassl wouldn't supply cost/GB numbers for SolidFire but said its prices would be the same as or less than traditional SAN arrays from mainstream SAN vendors like EMC.

He said remote replication was likely coming in Q3 of 2012. General availability of SolidFire is scheduled for the second 2012 quarter, with the product currently being tested in an early access program. There is some 500TB of capacity under evaluation in this program.

SolidFire is focused pretty exclusively on cloud service providers and has good, capable software for them. TMS, Violin and others could perhaps blow it out of the water in performance-per-node terms, but they have no cloud service provider-focused software, and that, Prassl said, is crucial for SolidFire's customers.

It seems to El Reg that SolidFire could possibly store bulk (nearline-ish) data on disk and tier it to flash. If the write levels get too high then SSD quality could be upgraded to enterprise-grade MLC. But this is a start-up close to product GA, and it is focusing like a laser on its markets and getting reliable and robust product out. Extending its capabilities is for the future. We speculate by the way, that SolidFire and Arista will co-operate to offer an Arista low-latency switch and SolidFire array bundle to cloud service providers. ®
