Clustering SSD arrays for the Cloud

SolidFire's cloudy take on memory arrays

SolidFire has announced clustered solid state drive memory arrays for the cloud, which minimise cost/GB with thin provisioning, compression and deduplication.

It's a straightforward enough clustering of storage arrays, but what is not straightforward is the use of expensive NAND SSDs instead of disk drives, and then optimising their cost/GB with always-on thin provisioning – allocating SSD capacity only as written data needs it, rather than in larger, unused upfront chunks – plus compression and in-line deduplication.
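
SolidFire hasn't described how its dedupe works internally, but the general in-line idea is simple: fingerprint each incoming block and only store blocks whose fingerprints haven't been seen before, so duplicate writes cost no extra space. A minimal sketch, assuming fixed 4KB blocks and SHA-256 fingerprints (both assumptions of ours, not disclosed details):

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size; SolidFire's actual unit isn't public


class DedupStore:
    """Toy in-line deduplicating block store: only unique blocks consume space."""

    def __init__(self):
        self.blocks = {}       # fingerprint -> block payload (the "physical" store)
        self.volume_maps = {}  # volume -> {logical block address: fingerprint}

    def write(self, volume, lba, data):
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)                  # store payload only if unseen
        self.volume_maps.setdefault(volume, {})[lba] = fp

    def read(self, volume, lba):
        return self.blocks[self.volume_maps[volume][lba]]

    def physical_bytes(self):
        # thin provisioning: capacity is consumed only by unique written blocks
        return sum(len(b) for b in self.blocks.values())


store = DedupStore()
store.write("vol1", 0, b"A" * BLOCK_SIZE)
store.write("vol2", 0, b"A" * BLOCK_SIZE)  # duplicate block: no extra physical space
print(store.physical_bytes())              # 4096, not 8192
```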

A rackful of SolidFire SF3010 nodes

These three features are possible together because NAND is so much faster than disk. They are also helped by the controller of the 1U SF3010 node having two six-core Xeon CPUs. The Element O/S running on it looks after ten 300GB SSDs.

That provides 3TB of raw flash space per node, which SolidFire says becomes 12TB of effective usable space once thin provisioning, compression and dedupe are taken into account, plus deduped clones and snapshots.

There can be from three to 100 nodes in a cluster, supporting up to 100,000 iSCSI LUNs.

A maximum configured cluster would have 300TB raw and 1.2PB of effective usable capacity. Individual customers' effective capacity could well differ from the 4:1 multiplier assumed by SolidFire.
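
The arithmetic behind those numbers is simple enough; here it is spelled out, using SolidFire's assumed 4:1 efficiency multiplier (real-world results will vary):

```python
# SolidFire SF3010 capacity arithmetic, using the vendor's assumed 4:1 efficiency ratio
ssds_per_node = 10
ssd_capacity_gb = 300
efficiency = 4                   # thin provisioning + compression + dedupe, per SolidFire

raw_per_node_tb = ssds_per_node * ssd_capacity_gb / 1000   # 3TB raw per 1U node
effective_per_node_tb = raw_per_node_tb * efficiency       # 12TB effective per node

max_nodes = 100
raw_cluster_tb = max_nodes * raw_per_node_tb                       # 300TB raw
effective_cluster_pb = max_nodes * effective_per_node_tb / 1000    # 1.2PB effective

print(raw_per_node_tb, effective_per_node_tb, raw_cluster_tb, effective_cluster_pb)
```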

A maximum of 1.2PB doesn't sound much for cloud-scale storage, and the pricing, compared with 1.2PB of effective capacity from disk-based storage, will be of great interest to potential customers. Customers may well also like the idea of larger clusters and more individual node capacity in the future, along with cluster-to-cluster protection facilities.

Accessing clients get their data across 10GbitE links, and there is a RESTful API. Quality-of-service SLAs are possible, and a node's data contents can be replicated to another node for data protection.
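
SolidFire hasn't published that API yet, so the following is purely illustrative: a hypothetical REST call to create a thin-provisioned volume with a QoS ceiling, with the endpoint, field names and credentials all invented for the sake of the example.

```python
import requests  # third-party HTTP client

# Hypothetical management call: the real endpoints, parameters and auth scheme are
# not public at announcement time, so everything below is invented for illustration.
CLUSTER = "https://cluster.example.com/api/v1"

resp = requests.post(
    f"{CLUSTER}/volumes",
    json={
        "name": "tenant42-vol01",
        "sizeGB": 500,                 # thin-provisioned: space allocated as written
        "qos": {"maxIOPS": 5000},      # per-volume quality-of-service ceiling
    },
    auth=("admin", "secret"),
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```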

Are the SSDs using fast single-level cell (SLC) or slower but cheaper 2-bit multi-level cell (MLC) flash? We'd suppose there have to be MLC SSDs in there to keep the cost manageable.

That means that when 3-bit MLC comes along next year there could be a node capacity jump of up to 50 per cent, assuming 2-bit MLC is used currently.

What we have here is a sixth all-flash, network-access, memory array supplier, alongside Nimbus, Solid Access, Texas Memory Systems, Violin Memory and Whiptail. Solid Access is a flash-based filer. Nimbus, TMS and Violin use flash cards while Whiptail and SolidFire use SSDs.

SolidFire pricing has not been revealed and availability could be by the end of the year or early in 2012. Potential customers can sign up for an early access program. ®
