
Clustering SSD arrays for the Cloud

SolidFire's cloudy take on memory arrays

SolidFire has announced clustered solid state drive memory arrays for the cloud, which minimise cost/GB with thin provisioning, compression and deduplication.

It's a straightforward enough clustering of storage arrays, but what is not straightforward is the use of expensive NAND SSDs instead of disk drives, and then optimising their cost/GB with always-on thin provisioning – allocating SSD capacity only as data is actually written, rather than in large, upfront and unused chunks – plus compression and in-line deduplication.
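To make the thin provisioning and dedupe ideas concrete, here is a minimal sketch of our own – emphatically not SolidFire's Element OS code, and with compression left out for brevity – of a volume that allocates physical blocks only on write and keeps a single copy of identical blocks:

    # Illustrative sketch only -- not SolidFire's implementation.
    import hashlib

    BLOCK_SIZE = 4096

    class ThinDedupVolume:
        def __init__(self, logical_blocks):
            self.logical_blocks = logical_blocks  # capacity advertised to the host
            self.block_map = {}                   # logical block index -> content hash
            self.store = {}                       # content hash -> physical block data

        def write(self, index, data):
            digest = hashlib.sha256(data).hexdigest()
            self.store.setdefault(digest, data)   # dedupe: one copy per unique block
            self.block_map[index] = digest

        def read(self, index):
            digest = self.block_map.get(index)
            if digest is None:                    # never written: thin volume returns zeroes
                return b"\x00" * BLOCK_SIZE
            return self.store[digest]

        def physical_blocks_used(self):
            return len(self.store)                # physical allocation, not advertised size

    vol = ThinDedupVolume(logical_blocks=1_000_000)  # ~4GB advertised, nothing allocated yet
    vol.write(0, b"A" * BLOCK_SIZE)
    vol.write(1, b"A" * BLOCK_SIZE)                  # identical content
    print(vol.physical_blocks_used())                # 1: both logical blocks share one copy

Two logical writes of identical 4KB blocks consume one physical block – the effect the effective-capacity multiplier below leans on.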

A rackful of SolidFire SF3010 nodes

These three features are collectively possible because of NAND's speed advantage over disk. They are also helped by the 1U SF3010 node's controller having two 6-core Xeon CPUs. The Element OS running on it looks after ten 300GB SSDs.

That provides 3TB of raw flash capacity, which SolidFire says becomes 12TB of effective usable space thanks to thin provisioning, compression and dedupe, plus deduped clones and snapshots.

There can be from three to 100 nodes in a cluster, supporting up to 100,000 iSCSI LUNs.

A maximum configured cluster would have 300TB of raw and 1.2PB of effective usable capacity. Individual customers' effective capacity could well differ from the 4:1 multiplier assumed by SolidFire.
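For the record, the arithmetic behind those numbers, taking the 4:1 multiplier at face value, is simple enough to check:

    # Capacity arithmetic behind the figures above, assuming the 4:1
    # efficiency multiplier SolidFire quotes.
    ssds_per_node = 10
    ssd_capacity_gb = 300
    efficiency = 4                   # thin provisioning + compression + dedupe

    raw_per_node_gb = ssds_per_node * ssd_capacity_gb     # 3,000GB = 3TB raw
    effective_per_node_gb = raw_per_node_gb * efficiency  # 12,000GB = 12TB

    max_nodes = 100
    print(max_nodes * raw_per_node_gb // 1000)        # 300  (TB raw)
    print(max_nodes * effective_per_node_gb // 1000)  # 1200 (TB, ie 1.2PB effective)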

A maximum of 1.2PB doesn't sound much for cloud-scale storage, and the pricing, compared with 1.2PB of effective capacity in disk storage, will be of great interest to potential customers. Customers may also well like the idea of larger clusters and more individual node capacity in the future, as well as cluster-to-cluster protection facilities.

Accessing clients get their data across 10GbitE links, with a RESTful API also available. Quality-of-service SLAs are possible, and a node's data contents can be replicated to another node for protection.

Are the SSDs using fast single-level cell (SLC) or slower but cheaper 2-bit multi-level cell (MLC) flash? We'd suppose there have to be MLC SSDs in there to keep the cost manageable.

That means that when 3-bit MLC comes along next year there could be a node capacity jump of up to 50 per cent, assuming 2-bit MLC is used currently.
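The 50 per cent figure is just the bits-per-cell ratio – a quick check, assuming the same cell count per SSD:

    # Moving from 2-bit to 3-bit MLC stores 3/2 the bits in the same cells
    bits_per_cell_now, bits_per_cell_next = 2, 3
    gain = (bits_per_cell_next / bits_per_cell_now - 1) * 100
    print(gain)   # 50.0 (per cent capacity increase, other things being equal)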

What we have here is a sixth all-flash, network-access memory array supplier, alongside Nimbus, Solid Access, Texas Memory Systems, Violin Memory and Whiptail. Solid Access is a flash-based filer. Nimbus, TMS and Violin use flash cards, while Whiptail and SolidFire use SSDs.

SolidFire pricing has not been revealed and availability could be by the end of the year or early in 2012. Potential customers can sign up for an early access program. ®
