StorSpeed pushes delicious filer flash cacher

Issues passionate wish for success

Startup StorSpeed has launched an application-aware clustered file-caching product: the SP5000.

The aim is to correct a claimed mismatch between a server's hunger for data and the storage subsystem's ability to deliver it fast enough. Like Avere, Symantec with FileStore, and Dataram on the block-access side, StorSpeed says it wants to stop the resulting over-provisioning of the storage subsystem.

The SP5000 product set sits in-band, in front of backend file storage arrays, and tracks and caches NFS and CIFS packets flowing from them. There are three components: the FD1200 flow director box, clustered SP5000 caching nodes and a System Manager.

The FD1200 processes 360 million packets per second. It has 24 non-blocking ports, and one FD1200 can fail over to another. The Flow Director looks after client and storage connectivity, filtering and distribution of workflows, and cluster connectivity between SP5000s.

An SP5000 has special Field Programmable Gate Array (FPGA) hardware - shades of BlueArc - 80GB of DRAM and up to eight 2.5-inch SAS drives or solid state drives in its 2U enclosure. The SSDs are the main focus here. With two processors and 64-thread support, the SP5000 can output 350,000 IOPS with a 10Gbit/s bandwidth.

The components are redundant and the Flow Director provides protection against SP5000 node failure. Up to six SP5000 nodes can be clustered, with a 10gigE cluster interconnect, to enable "data replication on writes and pass-through operation in case of cluster failure, eliminating downtime." There is a theoretical limit of 256 nodes and the clustering takes performance up to a million IOPS and beyond.
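
StorSpeed hasn't said how the replication and pass-through behaviour is implemented, but the general idea - mirror writes to a peer cache node, and fall straight back to the back-end filer if the cache tier dies - can be sketched roughly as follows. Every class and function name here is illustrative, not StorSpeed's code.

# A rough sketch of the failover behaviour described above: writes are mirrored
# to a peer cache node, and if the cache tier is down, requests pass straight
# through to the back-end filer. Purely illustrative - not StorSpeed's code.

class BackendFiler:
    def __init__(self):
        self.store = {}

    def write(self, path, data):
        self.store[path] = data

    def read(self, path):
        return self.store.get(path)

class CacheNode:
    def __init__(self, peer=None):
        self.cache = {}
        self.peer = peer
        self.alive = True

    def write(self, path, data):
        self.cache[path] = data
        if self.peer and self.peer.alive:   # replicate the write to a clustered peer
            self.peer.cache[path] = data

def handle_write(path, data, node, filer):
    # Writes always land on the back-end filer; the cache tier is used when it's up.
    if node.alive:
        node.write(path, data)
    filer.write(path, data)

def handle_read(path, node, filer):
    if node.alive and path in node.cache:
        return node.cache[path]             # cache hit served from the SP5000 tier
    return filer.read(path)                 # pass-through on a miss or cluster failure

# Example: reads keep working from the filer even after the cache node "fails".
filer, node = BackendFiler(), CacheNode(peer=CacheNode())
handle_write("/vol1/file.bin", b"hello", node, filer)
node.alive = False
print(handle_read("/vol1/file.bin", node, filer))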

The web-based System Manager application "provides a simple view of information such as filer latency and throughput, most active files, file sizes and distributions, SP5000 latency and throughput, cache utilization, hit and miss ratio, and profile effectiveness."

The system supports both 1gigE and 10gigE connectivity. The back-end filers can be stand-alone, clustered or even in the cloud.

The caching is intelligent. The system watches the packet flow between servers and backend storage and identifies what is called an active data set. Data in this set is cached, or not, according to performance profiles set by sysadmins, which specify which data is to be accelerated (cached) in terms of application, storage attributes, client, and network parameters. Administrators "can set up performance profiles for applications, clients, protocols, file types, file sizes,...a combination of network and file parameters, (and during) scheduled run-times."
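
StorSpeed hasn't published its profile format, but the idea of a performance profile - match on file type, size, client, and a scheduled run-time window, then decide whether to accelerate - can be sketched along these lines. Every name and field below is hypothetical, not the product's actual interface.

# Hypothetical sketch of a caching performance profile - not StorSpeed's actual
# configuration format, just an illustration of the concept.
from dataclasses import dataclass, field
from datetime import time
from fnmatch import fnmatch

@dataclass
class PerformanceProfile:
    name: str
    file_patterns: list = field(default_factory=lambda: ["*"])  # e.g. ["*.dbf", "*.vmdk"]
    max_file_size: int = 2**40                                   # only cache files below this size
    clients: list = field(default_factory=lambda: ["*"])        # client IPs/subnets to accelerate
    window: tuple = (time(0, 0), time(23, 59))                   # scheduled run-time window

    def should_cache(self, path, size, client, now):
        """Decide whether a given file access should be served from cache."""
        in_window = self.window[0] <= now <= self.window[1]
        return (in_window
                and size <= self.max_file_size
                and any(fnmatch(path, p) for p in self.file_patterns)
                and any(fnmatch(client, c) for c in self.clients))

# Example: accelerate database files for one subnet during business hours only.
oltp = PerformanceProfile("oltp-hot", file_patterns=["*.dbf"],
                          clients=["10.1.2.*"], window=(time(8, 0), time(18, 0)))
print(oltp.should_cache("/vol1/oradata/users01.dbf", 4 << 30, "10.1.2.17", time(10, 30)))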

The idea is that filers can use bulk SATA disks at high utilisation and let the StorSpeed caches provide the performance that would otherwise require lots of fast disks - short-stroked fast disks, too, in extreme circumstances. The automated hot data set identification is said to spare sysadmins the manual profiling of data traffic between servers and filers.
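
How the active data set is identified hasn't been disclosed, but the general technique - count accesses per file over a sliding window of the NFS/CIFS request stream and treat the most-touched files as hot - looks roughly like this toy sketch, which is not StorSpeed's algorithm.

# Toy sketch of automated hot-data-set identification: count accesses per file
# over a sliding time window and treat the most-touched files as the active set.
import time
from collections import defaultdict, deque

class ActiveSetTracker:
    def __init__(self, window_seconds=300, top_n=1000):
        self.window = window_seconds
        self.top_n = top_n
        self.events = deque()            # (timestamp, path) in arrival order
        self.counts = defaultdict(int)   # path -> accesses inside the window

    def record(self, path, now=None):
        """Record one file access observed in the NFS/CIFS stream."""
        now = now or time.time()
        self.events.append((now, path))
        self.counts[path] += 1
        self._expire(now)

    def _expire(self, now):
        # Drop accesses that have fallen out of the sliding window.
        while self.events and now - self.events[0][0] > self.window:
            _, old_path = self.events.popleft()
            self.counts[old_path] -= 1
            if self.counts[old_path] == 0:
                del self.counts[old_path]

    def active_set(self):
        """Return the hottest files - the candidates worth holding in flash/DRAM."""
        ranked = sorted(self.counts.items(), key=lambda kv: kv[1], reverse=True)
        return [path for path, _ in ranked[:self.top_n]]

# Example: a database file hammered repeatedly floats to the top of the set.
tracker = ActiveSetTracker(window_seconds=60, top_n=5)
for _ in range(50):
    tracker.record("/vol1/oradata/users01.dbf")
tracker.record("/vol1/home/readme.txt")
print(tracker.active_set())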

All of a sudden, there are more filer caching devices on the market. Even NetApp is working along these lines with its Performance Acceleration Module (PAM) card for its controllers. The StorSpeed (and Avere and FileStore) advantage is that the technology applies to multiple, heterogeneous filers.

The SP5000 is available now, and pricing starts at $65,000. For that, you get 80GB of DRAM, a suite of management and reporting software, and eight drive bays for flash-based SSDs. ®
