Fusion-io straps on NetApp feedbag for cache feeding frenzy

Virtual flash storage insanity

NetApp and Fusion-io are working to link flash-accelerated servers with NetApp arrays and dynamically move data from the arrays to Fusion's ioMemory flash cards in the servers using NetApp's VST technology.

Fusion has issued a statement about this, saying its own server caching software will be involved as well. VST is NetApp's Virtual Storage Tiering: it uses flash as a cache within an array (Flash Cache) or a volume (Flash Pool) to accelerate I/O, avoiding the complexity of separate storage tiers and the software needed to shuttle data between them to balance performance against cost. Every other mainstream storage array vendor has embraced such tiering.
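For readers who want the general idea in concrete terms, the sketch below shows, in Python, roughly what a server-side read cache does: hot blocks are served from fast local flash, and misses fall through to the backing array. It is a minimal, hypothetical illustration of the caching pattern only, not NetApp's VST or Fusion-io's software; real products also have to handle writes and coherence with the array, none of which is shown here, and all the names in it are invented.

    # Conceptual sketch of a read-through server-side flash cache, for
    # illustration only. This is NOT NetApp's VST or Fusion-io's caching
    # software; every name here (FlashCache, array_read, capacity_blocks)
    # is invented for the example.
    from collections import OrderedDict

    class FlashCache:
        """LRU read cache sitting in front of a slower backing array."""

        def __init__(self, array_read, capacity_blocks=1024):
            self.array_read = array_read        # callable: block id -> data
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()         # block id -> cached data

        def read(self, block_id):
            if block_id in self.blocks:
                # Cache hit: serve from flash, mark the block recently used.
                self.blocks.move_to_end(block_id)
                return self.blocks[block_id]
            # Cache miss: fetch from the array and populate the cache.
            data = self.array_read(block_id)
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            return data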

NetApp has said it will extend VST beyond its arrays, and this is the first tangible sign of it doing so. It has also said it will work with third-party server flash cache suppliers; here it is, working with Fusion-io.

Tim Russell, the Data Lifecycle Ecosystem Group VP at NetApp, was quoted in the release: "Fusion-io provides leading data acceleration technology that, in combination with technologies in our Virtual Storage Tier, such as Flash Cache and Flash Pool, will enable rapid, low-latency assessment of workload priorities, resulting in low-cost high performance solutions to the exponential data growth our customers face today."

Fusion's CTO, Neil Carson, was quoted in the release as well: "We believe software-defined solutions will be able to deliver much greater efficiency for customers, enabling them to do more than ever before at a fraction of the cost of legacy systems."

From NetApp's point of view, having its networked arrays feed individual server caches is great: there is no risk of server cache sales cannibalising its array sales. Fusion is the market leader in server flash cache cards and has OEM deals with Cisco, Dell, IBM and others, which is good news for NetApp as well.

HP, though, has said it will link its 3PAR arrays to SmartCache in its Gen8 ProLiant servers, so NetApp won't have a clear run there. There is obvious potential for FlexPod, the converged system combining Cisco servers and networking with NetApp storage arrays, to use UCS blades fitted with Fusion flash and fed by NetApp arrays.

The faster NetApp can get this cache feed linkage working, the better. ®
