NetApp's dangerous dance with the killer brontosaurus - Amazon

One wrong step and you're paste

Blocks+Files What is NetApp up to, partnering with a cloud IT provider, Amazon, that's positioned long-term to try and annihilate NetApp's business?

Let's start from the position that you are a NetApp customer with your own FAS array and you want to use Amazon Web Services, the AWS cloud computing facility, on some data in that array. How do you get it up into the AWS cloud?

One way is to move that data into the cloud and leave it there, front-ending it with a cloud storage gateway in your own data centre so you get fast local access to cached copies of the cloud-stored data. But that means committing data storage to the cloud, and you may not want to do that. There's also the problem of loading potentially tens, even hundreds, of terabytes into the cloud in the first place. Sending a truckful of disks is often the most effective way of doing that, as a quick sum shows.
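To put numbers on that, here's a back-of-the-envelope calculation in Python - a sketch using our own illustrative assumptions of 100TB of data and a dedicated 100Mbit/s uplink running flat out, which is generous for many sites:

```python
# Rough transfer-time estimate for bulk upload to the cloud.
# Assumptions (ours, for illustration): 100TB of data and a
# 100Mbit/s uplink running at 100 per cent efficiency.

data_tb = 100                      # terabytes to move
link_mbps = 100                    # uplink speed, megabits per second

data_bits = data_tb * 1e12 * 8     # terabytes -> bits
seconds = data_bits / (link_mbps * 1e6)
days = seconds / 86400

print(f"{data_tb}TB over {link_mbps}Mbit/s: about {days:.0f} days")
# -> about 93 days, before retries, overhead or contention
```

Three months on the wire versus a few days in the back of a lorry: the truck wins at this scale.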

If you wish to keep your master data on your own premises, where it is constantly being updated, and have AWS work on up-to-date versions of that data, then you have a problem. NetApp has an answer which involves an Amazon co-location facility, a second FAS array, and replication of data from your own FAS to the colo FAS. It's called NetApp Private Storage for AWS.

First, a second FAS array is placed in a co-location (colo) facility which has a Direct Connect link to AWS. Then data is replicated, using SnapMirror and SnapVault, from your on-premise FAS array to the colo FAS, and AWS EC2 and so forth gets to process it over the Direct Connect link. Updated data gets replicated back to the on-premise FAS. There's a NetApp blog about it here.
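From the compute side the moving parts are simple enough to sketch. Here's a minimal Python illustration - entirely our own, with hypothetical host, export and path names rather than anything NetApp or Amazon ships - of an EC2 instance mounting the colo FAS over NFS across the Direct Connect link and working on the data in place:

```python
# Sketch: an EC2 instance mounts an NFS export from the colo FAS over
# the Direct Connect link and processes files in place. The hostname,
# export and mount point are hypothetical examples. Needs root.
import os
import subprocess

COLO_FAS = "colo-fas.example.internal"   # hypothetical colo FAS address
EXPORT = "/vol/aws_work"                 # hypothetical NFS export
MOUNT_POINT = "/mnt/fas"

os.makedirs(MOUNT_POINT, exist_ok=True)

# Reads and writes travel over Direct Connect; nothing is committed
# to AWS storage, which is the point of the architecture.
subprocess.check_call(
    ["mount", "-t", "nfs", f"{COLO_FAS}:{EXPORT}", MOUNT_POINT])

# Work on the replicated data; updates land on the colo FAS and are
# replicated back to the on-premise array by SnapMirror.
for name in os.listdir(MOUNT_POINT):
    print("processing", os.path.join(MOUNT_POINT, name))
```

The point is that EC2 sees ordinary NFS storage; the replication plumbing stays NetApp's problem.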

So, to take advantage of pay-as-you-go, on-demand and elastic AWS, you have to have a second FAS array in a colo centre, buy the SnapMirror and SnapVault licences, and pay for the network link to the colo centre. In effect you are paying to place a non-cloud FAS inside the AWS infrastructure as an AWS edge device.

Why? Why do you need this? Sure, you get a business continuity/disaster recovery FAS array and an AWS compute stack, and if that's what you want, fine. But as a way of getting AWS compute to work on your FAS data it looks expensive. Also, it's only available in America right now, with geographic region expansion coming.

Direct Connect

Looking at Amazon Direct Connect we read:-

AWS Direct Connect makes it easy to establish a dedicated network connection from your premise to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations.

A Direct Connect FAQ states:-

Q. Can I use AWS Direct Connect if my network is not present at an AWS Direct Connect location?

Yes. APN Partners supporting AWS Direct Connect can help you extend your pre-existing data centre or office network to an AWS Direct Connect location …
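Once a connection is provisioned it is visible through the AWS API like any other resource. Here's a minimal sketch with the boto3 Python SDK - describe_connections() is a real Direct Connect API call, while the region choice and the printed fields are our own:

```python
# List the Direct Connect connections in this account. A hedged
# sketch: the region is an assumption, the API call is real.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")

for conn in dx.describe_connections()["connections"]:
    print(conn["connectionName"], conn["connectionState"],
          conn["bandwidth"], "at", conn["location"])
```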

Why not then cut out the colo FAS and replication software and have Direct Connect link directly to your on-premise FAS?

There may be a faster network link between the AWS Direct Connect-equipped colo and AWS than between your own data centre and AWS. If so, the costs and speeds of the two approaches need comparing to see which suits you best; the rough sums below illustrate the shape of that comparison.
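A crude way to frame it is to ask how long a daily changed-data delta would take over each path. Both the link speeds and the delta size below are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope comparison of shipping a daily changed-data
# delta to AWS by the two routes. All figures are illustrative.

delta_gb = 1000        # assumed daily change rate, gigabytes

dc_link_mbps = 100     # assumed WAN link straight from your data centre
colo_link_mbps = 1000  # assumed 1Gbit/s Direct Connect at the colo

def hours(gb, mbps):
    """Transfer time in hours for gb gigabytes over an mbps link."""
    return gb * 8e9 / (mbps * 1e6) / 3600

print(f"Direct to on-premise FAS: {hours(delta_gb, dc_link_mbps):.1f} hours/day")
print(f"Via colo FAS:             {hours(delta_gb, colo_link_mbps):.1f} hours/day")
# -> roughly 22 hours versus 2.2 hours on these assumptions
```

On those numbers the colo route wins comfortably on speed; whether it still wins once the second FAS, the licences and the colo bill are added in is the question.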

Endgames

We might imagine Amazon is talking to EMC, Dell, HDS, HP and IBM about similar partnering arrangements, and why not? Every mainstream storage supplier partnership increases the credibility of AWS. The endgame for Amazon is surely the storage of all enterprise data in its cloud, and NetApp is dancing with the enemy - yet dancing very well. Its endgame is NetApp's survival, and growth, as a continuing and relevant storage supplier to enterprises of all sizes.

Is there room in the storage market for the cloud storage suppliers and the mainstream storage array suppliers? Cloudy storage folk like Amazon, Google, Microsoft, Rackspace and others want to store data that's sitting in storage arrays right now and have no end to their ambition, which could take ten, fifteen or more years to fulfil. Every byte they store is a byte not stored on a mainstream array.

There are smaller, second-tier cloud service providers which do not have the ability to build their own storage as Amazon and Google do - PeakColo of Colorado, for example. They buy mainstream vendors' storage; PeakColo has petabytes of NetApp kit. These providers will exist under Amazon and Google's shadow.

All the mainstream storage suppliers have the same problem as NetApp. They face a total addressable market that is potentially - let's stress "potentially" - shrinking, because Amazon and Google-class cloud storage suppliers are taking more and more of it away from them. How do they maintain and grow their businesses in the face of Brontosaurus-sized behemoths like Amazon and Google invading their feeding grounds?

NetApp could, for example, turn its V-Series into a generic cloud storage gateway, as well as supplying kit to second tier CSPs. Partnering is one way to get to know your enemy and buy time to work out a strategy, and that strategy is crucial, dealing as it does with a threat that could annihilate your business. ®
