Understanding what's going on in storage arrays is like doing MAGIC

Software defined storage? It's a load of bollocks

Storagebod

As the announcements and acquisitions which fall into the realms of Software Defined Storage – or "storage", as I like to call it – continue to come, one starts to ponder how this is all going to work in the real world.

I think it is extremely important to remember that you are going to need hardware to run this software on. While current trends show this moving towards a commodity model, there are going to be subtle differences that need accounting for. And as we move down this track, there is going to be a real focus on understanding workloads and the impact that different infrastructure and infrastructure patterns have on them.

I am seeing more and more products that enable DAS to work as a shared-storage resource, removing the SAN from the infrastructure and reducing the complexity. I am going to argue that this does not necessarily remove complexity but it shifts it. In fact, it doesn't remove the SAN at all – it just changes it.

It is not uncommon now to see storage vendor presentations that show Shared-Nothing-Cluster architectures in some form or another; often these are software and hardware "packaged" solutions. Yet as end-users start to demand the ability to deploy on their own hardware, this brings a whole new world of unknown behaviours into play.

Once vendors relinquish control of the underlying infrastructure, the software is going to have to become a lot more intelligent, and the end-user implementation teams are going to have to start thinking more like the hardware teams in vendors.

For example, the East-West traffic models in your data-centre become even more important. You might find yourself implementing low-latency storage networks; your new SAN is no longer a North-South model but Server-Server (East-West). This is something that the virtualisation guys have been dealing with for some time.

Then there's understanding performance and failure domains: do you protect the local DAS with RAID or move to a distributed RAIN (Redundant Array of Independent Nodes) model? If you do something like aggregate the storage on your compute farm into one big pool, what is the impact if one node in the compute farm starts to come under load? Can it impact the performance of the whole pool?

Anyone who has worked with any kind of distributed storage model will tell you that a slow performing node – or a failing node – can have impacts which far exceed what you'd have believed possible. At times, it can feel like the good old days of token ring, where a single misconfigured interface can kill the performance for everyone. Forget about the impact of a duplicate IP address – that's nothing compared to this.
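The straggler effect described above can be illustrated with a toy model: if a read is striped across every node in a shared pool, the request only completes when the slowest node responds, so a single loaded node caps the whole pool. This is a hedged sketch with illustrative numbers, not a model of any real product.

```python
# Toy model: a striped read across a shared-nothing pool is gated by the
# slowest participating node, so one straggler drags down everyone.
# All latencies are illustrative.

def pool_read_time_ms(node_latencies_ms):
    """A read striped across all nodes finishes when the slowest stripe arrives."""
    return max(node_latencies_ms)

healthy  = [5.0] * 16               # 16 nodes, 5 ms each
degraded = [5.0] * 15 + [50.0]      # one node under load: 10x slower

print(pool_read_time_ms(healthy))   # -> 5.0
print(pool_read_time_ms(degraded))  # -> 50.0, the whole pool is 10x slower
```

The point of the sketch is that the pool's performance is not the average of its nodes but, for striped workloads, close to its worst member – exactly the token-ring feel described above.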

So what is the impact of the failure of a single compute/storage node? What about if multiple compute/storage nodes go down?

In the past, this was all handled by the storage hardware vendor and, pretty much invisibly, at implementation phase by the local storage team. But now you will need to make decisions about how data is protected and understand the impact of replication.
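Those protection decisions have arithmetic behind them that the hardware vendor used to do for you. A minimal sketch of the trade-off, assuming simple whole-object replication (the function name and figures are illustrative, not any vendor's method):

```python
# Hedged sketch: the basic trade-offs of choosing a replication factor
# yourself on commodity nodes. Whole-object replication assumed;
# erasure coding would change the numbers. Figures are illustrative.

def replication_tradeoff(raw_tb, nodes, replicas):
    usable = raw_tb / replicas            # capacity cost of keeping r copies
    tolerated_failures = replicas - 1     # copies you can lose before data loss
    rebuild_tb = raw_tb / nodes           # rough data to re-protect per lost node
    return usable, tolerated_failures, rebuild_tb

# 960 TB raw across 16 nodes, 3-way replication:
print(replication_tradeoff(960, 16, 3))   # -> (320.0, 2, 60.0)
```

Even this crude model makes the point: tripling the copies cuts usable capacity to a third, and every node failure means tens of terabytes crossing that new East-West network to re-protect the data.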

In theory, you want your data as close to the processing as you can get it, but data has weight and persistence; it will have to move. Or do you come up with a method that allows a dynamic infrastructure that identifies where data is located and spins/moves the compute to it?
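The "move the compute to the data" option can be sketched as a locality-aware placement decision: given a map of which nodes hold which dataset replicas, land the job on a node that already has a copy, and only fall back to a remote read when none is free. The names here (`replica_map`, `free_nodes`) are hypothetical, not a real scheduler's API.

```python
# Hypothetical sketch of moving compute to data rather than data to compute.
# replica_map and free_nodes are illustrative structures, not a real API.

def place_job(dataset, replica_map, free_nodes):
    """Prefer a free node that already holds a replica of the dataset."""
    local = [n for n in replica_map.get(dataset, []) if n in free_nodes]
    if local:
        return local[0], "local"              # compute lands next to the data
    return sorted(free_nodes)[0], "remote"    # data has to cross the network

replica_map = {"logs-2014-09": ["node3", "node7", "node11"]}
print(place_job("logs-2014-09", replica_map, {"node7", "node20"}))
# -> ('node7', 'local')
```

The interesting engineering is in the fallback branch: that is where data's weight and persistence bite, and where the dynamic infrastructure the paragraph above imagines has to earn its keep.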

The vendors are going to have to improve their instrumentation as well. Let me tell you from experience that understanding what is going on in such environments is deep magic. Also, the software's ability to cope with the differing capabilities and vagaries of a large-scale commodity infrastructure is going to have to be a lot more robust than it is today.

Yet I see a lot of activity from vendors, open-source and closed-source, and I see a lot of interest from the large storage consumers. This all points towards a large prize to be won. But I'm expecting to see a lot of people fall by the wayside.

It's an interesting time to be in storage... ®
