Why do we need SANs?

Virtualised servers and DAS... ist gut

Comment Why do we need SANs any more, when virtualised application servers, virtualised storage controllers and virtualised storage can all co-exist in a single set of racks?

Storage area networks (SANs) came into being so that many separate physical servers could each access a central storage facility at block level. Each server saw its own LUN (logical unit number) chunk of storage as directly attached, even though it was actually reached across the Fibre Channel fabric tying the SAN together.
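
To the server, in other words, a SAN LUN was indistinguishable from a local disk. A minimal sketch of that illusion in Python, assuming a Linux host where the initiator has surfaced a LUN as /dev/sdb (a hypothetical device name):

```python
# Sketch: a SAN LUN presented over Fibre Channel or iSCSI appears to the
# host as an ordinary block device. The device path below is hypothetical;
# on a real host it depends on how the fabric presents the LUN.
import os

LUN_DEVICE = "/dev/sdb"   # hypothetical: a LUN surfaced by the FC/iSCSI initiator
BLOCK_SIZE = 512          # classic sector size; many arrays now use 4096

def read_block(device: str, lba: int) -> bytes:
    """Read one logical block by LBA, exactly as if the disk were local."""
    with open(device, "rb") as disk:
        disk.seek(lba * BLOCK_SIZE)
        return disk.read(BLOCK_SIZE)

if __name__ == "__main__":
    if os.path.exists(LUN_DEVICE):
        print(read_block(LUN_DEVICE, 0)[:16].hex())  # first 16 bytes of LBA 0
    else:
        print(f"{LUN_DEVICE} not present on this host")
```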

Nowadays our servers are vastly more powerful, with multiple processors, typically x86 ones, sitting in multiple sockets, each processor having many cores, and each core capable of running several threads. These processor engines are managed and allocated to applications by a hypervisor such as VMware's ESX. Each core runs an application inside an OS wrapper such as Windows or Linux, and the application itself can be multi-threaded.

Blade systems such as those from HP can cram more than a hundred cores into a collection of 1U rack shelves. These servers need to communicate: with client systems, with storage facilities, and with the wider world via networking.
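
The arithmetic behind that claim is simple multiplication. A back-of-envelope sketch, with figures that are illustrative assumptions rather than any particular HP SKU:

```python
# Back-of-envelope thread capacity of a blade enclosure. All figures are
# illustrative assumptions, not a specific vendor configuration.
blades_per_enclosure = 16   # half-height blades in one enclosure
sockets_per_blade = 2
cores_per_socket = 6        # six-core x86 parts were mid-range at the time
threads_per_core = 2        # e.g. Intel Hyper-Threading

cores = blades_per_enclosure * sockets_per_blade * cores_per_socket
threads = cores * threads_per_core
print(f"{cores} cores, {threads} hardware threads per enclosure")
# -> 192 cores, 384 hardware threads per enclosure
```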

The local client access is a given: Ethernet rules. Storage access has been provided by a Fibre Channel SAN, with smaller enterprises using a cheaper, less complex and less scalable iSCSI SAN, or perhaps a filer or two. Filers have been remarkably successful in providing VMware storage; witness NetApp's wonderful business results over the past few quarters.

We now have storage in the mid-range that swings both ways: unified storage providing both file and block access. That looks as if it makes the virtualised storage choice more difficult, but in fact a reunification could come about by using the storage array controller engines to run applications.

A single server complex running everything

Before buying 3PAR, HP intended to provide a common storage facility for block and file access using commodity disk drive shelves and commodity Intel storage processing blades. These would run a storage personality, such as the EVA or LeftHand Networks storage array operating system, plus file access software and storage applications such as deduplication or replication.

This neat scheme was spoiled by 3PAR, whose storage controllers rely on a custom ASIC. Perhaps HP will return to it, though, by having two classes of storage processor: commodity x86 blades on the one hand and a 3PAR ASIC blade on the other.

The point of this neat scheme is that the storage is basically servers with directly attached storage (DAS). Suppose … just suppose those servers ran not storage operating systems on bare metal but virtual machines running storage operating systems, such as the LeftHand Virtual Storage Appliance. Then suppose we added more servers, with 6- and 8-core processing engines, to the same server complex and had them run application software. Why, after all, have two separate x86 server complexes?

In other words, we would have a single set of servers, possibly a hundred or more processors, some running applications such as databases, accounts payable and manufacturing requirements processing, and others running the storage controller functions needed by those applications, all mediated by a single ESX hypervisor that provisions virtual machines (VMs) as the application load or the storage load demands.
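
To make the idea concrete, here is a toy sketch of that provisioning logic; the VM names, roles and core counts are hypothetical, and a real hypervisor scheduler is vastly more sophisticated:

```python
# Sketch: one hypervisor-managed core pool serving both application VMs and
# storage-controller VMs. Names, roles and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    role: str    # "app" or "storage"
    cores: int

class CorePool:
    def __init__(self, total_cores: int):
        self.free = total_cores
        self.vms: list[VM] = []

    def provision(self, name: str, role: str, cores: int) -> bool:
        """Carve cores out of the shared pool for either workload type."""
        if cores > self.free:
            return False
        self.free -= cores
        self.vms.append(VM(name, role, cores))
        return True

pool = CorePool(total_cores=128)              # one server complex, one pool
pool.provision("oracle-db", "app", 16)        # end-user application VM
pool.provision("lefthand-vsa", "storage", 8)  # storage controller as a VM
pool.provision("mrp", "app", 12)
print(f"{pool.free} cores free of 128")       # -> 92 cores free of 128
```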

We need a communications system to tie the storage processors and the storage enclosures together: a highly scalable and fast link. Ethernet? Maybe, but InfiniBand, RapidIO, a SAS switching system, or even an externalised PCIe bus (think Virtensys) come to mind, if they can scale.
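
Whether Ethernet suffices is, at bottom, a bandwidth sum. A back-of-envelope sketch, every figure an illustrative assumption:

```python
# Back-of-envelope: aggregate storage traffic vs. fabric capacity.
# Every figure here is an illustrative assumption.
storage_vms = 8
gbps_per_storage_vm = 10        # sustained block traffic per controller VM

links = 4
gbps_per_link = 10              # 10GbE; swap in a faster fabric to compare

demand = storage_vms * gbps_per_storage_vm
capacity = links * gbps_per_link
print(f"demand {demand} Gbit/s vs capacity {capacity} Gbit/s "
      f"-> {'fits' if demand <= capacity else 'needs a faster fabric'}")
# -> demand 80 Gbit/s vs capacity 40 Gbit/s -> needs a faster fabric
```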

Ethernet LAN facilities connect local clients to this server-plus-storage complex. Networking to the wider world? It's IP-based, and it could be handled by Ethernet switch software running inside the ESX system, such as Cisco's Nexus 1000V virtual switch.
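
At its heart such a virtual switch is a MAC-learning forwarding table in software. A toy sketch of the principle; this is not Nexus code, just the textbook algorithm:

```python
# Toy MAC-learning switch: the core of any software Ethernet switch,
# physical or virtual. Real vSwitches add VLANs, QoS and offload.
class LearningSwitch:
    def __init__(self):
        self.table = {}  # MAC address -> port number

    def frame_in(self, src_mac, dst_mac, in_port):
        """Return the list of ports a frame should be sent out of."""
        self.table[src_mac] = in_port          # learn where the sender lives
        if dst_mac in self.table:
            return [self.table[dst_mac]]       # known destination: one port
        # unknown destination: flood to every learned port bar the ingress
        return sorted(set(self.table.values()) - {in_port})

sw = LearningSwitch()
print(sw.frame_in("aa:01", "bb:02", in_port=1))  # [] - nothing learned yet
print(sw.frame_in("bb:02", "aa:01", in_port=2))  # [1] - aa:01 learned on port 1
print(sw.frame_in("aa:01", "bb:02", in_port=1))  # [2]
```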

DAS ist gut, ja?

Who could produce such a SAN-free system?

HP comes to mind straightaway. IBM comes to mind as well and Oracle is surely in the frame. Dell must be another candidate. NetApp couldn't do this, being a classic stick-to-its-storage-knitting company.

Hitachi Data Systems can't, not with its separate NAS supply deal through the BlueArc partnership. It could, however, if it agreed the vision and drove single-mindedly to realise it, using Hitachi servers and networking products integrated with its AMS and VSP storage arrays. It would take HDS, say, three to five years to produce the integrated app server/storage controller goods.

Could Fujitsu do this? Maybe, but it would take years and a lot of forceful central direction. NEC? No way. Acer? It isn't enterprise enough, not for now.

It makes your mouth water to think of it: a SAN-free server and storage complex running end-user, storage controller and network controller applications as VMs under a hypervisor, with, of course, an integrated management facility.

So, we just reinvented the mainframe. If this concept comes to pass, and it works and the pricing is right, then stand-alone, best-of-breed storage and networking suppliers will be out in the cold as far as selling to customers who adopt this reinvented mainframe approach.

Any such supplier that wants to sell to such customers had better get all the pieces it needs in-house. Perhaps it should buy Red Hat for a start and own its own hypervisor technology. Are you listening Cisco, EMC, Juniper and NetApp? What do you think? ®
