
Why do we need SANs?

Virtualised servers and DAS... ist gut


Comment Why do we need SANs any more, when virtualised application servers, virtualised storage controllers and virtualised storage can all co-exist in a single set of racks?

Storage area networks (SANs) came into being so that many separate physical servers could each access a central storage facility at block level. Each server saw its own LUN (logical unit number) chunk of storage as directly attached to it, even though it was actually accessed across a channel: the Fibre Channel fabric tying the SAN together.
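That block-level illusion is the whole trick: to the operating system a mapped LUN is just another block device. As a purely illustrative sketch, assuming the LUN has already been mapped by the host's Fibre Channel or iSCSI initiator and shows up as the hypothetical device /dev/sdb, reading its first block in Python looks exactly like reading a local disk:

# Illustrative sketch only: the device path is hypothetical and the LUN
# must already have been mapped by the host's FC or iSCSI initiator.
LUN_DEVICE = "/dev/sdb"   # hypothetical block device backed by a SAN LUN
BLOCK_SIZE = 4096         # read one 4 KiB block

with open(LUN_DEVICE, "rb") as lun:
    first_block = lun.read(BLOCK_SIZE)

print(f"Read {len(first_block)} bytes from {LUN_DEVICE} - no different from DAS")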

Nowadays our servers are vastly more powerful, with multiple processors, typically X86 ones, plugged into many sockets, each processor having many cores, and each core capable of running several threads. The processor engines are managed and allocated to applications by a hypervisor such as VMware's ESX. Each core runs an application inside an O/S wrapper such as Windows or Linux, and the application can be multi-threaded.

Blade systems such as those from HP can cram more than a hundred cores into a collection of 1U rack shelves. These servers need to communicate with client systems, with storage facilities, and with the wider world via networking.

The local client access is a given; Ethernet rules. Storage access has been provided by a Fibre Channel SAN, with smaller enterprises using a cheaper, less complex and less scalable iSCSI SAN, or perhaps a filer or two. Filers have been remarkably successful in providing VMware storage; witness NetApp's wonderful business results over the past few quarters.

We now have storage in the mid-range that swings both ways: unified storage providing file and block access. This looks as if it makes the virtualised storage choice more difficult, but in fact a reunification initiative could come about by using the storage array controller engines to run applications.

A single server complex running everything

Before buying 3PAR, HP had the intent of providing a common storage facility for block and file access by using commodity disk drive shelves and commodity Intel storage processing blade servers. These would run a storage personality such as the EVA and LeftHand Networks storage array operating systems, file access software, and also storage applications such as deduplication or replication.

This neat scheme was spoiled by the 3PAR buy, since 3PAR relies on an ASIC in its storage controllers. Perhaps HP will return to it, though, by having two classes of storage processors: commodity X86 blades on the one hand and a 3PAR ASIC blade on the other.

The point of this neat scheme is that the storage is basically servers with directly attached storage (DAS). Suppose … just suppose those servers ran not storage operating systems on the bare metal but virtual machines, held as VMDKs, running storage operating systems such as the LeftHand Virtual Storage Appliance. Then suppose we add more servers, 6- and 8-core processing engines, to the same server complex and have them running application software. Why, after all, have two separate X86 server complexes?

In other words we have a single set of servers, possibly a hundred or more processors, with some running applications such as databases, accounts payable and manufacturing requirements processing, and others running the storage controller functions needed by those applications, all mediated by a single ESX hypervisor, which provisions virtual machines (VMs) as needed by either the application load or the storage load.
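To make that concrete, here is a deliberately toy Python sketch of the provisioning decision such a complex would make. None of the names below belong to any real hypervisor API, and the loads and core counts are invented for illustration:

from dataclasses import dataclass

@dataclass
class CorePool:
    free_cores: int
    app_vms: int = 0
    storage_vms: int = 0

def provision(pool: CorePool, app_load: float, storage_load: float,
              cores_per_vm: int = 4) -> str:
    """Carve the next VM out of the shared core pool for whichever
    workload - application or storage controller - is busier."""
    if pool.free_cores < cores_per_vm:
        return "no spare cores: add blades to the complex"
    if app_load >= storage_load:
        pool.app_vms += 1
        role = "application VM (database, accounts payable, MRP...)"
    else:
        pool.storage_vms += 1
        role = "storage controller VM (e.g. a virtual storage appliance)"
    pool.free_cores -= cores_per_vm
    return f"provisioned a {role}; {pool.free_cores} cores left"

pool = CorePool(free_cores=128)   # a hundred-odd cores in one rack
print(provision(pool, app_load=0.9, storage_load=0.4))
print(provision(pool, app_load=0.3, storage_load=0.8))

The point is not the code but the single pool: one scheduler sees both kinds of demand and hands the same commodity cores to either.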

We need a communications system to tie the storage processors and the storage enclosures together: a highly scalable and fast link. Ethernet? Maybe, but InfiniBand, something like the RapidIO interconnect EMC uses, or even a SAS switching system or an externalised PCIe bus (think Virtensys), if they can scale, come to mind.

Ethernet LAN facilities connect local clients to this server+storage complex. Networking to the wider world? It's IP-based, and it could be done by having Ethernet switch software running in the ESX system, such as Cisco's Nexus 1000V plug-in for ESX.

DAS ist gut, ja?

Who could produce such a SAN-free system?

HP comes to mind straight away, as does IBM, and Oracle is surely in the frame. Dell must be another candidate. NetApp couldn't do this, being a classic stick-to-its-storage-knitting company.

Hitachi Data Systems can't, not with its separate NAS supply deal through the BlueArc partnership. However, it could if it agreed the vision and drove single-mindedly to realise it, using Hitachi servers and networking products integrated with its AMS and VSP storage arrays. It would take HDS, say, three to five years to produce the integrated app server/storage controller goods.

Could Fujitsu do this? Maybe, but it would take years and a lot of forceful central direction. NEC? No way. Acer? It isn't enterprise enough, not for now.

It makes your mouth water to think of it: a SAN-free server and storage complex running end-user, storage controller and network controller applications as virtual machines under a hypervisor, with, of course, an integrated management facility.

So, we just reinvented the mainframe. If this concept comes to pass, and it works and the pricing is right, then stand-alone, best-of-breed storage and networking suppliers will be out in the cold as far as selling to customers who adopt this reinvented mainframe approach.

Any such supplier that wants to sell to these customers had better get all the pieces it needs in-house. Perhaps it should buy Red Hat for a start and own its own hypervisor technology. Are you listening, Cisco, EMC, Juniper and NetApp? What do you think? ®
