Why do we need SANs?

Virtualised servers and DAS... ist gut


Comment Why do we need SANs any more, when virtualised app servers, storage controller servers and virtualised storage can all co-exist in a single set of racks?

Storage area networks (SANs) came into being so that many separate physical servers could each access a central storage facility at block level. Each server saw its own LUN (logical unit number) chunk of storage as being directly connected to it, even though it was actually accessed through a channel: the Fibre Channel fabric tying the SAN together.
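What "block level" means here is that the server addresses the LUN as raw, fixed-size blocks at byte offsets, with no filesystem in between. A minimal sketch of that access pattern, using an ordinary temporary file to stand in for the LUN (a real initiator would open a block device such as /dev/sdb presented over the fabric; the path and sizes here are illustrative assumptions):

```python
import os
import tempfile

BLOCK_SIZE = 512  # classic disk sector size

# Stand-in for a LUN: a plain file accessed by byte offset.
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "r+b") as lun:
    lun.truncate(8 * BLOCK_SIZE)      # an eight-block LUN
    lun.seek(3 * BLOCK_SIZE)          # address block 3 directly...
    lun.write(b"A" * BLOCK_SIZE)      # ...and write it as a whole block
    lun.seek(3 * BLOCK_SIZE)
    data = lun.read(BLOCK_SIZE)       # read the same block back

print(data[:4])  # b'AAAA'
os.remove(path)
```

The point is that the server's view is just seek-and-read on what looks like a local disk; the SAN's job is to make a remote array answer those block requests transparently.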

Nowadays our servers are vastly more powerful, with multiple processors, typically x86 ones, occupying many sockets; each processor has many cores, and each core is capable of running several threads. The processor engines are managed and allocated to applications by a hypervisor such as VMware's ESX. Each core runs an application inside an O/S wrapper such as Windows or Linux, and the application itself can be multi-threaded.

Blade systems such as those from HP can cram more than a hundred cores into a collection of 1U rack shelves. These servers need to communicate: to client systems, to storage facilities, and to the wider world via networking.

The local client access is a given; Ethernet rules. The storage access has been provided by a Fibre Channel SAN, with smaller enterprises using a cheaper, less complex and less scalable iSCSI SAN, or perhaps a filer or two. Filers have been remarkably successful in providing VMware storage; witness NetApp's wonderful business results over the past few quarters.

We now have storage in the mid-range that swings both ways; unified storage providing file and block access. This looks as if it makes the virtualised storage choice more difficult, but in fact a reunification initiative could come about by using the storage array controller engines to run applications.

A single server complex running everything

Before buying 3PAR, HP had the intent of providing a common storage facility for block and file access by using commodity disk drive shelves and commodity Intel storage processing blade servers. These would run a storage personality such as the EVA and LeftHand Networks storage array operating systems, file access, and also storage applications such as deduplication or replication.

This neat scheme was spoiled by 3PAR since it relies on an ASIC in its storage controllers. Perhaps HP will return to it though by having two classes of storage processors; commodity X86 blades on the one hand and a 3PAR ASIC blade on the other.

The point of this neat scheme is that the storage is basically servers with directly attached storage (DAS). Suppose … just suppose those servers ran, not storage operating systems natively, but virtual machines running storage operating systems, such as the LeftHand Virtual Storage Appliance. Then suppose we add more servers, 6- and 8-core processing engines, to the same server complex and have them running application software. Why, after all, have two separate x86 server complexes?

In other words, we have a single set of servers, possibly a hundred or more processors, with some running applications such as databases, accounts payable and manufacturing requirements processing, and others running the storage controller functions needed by those applications, all mediated by a single ESX hypervisor, which provisions virtual machines (VMs) as needed by either the application load or the storage load.
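One way to picture this single-pool idea is as a toy model, not ESX's actual scheduler: one pool of cores from which the hypervisor carves out VMs for either application or storage work. The class, role names and core counts below are illustrative assumptions:

```python
# Toy sketch of a shared core pool serving both workload types.
# This is not VMware's API; names and numbers are invented for illustration.

class CorePool:
    def __init__(self, cores):
        self.free = cores

    def provision(self, role, cores_needed):
        """Return a VM record if enough cores remain, else None."""
        if cores_needed > self.free:
            return None
        self.free -= cores_needed
        return {"role": role, "cores": cores_needed}

pool = CorePool(cores=96)  # e.g. a chassis of twelve 8-core blades

db_vm      = pool.provision("application", 16)  # database server VM
storage_vm = pool.provision("storage", 8)       # virtual storage appliance VM

print(pool.free)  # 72 cores left for whichever load grows next
```

The attraction is exactly this fungibility: when storage demand spikes, the same spare cores that would have run another app VM can run another storage controller VM instead.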

We need a communications system to tie the storage processors and the storage enclosures together: a highly scalable and fast link. Ethernet? Maybe, but Infiniband, RapidIO (the interconnect EMC uses in its VMAX Virtual Matrix), or even a SAS switching system or an externalised PCIe bus (think Virtensys) also come to mind, if they can scale.

Ethernet LAN facilities connect local clients to this server+storage complex. Networking to the wider world? It's IP-based, and it could be done by having Ethernet switch software running in the ESX system, such as Cisco's Nexus 1000V virtual switch.

DAS ist gut, ja?

Who could produce such a SAN-free system?

HP comes to mind straightaway. IBM comes to mind as well and Oracle is surely in the frame. Dell must be another candidate. NetApp couldn't do this, being a classic stick-to-its-storage-knitting company.

Hitachi Data Systems can't, not with its separate NAS supply deal through the BlueArc partnership. However, it could if it agreed the vision and drove single-mindedly to realise it, using Hitachi servers and networking products integrated with its AMS and VSP storage arrays. It would take HDS, say, three to five years to produce the integrated app server/storage controller goods.

Could Fujitsu do this? Maybe but it would take years and a lot of forceful central direction. NEC? No way. Acer? It isn't enterprise enough, not for now.

It makes your mouth water to think of this: a SAN-free server and storage complex running end-user, storage controller and network controller applications as virtual machines under a hypervisor, with, of course, an integrated management facility.

So, we just reinvented the mainframe. If this concept comes to pass, and it works and the pricing is right, then stand-alone, best-of-breed storage and networking suppliers will be out in the cold as far as selling to customers who adopt this reinvented mainframe approach.

Any such supplier that wants to sell to such customers had better get all the pieces it needs in-house. Perhaps it should buy Red Hat for a start and own its own hypervisor technology. Are you listening, Cisco, EMC, Juniper and NetApp? What do you think? ®
