
Why blade servers still don't cut it, and how they might

The blade manifesto

A more radical approach

I don't think that these projections and predictions by Gartner are all that unlikely, or even remotely radical. Nor do I think these standardizations go far enough, which is why I am advocating a far more radical approach to future server designs: one that takes advantage of modular system components and of the manufacturing scale and commoditization that come from standardization, and that allows those components to be used in tower, rack, or blade servers alike.

I have outlined many of these ideas before, and here I am weaving them all together, plus a few new ones, into a single set of goals for the server industry, goals that lean heavily on the blade approach but allow for some of the peripheral expansion that is still necessary and still drives rack and tower server sales.

One socket to rule them all: The processor socket needs to be standardized across different processor architectures. I can envision a single, standardized interconnect - think of it as taking the best elements of Advanced Micro Devices' HyperTransport, Intel's QuickPath Interconnect, and the GX interconnect at the heart of IBM's Power and mainframe systems, and rolling them into one specification.

There is no reason why a socket cannot be created that allows X64, Power, Sparc, and Itanium processors to all plug into the same sockets and make use of a single standard interconnect. Instead of trying to standardize servers at the chip level, picking one architecture over another, such an approach would allow instruction set architectures to live on. Perhaps for longer than they might otherwise, in fact.

Modular motherboard designs: The genius of blade servers is that they take the elements of a rack of servers - the servers themselves, storage, networking switches, and such - and modularize them, shrink them, and connect them to each other over midplanes and internal wiring. It might be time to break the motherboard down into blade components and modularize it, too.

Imagine a system design that allowed chip sockets, chipsets, memory modules (meaning the actual memory slots), and peripheral slots to be plugged together to make what would have been a single motherboard in the past. Imagine further that these motherboards could be created out of modular elements that allowed CPU sockets, memory, and I/O slots to be scaled up independently of each other simply by plugging in extra modules. Think of a motherboard as a 3D set of interconnected sub-blades instead of a single, 2D board with chips and slots mounted on it permanently and unchangeably.
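To make the sub-blade idea a bit more concrete, here is a purely illustrative sketch in Python of how such a composable board might be described. Every module name, socket count, and capacity figure below is my own invention, not any vendor's specification; the point is simply that compute, memory, and I/O each scale by plugging in another module, independently of one another.

```python
# Illustrative only: a toy model of a "logical motherboard" assembled from
# independent sub-blade modules. All names and numbers are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CpuModule:
    sockets: int          # standardized sockets on this sub-blade
    isa: str              # "x64", "Power", "Sparc", "Itanium", ...

@dataclass
class MemoryModule:
    dimm_slots: int
    gb_per_dimm: int

@dataclass
class IoModule:
    pcie_slots: int

@dataclass
class LogicalBoard:
    """What used to be one 2D motherboard, built from plug-in sub-blades."""
    cpu: List[CpuModule] = field(default_factory=list)
    mem: List[MemoryModule] = field(default_factory=list)
    io: List[IoModule] = field(default_factory=list)

    def add(self, module):
        # Each resource scales by plugging in another module, independently
        # of the others - no forklift replacement of the whole board.
        {CpuModule: self.cpu, MemoryModule: self.mem,
         IoModule: self.io}[type(module)].append(module)

    def summary(self) -> str:
        sockets = sum(m.sockets for m in self.cpu)
        memory_gb = sum(m.dimm_slots * m.gb_per_dimm for m in self.mem)
        slots = sum(m.pcie_slots for m in self.io)
        return f"{sockets} sockets, {memory_gb} GB max memory, {slots} I/O slots"

board = LogicalBoard()
board.add(CpuModule(sockets=2, isa="x64"))
board.add(MemoryModule(dimm_slots=8, gb_per_dimm=4))
board.add(IoModule(pcie_slots=4))
board.add(MemoryModule(dimm_slots=8, gb_per_dimm=4))   # memory scaled on its own
print(board.summary())   # -> "2 sockets, 64 GB max memory, 4 I/O slots"
```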

Blade and sub-blade standards: To make the volume economics work in favor of customers, I want blade server standards. To be precise, I want innovation and standardization at the blade level and, if that idea comes to pass, at the sub-blade level as well. I want a set of standards for commercial blades like the ones the telecommunications industry has had for a very long time now, standards it got because it holds power over vendors thanks to its purse strings and long buying cycles.

It is apparent to me that commercial server vendors do not like the telco blade standards, and for good reasons. First, the form factors are not consistent with data center form factors, and second, the lack of standards means a vendor that sells a half-populated chassis to a customer can expect to sell the additional blades for that chassis as the customer's computing needs grow, at essentially no sales cost. This account control is why server makers are in love with blades. But if they weren't so short-sighted about standards, the blade market might be five times as large already.

Common components for blades, racks, and towers: There is no reason why different server styles - blade, rack, and tower - could not be created from a variety of sub-blade components. One set of modular components would make many different kinds of servers, and the volume economics would span server form factors and vendors - just as processors, memory chips, and I/O peripherals plugging into PCI slots do today.

No more AC power inside the data center: The repeated AC-to-DC and DC-to-AC conversions that go on inside the data center are idiotic. It is time for a set of DC distribution standards for server gear, and AC should be the exception in the data center, not the rule. In tower servers and small blade chassis, obviously, AC wall power will prevail.
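To put a rough number on that waste, here is a back-of-the-envelope sketch. The per-stage efficiency figures are assumptions on my part - plausible for the gear of this era, not measurements - and the compounding effect is the point:

```python
# Back-of-the-envelope: compounded conversion losses on the way from the
# utility feed to the chips. Efficiency figures are assumed, not measured.
stages_ac = {
    "UPS (AC->DC->AC double conversion)": 0.90,
    "PDU / transformer":                  0.97,
    "Server power supply (AC->DC)":       0.85,
    "Voltage regulators (DC->DC)":        0.90,
}

stages_dc = {
    "Facility rectifier (AC->DC)":        0.95,
    "Voltage regulators (DC->DC)":        0.90,
}

def end_to_end(stages):
    eff = 1.0
    for e in stages.values():
        eff *= e
    return eff

ac = end_to_end(stages_ac)
dc = end_to_end(stages_dc)
print(f"Conventional AC chain: {ac:.0%} of input power reaches the chips")
print(f"DC distribution chain: {dc:.0%} of input power reaches the chips")
# With these assumed figures, roughly a third of the power in the AC chain
# becomes heat before it does any computing - heat the air conditioners
# then have to remove all over again.
```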

Integrated refrigerant cooling for key components: I said this back in early 2005, and I will say it again: as many as possible of the watts that would otherwise dissipate into the air as heat, and then have to be recaptured by data center air conditioning, should be trapped and moved out of the data center using water or other refrigerants plumbed directly into the chillers outside the data center. Air cooling is just too inefficient, particularly at the compute densities in modern data centers.
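A quick calculation shows why. The rack wattage and allowable temperature rise below are assumed values, but the densities and heat capacities are standard physical constants; water carries on the order of 3,500 times more heat per unit volume than air, so far less fluid has to move to carry the same load:

```python
# How much fluid does it take to carry 30 kW of rack heat away with a
# 10 C temperature rise? Rack power and delta-T are assumptions; the
# densities and heat capacities are standard physical constants.
heat_kw = 30.0        # assumed heat load of one dense rack, in kW
delta_t = 10.0        # assumed allowable coolant temperature rise, in C

# Volumetric heat capacity, kJ per cubic metre per degree C
air   = 1.2 * 1.005   # ~1.2 kg/m^3 density * ~1.005 kJ/(kg*C)
water = 1000.0 * 4.18 # ~1000 kg/m^3 density * ~4.18 kJ/(kg*C)

# Required volume flow in cubic metres per second: flow = P / (c_vol * dT)
air_flow   = heat_kw / (air * delta_t)
water_flow = heat_kw / (water * delta_t)

print(f"Air:   {air_flow:.2f} m^3/s  (~{air_flow * 2119:.0f} CFM)")
print(f"Water: {water_flow * 1000:.2f} litres/s")
print(f"Water carries ~{water / air:.0f}x more heat per unit volume")
```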

Yeah, I know. This is a lot of pie in the sky. But you never get what you don't ask for in this world.

