Original URL: https://www.theregister.com/2008/09/23/blade_server_standards/

Why blade servers still don't cut it, and how they might

The blade manifesto

By Timothy Prickett Morgan

Posted in Channel, 23rd September 2008 12:02 GMT

Sometimes, a good idea just doesn't take off. OK, this is information technology, not philosophy, so let me rephrase that more accurately.

Sometimes, ideas and habits that were once laudable have an immense inertia that prevents a new and perhaps better idea from building momentum in the market. Such is the case with the wonderful idea of blade servers. But there are times when a standard, however begrudgingly adopted by IT vendors, can overcome that inertia.

Every couple of years, the situation with blade servers boils my blood a little bit, like a squandered opportunity does for most of us. Here we are in 2008, eight years after commercial blade servers first came to market, and I find myself hoping that we are on the cusp of a new wave of innovation that will finally bring the standardization that will make blade architectures the norm, not still the exception, in the data centers and data closets of the world. Hope springs eternal, just like frustration.

What set my blood to warming again about blades recently was a set of projections from the analysts at Gartner, who released a report saying that blade servers were the fastest-growing part of the server space, but that the lack of standards and the rapid change in the underlying technology inside blade servers are limiting their adoption.

This is something I have been complaining about since day one in the blade server space - in fact, since before Compaq's "QuickBlade" and Hewlett-Packard's "Powerbar" blade servers even came to market. So have other analysts - including those at Gartner - and so have customers. And, because money talks in IT, the blame for the lack of standards can be placed squarely at the feet of end users, who, after surviving decades of vendor lock-in for operating systems and servers, should know better.

Non-standards

But, in the defense of end users, blade servers came out when the IT market was entering a recession after a big boom, and the data center loading, price/performance, and administrative issues IT departments were facing made us accept non-standard blade equipment rather than forcing vendors to produce better standards.

The same thing happened in the recessions of the late 1980s and early 1990s, which sparked a move from proprietary minis and mainframes to Unix machines with incompatible but standards-based operating systems. Common Unix APIs and functionality were better than no standards at all, and RISC iron was cheaper because of competition, so it was as good as it was going to get. Or, more precisely, it was as good as IT vendors were going to let it get until more customer pressure came to bear.

In the Gartner report, the analysts reminded IT shops of some projections the firm has made recently. Gartner reckons that blade servers represented about 10 per cent of server shipments in 2007, and between 2007 and 2012 the company expects blade shipments to grow at a compound annual growth rate of 19 per cent, to represent 20 per cent of total server shipments by 2012.
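To put those projections in perspective, here is a quick back-of-envelope check. The shares and the blade growth rate are Gartner's; the implied growth of the total server market is my own arithmetic, sketched in Python:

    # Back-of-envelope check on the Gartner projections quoted above.
    # The 2007 share, the 19 per cent CAGR, and the 2012 share come from the
    # report; the implied total-market growth rate is derived, not Gartner's.
    blade_share_2007 = 0.10   # blades as a share of 2007 server shipments
    blade_cagr = 0.19         # projected blade shipment CAGR, 2007 to 2012
    blade_share_2012 = 0.20   # projected blade share of 2012 server shipments
    years = 5

    blade_growth = (1 + blade_cagr) ** years                        # about 2.39x
    total_growth = blade_growth * blade_share_2007 / blade_share_2012
    total_cagr = total_growth ** (1 / years) - 1                    # about 3.6 per cent

    print("Blade shipments grow %.2fx over five years" % blade_growth)
    print("Implied total server shipment CAGR: %.1f per cent" % (total_cagr * 100))

In other words, if both projections hold, the overall server market only needs to grow at about 3.6 per cent a year for blades to double their share of it.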

This is not, as many predicted back at the dawn of the blade server era in 2000, the same kind of adoption rate that rack-mounted servers enjoyed. Rack servers pretty much took over the data centers of the world in the span of a few years in the late 1990s, thanks to standardized form factors and density, while towers basically persist within small businesses and as departmental machines within larger organizations.

Blades could have had a 50 per cent or higher share of the market years ago, provided there were standards for blade and chassis form factors, interconnectivity of peripherals like switches, and common and open APIs for blade management software. And that would have killed profits, so it didn't happen. Not one of the few remaining blade players - who are the brand name rack and tower server makers - wanted standardization to happen.

"We are not suggesting that IT organizations stay away from blades - blades do address many problems in the data center," explained Andrew Butler, a vice president and distinguished analyst (do you get extra pay for two titles?) at Gartner who put together the projections.

"What we are saying is that IT organizations adopting blades need to be prepared for further changes in this technology. Blade servers have been a rapidly changing technology, and we fully expect this to continue, particularly during the next five years."

Order of the Gartner

Gartner's report, entitled Blade Servers: The Five Year Outlook, offers a number of predictions for the near term and the longer term. Here they are:

A more radical approach

I don't think that these projections and predictions by Gartner are all that unlikely, or even remotely radical. But I don't think the standardization they imply is sufficient, and I am advocating a far more radical approach to future server designs - one that takes advantage of modular system components and of the manufacturing scale and commoditization that come from standardization, and that allows those components to be used in tower, rack, or blade servers alike.

I have outlined many of these ideas before, and I am weaving them all together, plus a few new ones, into a single set of goals for the server industry, goals that lean heavily on the blade approach but allow for some of the peripheral expansion that is necessary and which still drives rack and tower server sales.

One socket to rule them all: The processor socket needs to be standardized across different processor architectures. I can envision a single, standardized interconnection - think of it as taking all of the best elements of Advanced Micro Devices' HyperTransport, Intel's QuickPath Interconnect, and IBM's GX interconnect at the heart of its Power and mainframe systems.

There is no reason why a socket cannot be created that allows X64, Power, Sparc, and Itanium processors to plug into the same sockets and make use of a single standard interconnect. Instead of trying to standardize servers at the chip level, picking one architecture over another, such an approach would allow instruction set architectures to live on. Perhaps for longer than they might otherwise, in fact.
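To make the socket idea a bit more concrete, here is a toy model - entirely hypothetical, since no such specification exists - of what a common socket contract might look like. The point is that the standardized boundary is the interconnect, not the instruction set:

    # Toy model of a hypothetical common-socket contract. None of these fields
    # correspond to a real specification; the standardized boundary here is the
    # interconnect and its electricals, not the instruction set architecture.
    from dataclasses import dataclass

    @dataclass
    class SocketSpec:
        link_width_bits: int       # width of each point-to-point link
        link_speed_gtps: float     # gigatransfers per second per link
        coherency_protocol: str    # the one agreed cache-coherency protocol
        power_envelope_watts: int  # maximum power the socket must deliver

    # Any processor module that honors the contract drops into the same board,
    # whether the silicon behind it executes X64, Power, Sparc, or Itanium code.
    COMMON_SOCKET = SocketSpec(link_width_bits=16, link_speed_gtps=6.4,
                               coherency_protocol="directory-based",
                               power_envelope_watts=130)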

Modular motherboard designs: The genius of blade servers is that they take the elements of a rack of servers - servers, storage, networking switches, and such - and modularize them, shrink them, and connect them to each other over midplanes and internal wiring. It might be time to break the motherboard down into blade-like components and modularize them, too.

Imagine a system design that allowed chip sockets, chipsets, memory modules (meaning the actual memory slots), and peripheral slots to be plugged together to make what would have been a single motherboard in the past. Imagine further that these motherboards could be created out of modular elements that allowed CPU sockets, memory, and I/O slots to be scaled up independently of each other just by plugging in extra modules. Think of a motherboard as a 3D set of interconnected sub-blades instead of a single, 2D board with chips and slots mounted on it permanently and unchangingly.
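A crude way to picture the sub-blade idea in code - purely illustrative, since no such product exists - is a logical board that is nothing more than a bill of modules, with each resource scalable on its own:

    # Purely illustrative sketch of a "motherboard as a set of sub-blades".
    # The module kinds and capacities are invented for the example.
    from dataclasses import dataclass, field

    @dataclass
    class Module:
        kind: str       # "cpu_socket", "memory", or "io"
        capacity: int   # sockets, memory slots, or peripheral slots the module adds

    @dataclass
    class LogicalBoard:
        modules: list = field(default_factory=list)

        def add(self, module):
            # Scale one resource without touching the others.
            self.modules.append(module)

        def total(self, kind):
            return sum(m.capacity for m in self.modules if m.kind == kind)

    board = LogicalBoard()
    board.add(Module("cpu_socket", 2))
    board.add(Module("memory", 16))    # 16 memory slots
    board.add(Module("io", 4))         # 4 peripheral slots
    board.add(Module("memory", 16))    # need more memory? plug in another module
    print(board.total("memory"))       # 32 slots, CPU and I/O counts unchanged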

Blade and sub-blade standards: To make the volume economics work in favor of customers, I want blade server standards. And to be precise, I want innovation and standardization at the blade level and at the sub-blade level, if that idea could come to pass. I want a set of standards for commercial blades like those the telecommunications industry has had for a very long time now - standards it got because it has power over the vendors thanks to its purse strings and long buying cycles.

It is apparent to me that commercial vendors do not like the telco blade standards, and for good reasons. First, the form factors are not consistent with data center form factors, and second, the lack of standards means a vendor that sells a half-populated chassis to a customer can expect to sell the additional blades for that chassis as the customer needs more compute capacity, at essentially no sales cost. This account control is why server makers are in love with blades. But if they weren't so short-sighted concerning standards, the blade market might be five times as large already.

Common components for blades, racks, and towers: There is no reason why different server styles - blade, rack, and tower - could not be created from a variety of sub-blade components. One set of modular parts could make many different kinds of servers, with volume economics that span server form factors and vendors - just as processors, memory chips, and I/O peripherals plugging into PCI slots do today.

No more AC power inside the data center: The AC-to-DC conversions that go on inside the data center are idiotic. It is time for a set of DC standards for server gear, and AC should be the exception in the data center, not the rule. In tower servers and small blade chassis, obviously, AC wall power will prevail.
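The arithmetic behind that complaint is simple enough. Using round, illustrative efficiency figures - my assumptions, not measured numbers - each conversion stage shaves a slice off the power coming into the building:

    # Illustrative conversion-chain arithmetic with assumed round-number
    # efficiencies; real gear varies, but the compounding effect is the point.
    ups_double_conversion = 0.92   # AC to DC to AC in a double-conversion UPS
    server_power_supply = 0.80     # AC to DC inside each server
    dc_dc_regulators = 0.90        # DC to DC conversion on the board

    ac_chain = ups_double_conversion * server_power_supply * dc_dc_regulators
    print("Power reaching the chips, AC distribution: %.0f per cent" % (ac_chain * 100))

    # A DC distribution scheme skips the UPS inverter and the per-server
    # rectifier; assume one facility-level rectifier plus the same DC-DC stage.
    facility_rectifier = 0.95
    dc_chain = facility_rectifier * dc_dc_regulators
    print("Power reaching the chips, DC distribution: %.0f per cent" % (dc_chain * 100))

With those assumptions, roughly a third of the power drawn from the wall never reaches the chips under AC distribution, against about 15 per cent lost under DC distribution.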

Integrated refrigerant cooling for key components: I said this back in early 2005, and I will say it again: as much as possible of the heat that would otherwise dissipate into the air and have to be recaptured by data center air conditioning should be trapped at the component and moved out of the data center using water or other refrigerants linked directly to the chillers outside. Air cooling is just too inefficient, particularly at the compute densities in modern data centers.
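Rough physics makes the case for liquids. The constants below are standard room-temperature values; the comparison is mine:

    # Volumetric heat capacity: why moving heat with water beats moving it
    # with air. Densities and specific heats are standard textbook values.
    air_density = 1.2            # kg per cubic metre
    air_specific_heat = 1005     # joules per kg per kelvin
    water_density = 998          # kg per cubic metre
    water_specific_heat = 4186   # joules per kg per kelvin

    air_per_m3 = air_density * air_specific_heat        # about 1.2 kJ per m3 per K
    water_per_m3 = water_density * water_specific_heat  # about 4.2 MJ per m3 per K
    ratio = water_per_m3 / air_per_m3
    print("Water carries roughly %.0fx more heat per unit volume per degree than air" % ratio)

That ratio - on the order of 3,500 to one - is why a thin water loop to a chip can do the work of a hurricane of conditioned air.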

Yeah, I know. This is a lot of pie in the sky. But, you never get what you don't ask for in this world.

Copyright © 1996-2008 Guild Companies, Inc. All Rights Reserved. ®