Blade servers 101
The pluses and minuses
How are blade servers different from their rack-mounted counterparts?
The blade server trend started about ten years ago when RLX launched its system of servers built into a chassis that slotted into standard 19-inch racks.
The idea is that you can install a blade server or any other type of device that would fit into a server chassis, such as a management blade, and it connects to the network and other services automatically.
Blade servers consist of a large motherboard with a chunky connector at the back and few components on board besides CPU and memory. You might find a USB port or two for connecting a keyboard and mouse, although blade servers, like any others, are more likely to be managed over the network using a KVM system.
Designed to be easily interchangeable, blades plug into the backplane, which is part of the chassis and is usually a passive bus connecting the server to the I/O subsystems at the back of the chassis.
The chassis provides all common services such as cooling, power and connectivity, usually provided by replaceable modules so you can tailor the chassis to the type of work the servers are doing.
Sharing services in this way makes each server more power efficient. They are also easy to manage, with a single pane of glass.
That said, each blade might include one or two hard disks to boot from, although it makes more sense to have blades boot from a common image on the SAN: the more kit that is specific to one blade, the less advantage you gain from the concept.
You are also generating extra heat and increasing power consumption and complexity. Even so, you might want some local storage, depending on the application, and maybe a DVD drive for software installation purposes.
A key advantage of blade servers is that they are very dense: you get a lot of computing power out of the floor space they occupy.
[Image: Acer blade chassis]
Look, no wires
And because several servers – up to 16 in HP's c-Class BladeSystem chassis – can be fitted into one enclosure, cabling is reduced dramatically. Just one cable set can handle the I/O functionality where previously there would have been 16, although in practice it is not quite that simple because of the need for redundant components and additional I/O.
Another advantage of blades is that they are usually hot-swappable: if one fails, you pull it out and replace it with another.
In a fully managed, automated data centre, the system is able to configure a new blade so that it is logically identical to the old one, with the same MAC and IP addresses, operating system and applications.
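The mechanism can be sketched in code. The idea, in the spirit of tools such as HP Virtual Connect, is that the server's network and storage identity belongs to a profile, not to the hardware, so the profile can simply be reassigned to a spare bay. All names and values below are hypothetical illustrations, not any vendor's actual API:

```python
# Illustrative sketch of profile-based blade replacement. Real chassis
# managers expose this through their own proprietary interfaces; the
# classes, bays and addresses here are made up for the example.
from dataclasses import dataclass

@dataclass
class ServerProfile:
    """Identity that travels with the workload, not the blade."""
    mac_address: str      # presented to the blade's NIC
    ip_address: str
    wwn: str              # Fibre Channel identity for SAN boot
    boot_image: str       # SAN LUN or PXE target holding OS and applications

def fail_over(profile: ServerProfile, chassis: dict,
              failed_bay: int, spare_bay: int) -> None:
    # Detach the profile from the dead blade and apply it to the spare.
    # Because MAC, IP, WWN and boot image all live in the profile, the
    # spare boots up logically identical to the blade it replaces.
    chassis[failed_bay] = None
    chassis[spare_bay] = profile

web01 = ServerProfile("02:16:0a:00:00:01", "10.0.0.11",
                      "50:06:0b:00:00:c2:62:00", "san:/boot/web01")
chassis = {1: web01, 2: None}   # bay 2 holds a cold spare
fail_over(web01, chassis, failed_bay=1, spare_bay=2)
print(chassis[2].ip_address)    # the spare now answers as 10.0.0.11
```

The point of the sketch is that nothing on the failed blade needs to be copied anywhere: the spare inherits the whole identity the moment the profile is applied.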
Almost every removable component in a chassis can be replaced while it is in operation. Hot swapping means greater reliability: if one system breaks, there is another ready to take over.
Data centre admins can replace the failed part at their leisure rather than under the pressure of knowing that a service has just ceased and end-users are cursing at their desks.
Because you're worth it
When first conceived, blades were seen as good for serving web pages in dense web server farms but their function has expanded so that they can now perform almost any server task.
That said, most blades are probably still running a single application, which suits their interchangeability, although the trend towards consolidation through virtualisation looks set to change that.
Blades offer the advantages of manageability, ease of deployment and space efficiency. You pay extra for that convenience, but for many the price is well worth it.
Some of the advantages claimed for blades are also their drawbacks. One of the biggest is that once you have bought a blade chassis, every blade you add must come from that vendor.
Effectively, an enterprise is buying a complete computing system from a single source, with a complete lack of choice. The vendor has you locked in.
It is also hard to get out. There is no standard blade server chassis, nor does it seem likely there will ever be one, as vendors have no incentive to produce it.
That means one vendor's products won't fit into those of another, which flies in the face of the trend towards standardisation and commoditisation that has done so much to lower the cost of technology over the past 30 years.
So even though the individual blade servers look affordable next to a rack-mounted server, when you add the cost of the chassis and common services the price per server will be higher than that of the equivalent standalone system – and the user has no other options.
That is especially so because blades make economic sense only if you keep the chassis full, spreading the cost of the chassis and other components across the largest possible number of blades.
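The economics above come down to simple amortisation: the fixed chassis cost is divided across however many blades are fitted. A quick back-of-envelope sketch makes the break-even point visible. The prices below are placeholder assumptions for illustration, not vendor quotes:

```python
# Why a blade chassis only pays off when it is kept full.
# All prices are made-up placeholders, not real list prices.
CHASSIS_COST = 5000.0      # enclosure, power modules, interconnects
BLADE_COST = 1500.0        # one blade server
RACK_SERVER_COST = 2000.0  # comparable standalone rack-mounted server

def cost_per_server(blades_installed: int) -> float:
    """Chassis cost spread across the blades actually fitted."""
    return BLADE_COST + CHASSIS_COST / blades_installed

for n in (4, 8, 16):
    print(n, cost_per_server(n))
# With these placeholder numbers, a quarter-full chassis works out at
# 2750.0 per server -- dearer than the standalone box -- while a full
# 16-blade chassis comes down to 1812.5.
```

The shape of the curve, rather than the exact figures, is the point: a part-filled chassis can easily cost more per server than standalone kit, which is why the vendor lock-in described above bites hardest on half-empty enclosures.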
Blades are highly dense systems which save expensive floor space but in practice this can be a big disadvantage – perhaps the biggest.
What usually limits the number of blades you can slot into a rack is not how many you can get in a chassis or how many chassis will fit in a rack but the amount of power the blades will draw and the amount of heat they will generate.
Too hot to handle
Pack them in above a certain density – usually expressed in kW per rack – and the data centre's cooling systems may not be able to cope, or the power draw may exceed what the local utility can supply.
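The kW/rack arithmetic is straightforward to sketch. The wattages, chassis counts and cooling budget below are illustrative assumptions, not figures from any particular data centre:

```python
# Rough sketch of the kW-per-rack limit. All figures are assumptions
# chosen to illustrate the arithmetic, not real specifications.
BLADE_WATTS = 350           # assumed draw per blade under load
BLADES_PER_CHASSIS = 16
CHASSIS_PER_RACK = 4        # what physically fits in the rack

def rack_draw_kw(chassis_count: int) -> float:
    """Total power drawn by a rack with this many full chassis."""
    return chassis_count * BLADES_PER_CHASSIS * BLADE_WATTS / 1000.0

COOLING_LIMIT_KW = 15.0     # assumed per-rack cooling capacity

print(rack_draw_kw(CHASSIS_PER_RACK))   # 22.4 kW: over the cooling budget
# So the rack can only be populated up to the cooling limit:
usable = int(COOLING_LIMIT_KW * 1000 // (BLADES_PER_CHASSIS * BLADE_WATTS))
print(usable)                           # 2 chassis, leaving the rack half empty
```

Under these assumed numbers the rack is power-limited to two of its four chassis slots, which is exactly the effect the article describes: the density you paid for cannot actually be used.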
Enclosing lots of high-powered servers in a box makes the correct functioning of the chassis cooling system particularly critical. It can also mean additional costs if you need to install chillers or pay the electricity supplier to install extra power lines.
Some companies have even gone back to using individual rack-mounted servers because the cost of new chillers and the space they take up negate the advantages of blades.
Blades don't fit every application, especially those that demand large amounts of CPU and memory. Such tasks are often transaction-heavy or driven by virtualisation, which consolidates multiple servers into a single box.
Blades typically don't have the space to house the additional hardware that such applications demand. And transaction-heavy applications are proving slow to move into virtual machines from high-powered systems, which many enterprise IT departments still see as the best choice to host them.
So blade servers are not an unalloyed blessing. They have advantages and they have their place within the enterprise IT system.
However, buyers need to be aware that the costs may be higher than they thought at first, and they need to be sure that they have taken all the variables into account before signing that cheque. ®
I had already worked out the general details from reading the Reg for nearly 10 years, but seeing as I never have to deal with exactly this area of IT (I'm in software), I really appreciate that El Reg takes the time to define and illustrate a rather common kind of occult technology. I assume it's not exhaustive, nor even entirely accurate, but who cares! I am definitely better off than "none the wiser" after reading this article. Another please!
Cluster two blades: if one fails, a spare can be automatically deployed and brought online with no service loss.
Deployed 2 full racks of HP P class blades ages ago. Single biggest issue was heat.
Learned many lessons from that installation, including the most important one - hammer out details before you deploy (something the management still have trouble with).
Blades are great for utility computing - we've gotten away from one-blade-one-app deployments, we now manage them just like any other system, add apps to utilize all the resources ... and you can put some serious resources on both HP's and IBM's newer blades. One territory we're now looking at is using blades + virtualization to deploy smaller apps. This is producing some very nice utilization numbers and fairly good ROI.
With tools like cfengine and RH kvm and/or VMware esxi, Altiris for winders etc we can roll out substantial application installations in days, given that the hardware was rolled out with solid planning behind it.
I would not suggest blades for every computing environment, but in large enough enterprises with sufficient standardization of deployment requirements they can reduce turnaround and complexity, resulting in improved ROI. It DOES need solid planning and architecture behind it though.