Rackable stays horizontal with x64 servers
Now with more density
Rackable Systems might be a niche player in the server racket, but the company's server engineering has allowed it to stay in business since 1999 and still, in many ways, set the pace for density in the data center. Today, the company revved its 2U rack servers, dubbed the C2005.
Unlike the commercial blade server and chassis designs from Hewlett-Packard, IBM, Dell, and a few other tier-one server makers that have only nominal market share in blades (you know who you are, Sun Microsystems, Fujitsu-Siemens, Hitachi, and NEC), Rackable's servers mount horizontally in racks that have servers in both the back and the front of the rack.
Each Rackable machine is half as deep as a standard rack server. Rackable drives server density back-to-front instead of by packing lots of skinny servers vertically in a blade chassis and then stacking chassis on chassis. Either approach - half-depth rack or blade - requires plenty of engineering to cram the features of a standard two-socket server into what amounts to half the space or less.
The neat bit about Rackable's designs is that using half-depth rack servers in both the back and the front of a rack creates a kind of chimney in the middle of the rack, which allows cold air to be pulled in from the data center aisles and exhausted through the center of the rack, in a manner that does not create hot and cold spots in the data center. (Yes, data centers have what can be called weather.)
This is a very clever, and devilishly simple, design concept. The wonder is that more companies don't make rack servers like this. It all comes down to volume economics and the profit margins that come from sticking with full-depth rack motherboards (which is cheaper than doing engineering) or by creating custom blade boards that fit into standard racks. Every server company makes its choices and the market decides.
With the C2005 rack servers, Rackable is making the top and bottom of the 2U rack server independently configurable, with four different disk storage options for the top of the server and two different options for the bottom, for eight unique possible configurations. On the top section, customers can choose to have four 3.5-inch disks, eight 2.5-inch disks, a mix of four 2.5-inch disks and two 3.5-inch disks, or two 3.5-inch disks with space on the right-hand side for five low-profile PCI slots.
If customers don't need the expansion slots, they can put a DVD drive and an internal 3.5-inch drive in that space at the bottom of the server case. They can also put one 3.5-inch or two 2.5-inch disks behind the service processor's LCD display on the front of the server, which folds out to reveal the drives. Machines without the five extra PCI slots have one PCI slot on a riser board coming off the motherboard. All told, the C2005 supports up to ten 2.5-inch drives and up to five 3.5-inch SAS or SATA-II drives.
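The eight configurations fall straight out of the independent top and bottom sections. A quick sketch of the combination count, using my own shorthand for the options rather than Rackable's part names:

```python
from itertools import product

# Hypothetical shorthand for the four top-section and two
# bottom-section options described above -- not official names.
top_options = [
    "4x 3.5in disks",
    "8x 2.5in disks",
    "4x 2.5in + 2x 3.5in disks",
    "2x 3.5in disks + 5 low-profile PCI slots",
]
bottom_options = [
    "DVD drive + internal 3.5in disk",
    "disks behind fold-out LCD panel",
]

# Every top choice pairs with every bottom choice.
configs = list(product(top_options, bottom_options))
print(len(configs))  # 4 * 2 = 8 unique configurations
for top, bottom in configs:
    print(f"top: {top:45s} bottom: {bottom}")
```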
Like other current Rackable machines, the C2005 supports SSDs from Intel: the 32 GB and 64 GB enterprise-class drives (the X25-E in the Intel catalog) for high-IOPS, write-heavy environments, as well as the 80 GB and 160 GB drives Intel has put out for low-write environments (the X25-M drives).
I think the market needs both approaches
and both approaches can be implemented in good or bad ways. It really comes down to budgets, time to deliver working systems, and how well integrated an organisation's server and LAN/SAN teams are.
For my company, the ability to pre-fill racks with relatively cheap empty blade chassis, power them up, run the very few LAN/FC cables required back to the central switches, pre-configure the switches, and slam in blades as they're needed/delivered outweighs the 'cable-as-you-go' approach that I believe Rackable kit generally uses. That's largely because there's a lot of latency in my company between the server and LAN/SAN teams; it isn't a problem at smaller or more integrated companies.
Also, and I'm happy to be wrong here, I'm pretty sure you can get more blades into the same space than Rackable manages (HP c-Class = 160 servers/1,920 cores in a 50U rack, versus Rackable's 100 servers/1,200 cores).
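Taking the commenter's figures at face value (they may be out of date, and are not vendor-verified), the per-U arithmetic works out like this:

```python
# Density figures quoted in the comment above -- illustrative only.
racks = {
    "HP c-Class blades":   {"servers": 160, "cores": 1920, "rack_u": 50},
    "Rackable half-depth": {"servers": 100, "cores": 1200, "rack_u": 50},
}

for name, r in racks.items():
    servers_per_u = r["servers"] / r["rack_u"]
    cores_per_server = r["cores"] / r["servers"]
    cores_per_u = r["cores"] / r["rack_u"]
    print(f"{name}: {servers_per_u:.1f} servers/U, "
          f"{cores_per_server:.0f} cores/server, "
          f"{cores_per_u:.1f} cores/U")
```

On these numbers both designs assume 12-core servers, so the gap is purely in servers per U: 3.2 for the blades against 2.0 for the half-depth racks.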
Quote: The wonder is that more companies don't make rack servers like this.
Nope: http://www.icp-epia.co.uk/index.php?act=viewCat&catId=91 fits any mini-ITX motherboard. It doesn't have Rackable's fancy features, but it allows similar densities when used with any of the mini-ITX boards out there, and it's dirt cheap. You can fit these back-to-back in any 19" rack.
Rackable still impressive.
"....and a few other tier-one server makers that have only nominal market share in blades (you know who you are, Sun Microsystems, Fujitsu-Siemens, Hitachi, and NEC)...." Oh dear, the Sunshiners are going to take you off their Christmas card lists for that!
".....The fact is, if IBM and HP blade customers want to add a reasonable amount of storage to their blades, they have to rip out about half the blades in the boxes and buy storage modules that link to the blades....." Hmmm, but if you have a lot of blades in one datacenter then you're also likely to have a SAN, and HP and IBM (and Dell) are all better placed with storage offerings and SAN switch interconnects than Rackable. Even without the SAN, HP can offer storage blades and even NAS appliance blades inside the chassis, which with Rackable are either not an option or the customer has to build after delivery.
Still, Rackable make very good kit, and it's because they have innovated new designs that meet evolving customer requirements that they have not just survived but prospered (someone may want to point this out to a certain Mr Schwartz....).