IBM goes Lego with Ice Cube storage, server prototypes
IBM's Almaden Research Center has built a prototype of a storage and server packaging scheme that may portend how we will plug together IT components in the not-too-distant future, writes Timothy Prickett Morgan.
The prototype was developed under the codename Ice Cube at the Almaden facility, where IBM does research related primarily to storage; to the outside world it is called Collective Intelligent Bricks. While this is a boring name, the prototype has two aspects that may make it very appealing as a commercial product: it cuts costs a lot, and it looks cool.
The latter factor touches on the little-discussed machismo culture that afflicts the IT business (think of all that obsessing over performance), and it may be important. It could differentiate future IBM storage and server products from other vendors', much as the iMac design breathed new life into Apple, or as the Cray supercomputer designs of the late 1970s and 1980s (you know the ottomans I'm talking about) made those machines readily identifiable and appealing to anyone peering into the glass house.
Live and Let Die
The Ice Cube design is based on a few simple ideas.
- Storage and server modules should completely eliminate the spaghetti of wiring used today to build rack-mounted servers and to link them to their storage, to each other, and to their clients.
- The units should be compact and able to be stacked in two dimensions, perhaps covering a wall like a rack of servers today, or in three dimensions in a cube formation, which allows for a lot of hardware to be packed into a very small space.
- These computer and storage modules should be so inexpensive that if one of them dies, no one cares; the network has the data on them backed up and just ignores the dead unit and keeps on doing the collective work that clients require.
This last item is important, both because it would be a real pain to remove a storage node from the center of a giant cube, and because 50 per cent of service calls on IT equipment are made after a prior service call failed to resolve a problem, according to Moidin Mohiuddin, senior manager of storage systems at the IBM Almaden facility.
The central tenet of Ice Cube is "live and let die." IBM wants to make the Ice Cube nodes self-healing and self-administering because, according to Mohiuddin, less than one-third of the total cost of a storage system comes from its hardware and software; more than two-thirds of the cost of a machine is devoted to management and administration, which basically equates to keeping cranky machines from keeling over and dying. If you don't care that a server or storage array dies, then you don't have to pay to revive it. You replace it at your convenience.
Tales from the Water Cooler
The Ice Cube design consists of a cubical computing component, about the size and shape of a car battery, with a special circular iSCSI and Ethernet interconnection port offset from the center of each face. The iSCSI ports link the Ice Cube units to each other, and the Ethernet ports connect the outside cubes to clients.
Inside each cube in the current prototype, which was set up as a storage array but could just as easily be used as a compute engine, a server with a Pentium-class processor is equipped with main memory that is used as disk cache, a dozen hard drives, and an eight-port Gigabit switch.
The prototype Ice Cube uses 80GB ATA disks, the kind used in desktop machines. While SCSI disks are faster and have a longer mean time between failures, they are also three times as expensive. That gives each brick a capacity of 960GB, and a 3x3x3 cube of 27 bricks a combined capacity of 25.9TB. With 120GB ATA drives, the same cube could hold 39TB of data.
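The capacity figures are easy to check. A minimal sketch of the arithmetic, using the drive and brick counts reported above (the function name is mine, not IBM's):

```python
# Capacity arithmetic for the Ice Cube prototype, per the article's figures:
# 12 drives per brick, a 3x3x3 cube of 27 bricks, decimal gigabytes.
DRIVES_PER_BRICK = 12

def cube_capacity_tb(drive_gb, bricks_per_side=3):
    bricks = bricks_per_side ** 3                     # 3 x 3 x 3 = 27 bricks
    total_gb = bricks * DRIVES_PER_BRICK * drive_gb
    return total_gb / 1000                            # decimal terabytes

print(f"{cube_capacity_tb(80):.1f}TB")    # 25.9TB with 80GB drives
print(f"{cube_capacity_tb(120):.1f}TB")   # 38.9TB, roughly the 39TB cited
```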
This is a lot of disk capacity to pack in one area, and while Ice Cube may look cool, it most definitely is not cool in terms of temperature. This is why IBM has had to return to water cooling (rather than the air cooling of current rack servers and storage arrays) with the Ice Cube designs.
The Ice Cube prototype has skinny cooling towers that rise from a platform up through the stacked cube components, each of which has one corner notched so that the faces of the cubes can still meet snugly. Heat is not the only issue that will determine how densely the Ice Cubes can be stacked - weight is also a factor. These will be very, very heavy machines when stacked in cubes with eight or ten cubes per side.
IBM is working to improve the way the Ice Cubes link to each other to form their collectives, and has also designed a "super RAID" algorithm to protect data, an important factor given that the ATA drives can and will fail in the hot environment that the Ice Cube creates. Inside each cube, the 12 disk drives use a standard RAID 5 algorithm to protect data in the event that a drive fails. (One or more drives could be left as spares in case that happens.)
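The per-brick cost of that RAID 5 protection can be sketched as follows. This is an illustration under standard RAID 5 assumptions, not IBM's published configuration; the function and its spare count are hypothetical:

```python
# Usable per-brick capacity under RAID 5: one drive's worth of capacity
# goes to parity, and any hot spares sit idle until a drive fails.
def raid5_usable_gb(drives=12, drive_gb=80, spares=0):
    data_drives = drives - spares - 1   # minus parity equivalent, minus spares
    return data_drives * drive_gb

print(raid5_usable_gb())            # 880GB usable of 960GB raw, no spare
print(raid5_usable_gb(spares=1))    # 800GB usable with one hot spare
```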
An added layer of data protection is created by mirroring data across multiple nodes. IBM has tested the Ice Cube running anywhere from one to four copies of the data on the primary nodes, but Mohiuddin says the Almaden researchers are working on new algorithms that may allow as little as 20 per cent redundancy in the Ice Cube complex while still providing adequate data protection.
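The gap between whole-copy mirroring and that 20 per cent target is easy to quantify. A rough sketch, taking the 80GB-drive cube's 25,920GB raw capacity from the article; the low-redundancy scheme is modeled abstractly here, since IBM has not published the algorithm:

```python
# Usable capacity of a 3x3x3 cube (25,920GB raw) under two protection schemes.
RAW_GB = 25_920

def usable_mirrored(total_copies):
    """Every block stored total_copies times: usable space divides accordingly."""
    return RAW_GB / total_copies

def usable_coded(redundancy=0.20):
    """A hypothetical scheme needing only `redundancy` extra capacity."""
    return RAW_GB / (1 + redundancy)

print(usable_mirrored(2))       # 12960.0 GB usable under two-way mirroring
print(round(usable_coded()))    # 21600 GB usable at 20% redundancy
```

Halving (or quartering) usable space with mirrors is what makes the 20 per cent redundancy work attractive: the same raw hardware would hold roughly two-thirds more user data than even two-way mirroring allows.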
IBM plans to develop compute nodes based on the Ice Cube design, and compute and storage nodes would obviously be intermixed in a single array. According to Mohiuddin, Gigabit Ethernet is not fast enough for practical use, so the researchers are looking at 10 Gigabit Ethernet, InfiniBand, and other interconnection schemes to reduce the latencies between cubes.
These latencies will determine the scalability of a single cube, so speed is important, particularly as failing components are left to die and requests for information formerly stored on them or requests for computing are routed to other, presumably more distant cubes. Mohiuddin also stressed that the Ice Cube project was an early prototype, and that the Almaden team has not even been approached by the IBM product people yet to see if it can be commercialized.