IBM smacks rivals with 5.0GHz Power6 beast
Then pours water on them
The rest of the server world can play with their piddling 2-3GHz chips. IBM, meanwhile, is prepared to deal in the 5GHz realm.
The hardware maker has unveiled a Power6-based version of its highest-end Unix server - the Power 595. The box runs on 32 dual-core 5.0GHz Power6 processors - 64 cores in all - making it a true performance beast. This big box completes a protracted rollout of the Power6 chip across IBM's Unix server line.
Along with the big daddy, IBM revealed a new water-cooled version of the Power 575 server dubbed the Hydro-Cluster. In addition, it refreshed the existing midrange Power 570 server.
IBM's top Power executives showed off the fresh gear during a customer and press event here in San Francisco. They wheeled out three Power customers who were thrilled to be part of IBM's Unix experience. We guess that a disgruntled Power user or two could not be located on short notice to provide balance.
The Power 595 ships in a massive cabinet that looks just like that of its predecessors, except IBM has added a few green touches to the case. This green reflects the environmentally friendly nature of IBM's hulking metal tower, we're told.
The Power 595, available on May 6, relies on a series of four-socket "books" or boards. You can fill a system with between one and eight boards, with a choice of 4.2GHz and 5.0GHz chips. This monster can hold up to 4TB of DDR2 memory. You'll find the rest of the specifications, including IBM's various I/O drawer options, here.
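For the arithmetic-minded, here's a quick back-of-the-envelope sketch of what a maxed-out configuration adds up to. The book, socket and memory figures are from IBM's announcement; the even memory split per book is our assumption, and the script itself is purely illustrative:

```python
# Back-of-the-envelope sizing for a maxed-out Power 595, using the
# figures above: up to 8 four-socket "books" of dual-core Power6
# chips, and up to 4TB of DDR2 across the whole system.
BOOKS = 8
SOCKETS_PER_BOOK = 4
CORES_PER_CHIP = 2
TOTAL_MEMORY_GB = 4096

chips = BOOKS * SOCKETS_PER_BOOK           # 32 processors
cores = chips * CORES_PER_CHIP             # 64 cores at up to 5.0GHz
mem_per_book_gb = TOTAL_MEMORY_GB / BOOKS  # 512GB/book if spread evenly (assumed)

print(f"{chips} chips, {cores} cores, {mem_per_book_gb:.0f}GB per book")
```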
Usually, IBM will hit customers with a massive TPC benchmark score when it rolls out a new 595-class system - just to let HP know how much it cares. Apparently, the company is saving that gem for a later date, opting instead just to show how the Power 595 wallops HP's Itanium gear and Sun's SPARC systems on SAP and SPEC benchmarks. We're told that IBM's new system beats out the rivals by 2x to 3x. We thought it rather sporting of IBM to include Sun's gear in the benchmarks.
The Power 575 is a different type of high-end creature, with IBM characterizing the system as a supercomputing machine. As mentioned, IBM has layered water-filled coils over each of the boards in the 575, allowing for a denser design.
Customers can fit up to 14 2U boards in the huge 575 case, with 16 dual-core 4.7GHz chips per board. You can also outfit each board with up to 256GB of memory. The rest of the rather complex specifications are here.
According to IBM, the water cooling can reduce typical data center energy consumption by 40 per cent when compared to air-cooled 575s. In addition, the refreshed box offers up 5x the performance of older 575 systems. IBM has benchmarked a single 575 board at 600 GFlops.
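That 600 GFlops figure squares with simple peak-rate arithmetic, if you assume - our reading, not an IBM statement - that each Power6 core retires four floating-point operations per cycle via two fused multiply-add pipes:

```python
# Peak floating-point rate for one Power 575 board. The four
# flops per core per cycle (two FMA pipes, two ops each) is our
# assumption; the board, core and clock figures are from the article.
chips_per_board = 16
cores_per_chip = 2
clock_ghz = 4.7
flops_per_core_per_cycle = 4  # assumed: 2 FMA units x 2 ops each

gflops = chips_per_board * cores_per_chip * clock_ghz * flops_per_core_per_cycle
print(f"peak ~{gflops:.0f} GFlops per board")  # ~602, close to IBM's 600
```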
The system will ship in May, running AIX or Linux.
The refreshed 570 still runs on 3.5-4.7GHz versions of Power6, just as it has since last year. Now, however, customers can tap a "hot node" feature that lets them add nodes to an already running box for extra horsepower and storage. IBM has shipped 8,000 of the systems to date. ®
Re: IBM's POWER Platform
David Vasta said "You can run a mix of OSes on it and before too long you will be able to run Intel based Linux "
Maybe you can't run Linux/x86 yet - but you can certainly run Linux *programs* - see http://www-03.ibm.com/systems/power/software/virtualization/editions/lx86/
And before the old fogies like me chip in - yes, I know this is strangely similar to the trick DEC did with WinNT on the Alpha.
Getting back to the P6 kit - what's the big deal over the water cooling? AFAIK it's only the '575 that's water-cooled, although you can add a radiator door to IBM racks. Okay, given my limited experience with overclocking a PC, you still can't be cavalier about water+volts, but then again it's deionized water, so it's not water-leak=instant-death either.
Not sure about the greenness of this - okay, you get "better" cooling than air (otherwise no overclockers would bother with H2O) and you get a side product of warm/hot water (swimming pool, anyone?!). On the other hand, you can definitely have a lot fewer fans = better reliability (fewer components), and each fan uses/wastes power itself. I'm also guessing that water cooling makes it possible to pack these hot-running systems more densely - saving a little on floor space.
Got to say - I'd love to see how many virtualized environments a "full house" p595/p6 could support. (sheesh, I sound like a total nerd!)
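To take a rough stab at my own question: PowerVM's micro-partitioning lets you give a partition as little as a tenth of a core, so a "full house" 595 could in principle carve out hundreds of environments, subject to whatever per-system LPAR cap IBM imposes. The cap below is a placeholder assumption, not a quoted spec:

```python
# Rough upper bound on micro-partitions for a "full house" 595.
# The 0.1-core minimum entitlement is PowerVM's documented floor for
# micro-partitioning; the per-system LPAR cap is an assumption.
TOTAL_CORES = 8 * 4 * 2     # 8 books x 4 sockets x 2 cores = 64
MIN_ENTITLEMENT = 0.1       # cores per micro-partition
ASSUMED_LPAR_CAP = 254      # illustrative per-system cap, not a quoted spec

by_entitlement = round(TOTAL_CORES / MIN_ENTITLEMENT)  # 640
print(min(by_entitlement, ASSUMED_LPAR_CAP), "partitions, tops")
```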
Apologies if I sound like an IBMer - not my intention, just so nice to see someone continuing to push the boundaries...
(Paris because we're talking about hot bods here)
Will we have an iCooler next?
The SunMD is a standard design with water cooling built in at the factory; every SunMD is the same. No need for any local plumbers to change anything inside when a customer receives it - just plug it into the power and external cooling pipes. When you buy a Sun Modular Datacenter (aka Blackbox), you do not have to change anything. Put a water-cooled server in an ordinary computer room, on the other hand, and, as explained many times, you take on lots of extra work and ongoing complexity.
As far as I know, all servers in a SunMD are air-cooled; no servers are water-cooled. But I am sure that if someone paid Sun enough money, we would be able to connect a water-cooled P6 in the SunMD to the pipes. I would advise against this - guess why: it adds complexity.
SunMD is water cooling outside of the computer chassis/enclosure. The water-cooled Power6 server is water cooling inside the chassis/enclosure.
My point is about the added complexity that water brings when you have to put it into an existing datacenter.
Now, if people want to make hot chips or more elegant designs, then we technologists have a challenge: produce a coolant that is safer to mix with electrical devices, and those nice little towers that we put on CPUs can become a selling point - let's call it the iCooler. I remember that some of the IBM, Hitachi or Amdahl mainframes had those elegant circular tower heatsinks.
Well, it was pretty but complex. Now, maybe many overclockers like to cool their PCs at home with these types of things. The modders always like new gadgets and spending time tweaking their systems. Commercial datacenters do not.
NB: I built my latest PC with the criterion of least power usage. It is based on an AMD BE-2540 dual-core CPU; in dollars per unit of performance per watt, it was the most efficient. It may not be the fastest, but I was being Mr Sensible. No water anywhere near it.
In commercial datacenters, I do not think we can overclock, mod and customize our servers with lights, water-coolant towers etc. But maybe the first person to do so could turn the datacenter into a work of art and a light show.
Me no iModder.
Water cooling and freedom of speech
Water cooling is much more efficient than air cooling; this is a fact and has been demonstrated here and in many other places. It is also much simpler from a general point of view, as air cooling implies the need to clean (filter), move and chill enormous volumes of air. It adds overall complexity, as the very local "simplification" in rack design implies open systems, which need to be placed in "white rooms" to avoid contamination by airborne particles. The overall structural cost is necessarily much higher than for a closed water circuit.
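A quick sanity check on that efficiency claim: water carries vastly more heat per unit volume than air. Here's a rough flow-rate comparison for hauling heat out of a rack, using textbook fluid properties; the 100kW load and 10K temperature rise are made-up round numbers for illustration:

```python
# Volume flow needed to remove HEAT_W watts at a DELTA_K temperature
# rise: Q = rho * V_dot * c_p * dT  =>  V_dot = Q / (rho * c_p * dT).
HEAT_W = 100_000   # assumed rack heat load, watts
DELTA_K = 10       # assumed coolant temperature rise, kelvin

# (name, density kg/m^3, specific heat J/(kg*K)) - textbook values
for name, rho, c_p in [("air", 1.2, 1005.0), ("water", 1000.0, 4186.0)]:
    v_dot = HEAT_W / (rho * c_p * DELTA_K)   # m^3/s
    print(f"{name}: {v_dot:.4f} m^3/s")
# air needs ~8.3 m^3/s; water ~0.0024 m^3/s - roughly 3,500x less volume
```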
Now, I'm not saying that we shouldn't develop and favor non-heating (and power-saving) chips. But even those could benefit from well-thought-out water cooling. I'm especially thinking about desktops and laptops operating in non-controlled atmospheres (servers too, but who is stupid enough to keep their servers outside a white room? Oh... sorry) - all this dust accumulating everywhere is a real problem. Watts and water DO mix much better than dust and air cooling. There is no reason why a well-designed water-cooling system would be a problem. The issue is "macro-technical", and quite easy to fix (besides, Polish plumbers come cheap these days). "Fire and powder don't mix", yet the "mixture" is widely used, from fireworks to space rocket propulsion.

In the lab, we're happily mixing pressurized gases, water, heavy watts, very delicate electronics, very toxic compounds and radioactive isotopes, all in a place that would make a bachelor's kitchen look tidy. All clear, sir: no safety incident or leak reported in the past few years. We did have problems, though: the computer monitoring the whole shebang froze in the middle of an important experiment because the air-cooled processor overheated (dust accumulation, in spite of the filter). And we had to change the air-cooled power supply a couple of times (dust accumulation, in spite of the filter). Gimme water cooling, please.
Besides, for applications that DO need heavy single-threaded processing power (yes, there are such things - my heavier computational needs are not easily split into parallel processes, though basic science might well be an exception), faster single-thread chips are a great thing. And air-cooling them would be an astronomical waste of energy.
To sum up my thoughts: low-power chips are best, when they do the trick. But water-cooling them would still be even better. Reducing the issue to "water and electricity don't mix" is a silly attempt at mixing basic "home safety" advice with highly technical issues.
As for freedom of speech, and my mentioning the US: freedom of speech is respected there indeed - till you start talking or writing about al-Qaeda, or about filesharing, or about kicking your prof's butt, even if you don't disclose the results of your lucubrations. Which is the kind of restriction that defines the LACK of free speech (see the last few court decisions about overall harmless dudes emailing bad poetry about Bin Laden, or the kid grounded for an undisclosed phantasmatic "hit list"). I could have mentioned the UK too - goth teenage girls are really threatening these days! As for France, mother of the "Déclaration des Droits de l'Homme", I guess that holding a sign reading "Niko, salaud, le peuple aura ta peau" ("Niko, you bastard, the people will have your hide") would land you directly behind bars, with the associated beating. Poor, poor western world.
Geek icon just because.