Original URL: https://www.theregister.com/2009/03/27/ibm_hybrid_systems/

IBM turns back on server history

To give and to hybrid

By Timothy Prickett Morgan

Posted in Channel, 27th March 2009 22:40 GMT

As odd as this may seem, IBM is not thinking about servers any more. Well, not in the way you might think.

According to the top minds at Big Blue's Systems and Technology Group - which designs and sells its servers, processors, and storage - the future is not about making particular server architectures do jobs and fight for market share against other servers. That's so 1980s and 1990s. The future, it seems, will be about creating hybrid systems composed of different architectures, mixing multiple compute, storage, and networking elements together and tuning them for very specific jobs. Sometimes, I would guess, on the same processor complex, sometimes in loosely coupled systems.

IT vendors always have a theme, because they have to tell budget-time stories to CEOs and presidents who don't necessarily know a lot about computers other than how expensive and cranky they can be. (Actually, that's the system administrators, but that is a different story...)

For a long time, IBM banged on the On Demand drum, with a pretty tight focus on making IT more flexible, more like a utility in that you turn it on and off as you need it and only pay for what you use. These days, IBM is aiming at a much broader market than On Demand with something called Dynamic Infrastructure. Having pretty much worn out the On Demand song and dance and wanting to get its fingers into a whole lot more pies, IBM has made Dynamic Infrastructure about instrumenting and automating all kinds of infrastructure, not just computing.

As a consequence, Tom Bradicich, vice president of technology for the x64 server business within Systems and Technology Group at IBM, has to lead in with talk about how 2 billion people will be connected to the Internet by 2011, with trillions of objects - cars, trucks, tractors, roads, bridges, pipelines, and electric grids as well as people and their myriad devices - all linked in too. "The world might be getting smaller and flatter, but we believe that it has to get smarter, too," says Bradicich.

And it ain't exactly stupid right now for IBM to position itself to get a bigger piece of the government action, as it has most certainly been doing in the lead-up to and passage of the Obama administration's stimulus plan. But Dynamic Infrastructure is about more than IBM getting into more government budgets to help meter and automate various kinds of physical infrastructure with computing and networking technology. It is about wringing efficiencies out of all of these different infrastructure systems, to cut down on waste, because it is increasingly clear that we can't afford - either environmentally or economically - to waste anything any longer.

So what does this have to do with x64 servers? On the surface, not much. Not until you look at the scale of computing that IBM thinks is necessary to create this smart infrastructure world. Bradicich, who spearheaded the design of IBM's EXA family of chipsets for its high-end x86 and x64 servers for many years, says that commodity x86 and x64 systems have increased their performance by a factor of two every two years or so and that over the course of the next ten years, the normal way of doing things - shrinking chips, cranking clocks, adding cores, and adding features that used to be out on motherboards - will deliver a 30 times improvement in commodity system performance. That sounds like a lot, but apparently it isn't.

"We believe we will need a 100 to 1,000 times improvement in performance to solve problems, such as doing a full body CT scan in real-time, or fast rendering of movies, or modeling traffic patterns on a city scale, just to name a few," says Bradicich.

"And that means the server of the future is not a machine with just faster memory or better packaging. Integrating switches and other features of the network will not get us beyond the 2X performance improvement per year. I mean, it is possible to play Handel's Messiah with 100 accordions or 100 trumpets, but to really get the full effect, you need an orchestra. In our experience, it has never been wise to say that one size fits all."

Hybrid Model

To that end, IBM's top techies are working on hybrid computer systems that will employ a mix of different processor architectures and compute, storage, and networking elements, tuned together for specific jobs.

This sounds pretty vague, but when companies talk about the future, as IBM sometimes does, they don't want to give too much away. As a matter of fact, the IBM that I know doesn't want to give anything away.

One of the things that IBM is working hard at, according to Bradicich, is making virtualization on servers smarter. "Anyone can move a virtual machine," he says. "But imagine if the system had a coach that could tell you the best way to do it under adverse conditions. To use a football analogy, great quarterbacks are often great not just because they can throw the ball, but because they have great coaches that tell them where to throw the ball, and when." Hence, IBM is worrying about adding intelligence to workload management on these future systems.
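Bradicich doesn't say how such a coach would be built, but the idea maps naturally onto a placement policy: when a virtual machine has to move, score the candidate hosts against current conditions and pick the best fit rather than the first one that works. A minimal sketch of that idea in Python (the host and VM records below are hypothetical illustrations, not an IBM interface):

# Hypothetical sketch of a workload-placement "coach": rank candidate hosts
# for a migrating virtual machine by fit and current load.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu: float     # spare CPU capacity, in cores
    free_mem_gb: float  # spare memory, in GB
    load: float         # current utilisation, 0.0 to 1.0

@dataclass
class VM:
    cpu: float
    mem_gb: float

def best_host(vm, hosts):
    # Only hosts with enough headroom are candidates; among those,
    # prefer the least loaded one. Returns None if nothing fits.
    candidates = [h for h in hosts if h.free_cpu >= vm.cpu and h.free_mem_gb >= vm.mem_gb]
    return min(candidates, key=lambda h: h.load, default=None)

hosts = [Host("blade-a", 2.0, 8.0, 0.85), Host("blade-b", 4.0, 16.0, 0.40)]
print(best_host(VM(cpu=2.0, mem_gb=8.0), hosts))  # picks blade-b, the less loaded fit

The real work, presumably, is in the "adverse conditions" part - folding network congestion, power, and failure risk into that score rather than just CPU and memory headroom.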

Such hybrid systems have been successful in a number of cases. IBM has sold a hybrid Cell-Opteron blade cluster to Los Alamos National Lab that has in excess of 1 petaflops of performance, making it the fastest supercomputer in the world at the moment. Hoplon Infotainment, a Brazilian company, has deployed its Taikodom multiplayer game on a hybrid cluster marrying IBM mainframes with Cell blade servers.

The problem with hybrid systems, which will deploy many different types of components, is that you lose economies of scale even as you gain economies of scope. And that has a dramatic effect on the economics of the server, er, systems business. Getting that dramatic 100X to 1,000X performance improvement over the next decade may mean a slackening in the price/performance curve. But there may be no other option, if IBM is right. ®