IBM turns its back on server history
To have and to hybrid
To that end, IBM's top techies are working on hybrid computer systems that will employ a mix of the following:
- General purpose systems: More or less akin to the standard x64 machines we have today
- Domain-specific application accelerators: Built and optimized for very specific jobs. Think of the economic modeling that drives financial trading systems, where interest rates can change 7 to 10 times per second, yet models can't predict the effects of those changes fast enough to decide what to do while they are still changing
- Compute-intensive acceleration: Think of all those vector math co-processors inside the Power6 or Cell processors, and add some steroids
- High-speed, network traffic optimization: Allows hybrid components to talk to each other at high speed and to interface with the outside world
This sounds pretty vague, but when companies talk about the future, as IBM sometimes does, they don't want to give too much away. As a matter of fact, the IBM that I know doesn't want to give anything away.
One of the things that IBM is working hard at, according to Bradicich, is making virtualization on servers smarter. "Anyone can move a virtual machine," he says. "But imagine if the system had a coach that could tell you the best way to do it under adverse conditions. To use a football analogy, great quarterbacks are often great not just because they can throw the ball, but because they have great coaches that tell them where to throw the ball, and when." Hence, IBM is worrying about adding intelligence to workload management on these future systems.
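IBM hasn't said how such a "coach" would work, but the idea can be sketched as a placement scorer: rank candidate hosts for a migrating VM by the headroom left after the move, and penalize hosts that are already stressed. Everything here is invented for illustration - the `Host` fields, the weights, and the penalty are assumptions, not IBM's design.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu: float      # spare CPU capacity, 0.0-1.0
    free_mem_gb: float   # spare memory in GB
    net_load: float      # current network utilization, 0.0-1.0

def best_target(hosts, vm_cpu, vm_mem_gb):
    """Pick the host with the most headroom after placing the VM,
    penalizing hosts already busy on the network ("adverse conditions")."""
    candidates = [h for h in hosts
                  if h.free_cpu >= vm_cpu and h.free_mem_gb >= vm_mem_gb]
    if not candidates:
        return None  # no host can take the VM right now
    def score(h):
        # Arbitrary illustrative weighting: CPU headroom plus scaled
        # memory headroom, shrunk by how loaded the host's network is.
        headroom = (h.free_cpu - vm_cpu) + (h.free_mem_gb - vm_mem_gb) / 64.0
        return headroom * (1.0 - h.net_load)
    return max(candidates, key=score)
```

With this scoring, a host with slightly less raw capacity but a quiet network can beat a bigger host that is saturated - which is the kind of situational judgment a plain "move the VM anywhere it fits" scheduler lacks.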
Such hybrid systems have been successful in a number of cases. IBM has sold a hybrid Cell-Opteron blade cluster to Los Alamos National Lab that has in excess of 1 petaflops of performance, making it the fastest supercomputer in the world at the moment. Hoplon Infotainment, a Brazilian company, has deployed its Taikodom multiplayer game on a hybrid cluster marrying IBM mainframes with Cell blade servers.
The problem with hybrid systems, which deploy many different types of components, is that you lose economies of scale even as you gain economies of scope. And that has a dramatic effect on the economics of the server, er, systems business. Getting that dramatic 100X to 1,000X performance improvement over the next decade may come with a slackening in the price/performance curve. But there may be no other option, if IBM is right. ®