IBM turns back on server history

To give and to hybrid

As odd as this may seem, IBM is not thinking about servers any more. Well, not in the way you might think.

According to the top minds at Big Blue's Systems and Technology Group - which designs and sells its servers, processors, and storage - the future is not about making particular server architectures do jobs and fight for market share against other servers. That's so 1980s and 1990s. The future, it seems, will be about creating hybrid systems composed of different architectures, mixing multiple compute, storage, and networking elements together and tuning them for very specific jobs. Sometimes, I would guess, on the same processor complex, sometimes in loosely coupled systems.

IT vendors always have a theme, because they have to tell budget-time stories to CEOs and presidents who don't necessarily know a lot about computers other than how expensive and cranky they can be. (Actually, that's the system administrators, but that is a different story...)

For a long time, IBM banged on the On Demand drum, with a pretty tight focus on making IT more flexible, more like a utility in that you turn it on and off as you need it and only pay for what you use. These days, IBM is aiming at a much broader market than On Demand with something called Dynamic Infrastructure. Having pretty much worn out the On Demand song and dance and wanting to get its fingers into a whole lot more pies, IBM's Dynamic Infrastructure is about instrumenting and automating all kinds of infrastructure, not just computing.

As a consequence, Tom Bradicich, vice president of technology for the x64 server business within the Systems and Technology Group at IBM, has to lead in with talk about how 2 billion people will be connected to the Internet by 2011, with trillions of objects - cars, trucks, tractors, roads, bridges, pipelines, and electric grids as well as people and their myriad devices - all linked in too. "The world might be getting smaller and flatter, but we believe that it has to get smarter, too," says Bradicich.

And it ain't exactly stupid right now for IBM to position itself to get a bigger piece of the government action, as it has most certainly been doing in the lead-up to and passage of the Obama administration's stimulus plan. But Dynamic Infrastructure is about more than IBM getting into more government budgets to help meter and automate various kinds of physical infrastructure with computing and networking technology. It is about wringing efficiencies out of all of these different infrastructure systems, to cut down on waste, because it is increasingly clear that we can't afford - either environmentally or economically - to waste anything any longer.

So what does this have to do with x64 servers? On the surface, not much. Not until you look at the scale of computing that IBM thinks is necessary to create this smart infrastructure world. Bradicich, who spearheaded the design of IBM's EXA family of chipsets for its high-end x86 and x64 servers for many years, says that commodity x86 and x64 systems have doubled their performance every two years or so. Over the course of the next ten years, the normal way of doing things - shrinking chips, cranking clocks, adding cores, and pulling onto the processor features that used to be out on motherboards - will therefore deliver a 30-fold improvement in commodity system performance. That sounds like a lot, but apparently it isn't.
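For what it's worth, that 30-fold figure is just compounded doubling: performance that doubles every two years doubles five times in a decade, or 2^5 = 32. Here is a quick back-of-the-envelope sketch; the doubling period, horizon, and the 100x to 1,000x targets quoted below come from Bradicich's figures, while the code itself is purely illustrative arithmetic:

```python
# Compounded performance growth: one doubling every `period_years`,
# repeated over `horizon_years`.
def compounded_improvement(period_years: float, horizon_years: float) -> float:
    return 2.0 ** (horizon_years / period_years)

business_as_usual = compounded_improvement(2, 10)
print(business_as_usual)        # 32.0 -- roughly Bradicich's "30 times" figure

# The gap to the 100x to 1,000x improvement he cites below:
print(100 / business_as_usual)    # ~3.1x short at the low end
print(1000 / business_as_usual)   # ~31x short at the high end
```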

"We believe we will need a 100 to 1,000 times improvement in performance to solve problems, such as doing a full body CT scan in real-time, or fast rendering of movies, or modeling traffic patterns on a city scale, just to name a few," says Bradicich.

"And that means the server of the future is not a machine with just faster memory or better packaging. Integrating switches and other features of the network will not get us beyond the 2X performance improvement per year. I mean, it is possible to play Handel's Messiah with 100 accordions or 100 trumpets, but to really get the full effect, you need an orchestra. In our experience, it has never been wise to say that one size fits all."
