IBM turns back on server history

To give and to hybrid

As odd as this may seem, IBM is not thinking about servers any more. Well, not in the way you might think.

According to the top minds at Big Blue's Systems and Technology Group - which designs and sells its servers, processors, and storage - the future is not about making particular server architectures do jobs and fight for market share against other servers. That's so 1980s and 1990s. The future, it seems, will be about creating hybrid systems composed of different architectures, mixing multiple compute, storage, and networking elements together and tuning them for very specific jobs. Sometimes, I would guess, on the same processor complex, sometimes in loosely coupled systems.

IT vendors always have a theme, because they have to tell budget-time stories to CEOs and presidents who don't necessarily know a lot about computers other than how expensive and cranky they can be. (Actually, that's the system administrators, but that is a different story...)

For a long time, IBM banged on the On Demand drum, with a pretty tight focus on making IT more flexible, more like a utility in that you turn it on and off as you need it and only pay for what you use. These days, IBM is aiming at a far broader market than On Demand with something called Dynamic Infrastructure. Having pretty much worn out the On Demand song and dance and wanting to get its fingers into a whole lot more pies, IBM's Dynamic Infrastructure is about instrumenting and automating all kinds of infrastructure, not just computing.

As a consequence, Tom Bradicich, vice president of technology for the x64 server business within Systems and Technology Group at IBM, has to lead in with talk about how 2 billion people will be connected to the Internet by 2011 with trillions of objects - cars, trucks, tractors, roads, bridges, pipelines, and electric grids as well as people and their myriad devices - all linked in too. "The world might be getting smaller and flatter, but we believe that it has to get smarter, too," says Bradicich.

And it ain't exactly stupid right now for IBM to position itself to get a bigger piece of the government action, as it has most certainly been doing in the lead-up to and the passage of the Obama administration's stimulus plan. But Dynamic Infrastructure is about more than IBM getting into more government budgets to help meter and automate various kinds of physical infrastructure with computing and networking technology. It is about wringing efficiencies out of all these different infrastructure systems, to cut down on waste, because it is increasingly clear that we can't afford - either environmentally or economically - to waste anything any longer.

So what does this have to do with x64 servers? On the surface, not much. Not until you look at the scale of computing that IBM thinks is necessary to create this smart infrastructure world. Bradicich, who spearheaded the design of IBM's EXA family of chipsets for its high-end x86 and x64 servers for many years, says that commodity x86 and x64 systems have doubled their performance every two years or so. Over the course of the next ten years, the normal way of doing things - shrinking chips, cranking clocks, adding cores, and pulling onto the processor features that used to live out on motherboards - will deliver roughly a 30 times improvement in commodity system performance. That sounds like a lot, but apparently it isn't.

"We believe we will need a 100 to 1,000 times improvement in performance to solve problems, such as doing a full body CT scan in real-time, or fast rendering of movies, or modeling traffic patterns on a city scale, just to name a few," says Bradicich.

"And that means the server of the future is not a machine with just faster memory or better packaging. Integrating switches and other features of the network will not get us beyond the 2X performance improvement per year. I mean, it is possible to play Handel's Messiah with 100 accordions or 100 trumpets, but to really get the full effect, you need an orchestra. In our experience, it has never been wise to say that one size fits all."
