The Register® — Biting the hand that feeds IT


Chin up, Intel heads - you can still lop legs off ARM's data-centre dash

First step, stop using words like 'reimagine'


Analysis A new CEO – Brian Krzanich – and president – Renee James – have taken the helm at chip giant Intel, although they have yet to articulate a grand vision for Chipzilla for the next decade.

But it looks like we may be getting some sense of Intel's long-term plans in the data centre at an event that the company is hosting on 22 July in San Francisco.

Intel has to get through its second-quarter financial results and its current quiet period ahead of those results before it can wax poetic about its server, storage, and networking master plan for partners to peddle to the CIOs of the world.

And once that is done, Diane Bryant, a senior vice president at Intel and general manager of its Datacenter and Connected Systems Group, will host an event with press and analysts where she will talk about general trends in the data centre.

Other Intel top brass will talk about cloud computing, big data, and high-performance computing, which are seeing a kind of architectural convergence, a trend that will probably be analysed in greater depth. The invite from Chipzilla also says there will be special sessions on low-power components in the data centre, HPC going mainstream, and how you transform enterprise IT with private clouds and big data.

Intel is going to tell us about the future of the data centre at the end of the month


These have, of course, been popular themes with Intel over the past several years, so it will be intriguing to see what new information and insight Chipzilla can bring to bear on the topic. It would have been interesting to see Krzanich talk about Intel's chip manufacturing prowess and plans, and James talk about Intel's software business, at this data centre event, and that could yet happen because the invitation's agenda is still preliminary.

With so many ARM server chip-makers champing at the bit, someone at Intel will have to address the competitive pressures that will start coming up from the bottom late this year and early next: a slew of 64-bit ARM processors, many of them with integrated networking or switch fabrics on their dies, will start appearing on the market with the ability to run Linux workloads.

Yes, Windows in its various forms is a major power in the operating system world, and it doesn't look like Windows Server 2012 is going to get ARM support any time soon. But the hyperscale data centres that now represent a large portion of Intel's business can switch processor architectures relatively easily if they so choose, because they largely run Linux and, for the most part, homegrown code.

This is precisely the situation of the late 1990s, when supercomputers largely ran flavours of Unix on constellation, federated SMP, or vector processing schemes. Then Beowulf clustering on cheap x86 iron came along and basically ate the market, giving us the massively parallel supercomputer racket we know (and maybe love) today.

We may be in for another radical phase change, one where hyperscale data centre operators go with the more open architecture, one that allows companies to license and alter chips much as they can download open source code and alter it to suit their needs. (And if, like Google, Amazon, and Facebook, they offer a service instead of a product, the licensing of open source software lets them keep their modifications to themselves.)

Open up the x86 specification so companies can license x86 chips

El Reg has said it before, and we will say it again here. We don't think Intel can – or should – open up its wafer bakers to any and all comers who want to use its current 22nm knowhow and its future 14nm and 10nm chip-etching technologies. The way to keep the fabs warm and the profits coming in at Chipzilla is to open up the x86 architecture specification and allow companies to license x86 processor cores and the other elements that wrap around them; this would let paying partners make the chips they want to put inside machines, either for their own use or for resale.

This takes a page out of the playbook of ARM Holdings, and seeing how stealing the Opteron playbook from Advanced Micro Devices so successfully vanquished that long-time competitor (albeit with a much-improved implementation in the Xeons), this could be a good strategy for Intel. In fact, it may be the only one, and it would involve a lot more than the couple of extra instructions, custom clock speeds, or special packaging that Intel has done for some large customers with its Xeon line. The big difference is that Intel could open up the x86 design while tying those customisations to its own chip tools and factories, keeping control of the process.

ARM Holdings cannot do that, nor does it try to: the Brit biz designs processor cores for customers, such as Samsung and Broadcom, to build as they wish.

This open x86 strategy would go against everything Intel has done for the past two decades as it has taken control of the processors and chipsets used in servers and storage, and is now doing the same, to a lesser degree, with networking. Intel wants as few modifications as possible to its chips, and it controls the designs with an iron hand.

The other option, which Intel has no doubt contemplated, is to do something along the lines of what AMD has done: put some of its energy behind future ARM processors of its own, and use its lead in fabrication technology (somewhere between one and two process nodes ahead of Taiwan Semiconductor Manufacturing Co) to bury the rest of the ARM competition.

It seems far more likely that Intel will take its Atom processors and gear them up to compete more effectively against ARM chips, as the future "Silvermont" architecture Atoms certainly will. In fact, it would not be surprising to see the core Atom computing unit become the centre of a new class of Xeon-ish processors. (Perhaps they could be called Argent?)

In effect, Atom becomes the new Xeon, and Xeon becomes the new Itanium. This kind of transition has happened before, you will remember, when the Core architecture, which was much more energy efficient, replaced the Pentium engines at the heart of the Xeons to become the new server processors – and basically saved Intel from being severely wounded by AMD.

Intel could take another route to fight ARM by adding an emulation layer (much as the Chinese government's Godson processors are MIPS chips with an x86 emulation mode built atop the QEMU emulator) so that future x86 chips could run ARM code. This kind of thing always leads to sorrow, of course, but it keeps coming around on the guitar just the same. (Remember the x86 emulation mode on the early Itaniums?)
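For a feel of what such an emulation layer does in software today, here is a minimal sketch using QEMU's user-mode emulator, which translates ARM instructions on the fly so an ARM Linux binary runs on an x86 host. The file names are illustrative, and it assumes the arm-linux-gnueabi-gcc cross-compiler and qemu-arm are installed.

```shell
# Write a trivial C program, cross-compile it for 32-bit ARM,
# then run the ARM binary on an x86 host via QEMU user-mode emulation.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from ARM code"); return 0; }
EOF

if command -v arm-linux-gnueabi-gcc >/dev/null 2>&1 && \
   command -v qemu-arm >/dev/null 2>&1; then
    # -static avoids needing an ARM sysroot for shared libraries
    arm-linux-gnueabi-gcc -static -o hello-arm hello.c
    qemu-arm ./hello-arm    # QEMU translates the ARM instructions to x86
else
    echo "cross-compiler or qemu-arm not installed; skipping"
fi
```

A hardware assist, as on Godson, aims to do that translation step in silicon rather than in software, trading QEMU's flexibility for speed.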

Whatever Intel talks about at this month's event, what it should make clear is how it will manage the overlap between Atoms and Xeons as low-end and high-end ARM chips enter the market to try to steal away some business.

And, if Itanium is truly dead, as it seems to be with the promised "Kittson" Itanium being nothing more than an Itanium 9500+ rev, then Intel should just come out and say it and get it over with. If there is a "reimagined" data centre, it is hard to imagine the Itanium chip is part of it, not the way Hewlett-Packard's HP-UX business is collapsing.

It is time to move on, and as much as the El Reg systems desk hates to see another server platform get mothballed, everyone already suspects the truth anyway - and no lawsuit between HP and Oracle is going to change that. Plans change, and they have to, because markets do. Everyone understands that. ®

