Intel Itanium to make moves on mainstream

High-end 64-bit databases given MMX, SSE support -- surely some mistake...


Intel today relaunched its IA-64 architecture, moving the processor once codenamed Merced but now officially branded Itanium much, much closer to the mainstream desktop PC market than Chipzilla has previously suggested. At last year's Microprocessor Forum, Intel spokesfolks pooh-poohed claims that Merced was anything more than a high-end architecture aimed at big league, 64-bit applications, specifically databases and operating systems. Not any more, it isn't.

This time round, Intel's principal engineer and IA-64 microarchitecture manager, Harsh Sharangpani, clearly positioned Itanium very much in the mainstream "commercial" server and workstation markets, stressing the benefits of the chip's Epic (Explicitly Parallel Instruction Computing) architecture for everything from digital content creation to encryption and security roles for e-commerce and other "Internet applications" (Web browsing?). In essence, then, Itanium will push right down into the space currently occupied by Intel's Xeon line, possibly even to the extent of supplanting it.

And for anyone who thinks that's as far as it goes, Itanium will offer full MMX and Streaming SIMD Extensions (SSE) support in addition to full IA-32 compatibility. That was always part of the Merced gameplan, but Intel does appear to have upgraded its x86 support to something more than mere instruction set emulation. The Pentium family has long supported x86 only as a kind of abstraction, decoding x86 instructions at runtime into something more akin to Risc operations. Itanium will take the same approach, but unlike the original Merced spec., IA-32 code will be run by the IA-64 execution core itself. That, Sharangpani promised, will ensure "full Itanium performance on IA-32 system functions".

Much of that performance -- at least at the 64-bit level -- will come not from the chip per se, but from highly complex compilers turning source code into object code structured to play to Itanium's architectural strengths.
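Epic's central trick -- predication, which lets the compiler turn short branches into instructions that all execute, with only the one whose predicate holds committing its result -- can be sketched by analogy in plain C. This is an illustration of the idea, not Intel's implementation, and the function names are invented for the example:

```c
/* Sketch of compiler "if-conversion": the branchy and branch-free
   forms compute the same thing. On Itanium the compiler would emit
   the second shape using predicated IA-64 instructions; here we just
   mimic it with arithmetic selection. */

/* Conventional form: the hardware must predict this branch. */
int select_branchy(int c, int a, int b)
{
    if (c)
        return a;
    return b;
}

/* Branch-free form a predicating compiler might produce. */
int select_predicated(int c, int a, int b)
{
    int p = (c != 0);          /* the "predicate": 0 or 1 */
    return p * a + (1 - p) * b; /* only one term contributes */
}
```

Both functions return `a` when the condition is non-zero and `b` otherwise; the point is that the second never asks the branch predictor for help.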
Enhanced compiling was always part of the Merced strategy, but it's clear from Sharangpani's comments that to get the full benefit of the new CPU, software developers will need very highly tailored compilers indeed to keep the chip's streamlined pipeline fed with instructions and data. It certainly sounds as if Intel has stripped away much of the logic a modern processor uses to run as many operations as it can in parallel and to follow the right branches in program code along the way. Instead, compilers will figure most of this out before the code ever gets near a processor that can run it.

Not that the chip doesn't perform some dynamic optimisation at runtime -- there's some powerful branch prediction work going on within the Itanium core and plenty of code pre-fetching -- but the focus is clearly on fine-tuning software before it's run: the compiler is expected to build hints into the code telling the processor which chunks of instructions to pre-fetch.

Intel has followed the now familiar path of placing the L2 cache on the die and supporting off-chip L3 cache (up to 4MB) on the processor daughtercard. How fast the L3 cache -- or, for that matter, the Itanium itself -- will operate, Sharangpani refused to say. In fact, for a Forum presentation rather longer than is usually allowed, Sharangpani gave away very little solid data, sticking solely to the more technical elements of the chip's operation.

Itanium will go into production "mid-2000", but with testing and tuning taking place throughout the first half of next year, whether that will be volume production remains to be seen. Sharangpani made much of what he called "strong progress on initial silicon", but making progress (no matter how good) isn't the same thing as finishing the job. ®
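The sort of hinting described above -- the compiler (or programmer) telling the processor which way a branch usually goes and which data to fetch ahead of time -- can be sketched in C using GCC's `__builtin_expect` and `__builtin_prefetch` intrinsics. These are an analogy for illustration only: Itanium's hints are encoded in the IA-64 instructions themselves, not supplied from source code, and the function below is invented for the example:

```c
#include <stddef.h>

/* Sum only the positive entries of an array, with two kinds of hint:
   - __builtin_prefetch: ask for data a few iterations ahead, so it is
     (hopefully) in cache by the time the loop reaches it;
   - __builtin_expect: tell the compiler the branch is usually taken,
     so it can lay out the hot path accordingly. */
long sum_positive(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 8 < n)
            __builtin_prefetch(&a[i + 8]);    /* data pre-fetch hint */
        if (__builtin_expect(a[i] > 0, 1))    /* branch "usually true" */
            total += a[i];
    }
    return total;
}
```

Neither intrinsic changes what the function computes; both merely shape the code the compiler emits, which is precisely the division of labour the Epic design bets on.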
