Tukwila Itanium delay situation still as clear as mud

Poulson and Kittson allegedly still on track

So now, when a processor needs data, it can go to its own cache and look up where that data resides, meaning it doesn't have to broadcast requests for L3 data to the other chips in the system. This cuts down on overhead significantly for memory-intensive workloads, and the beauty of the way AMD has implemented it is that it is done in the system BIOS, not in the chip.
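To make the idea concrete, here's a toy sketch in Python of the difference between broadcast snooping and a directory-style probe filter. To be clear, everything in it - the class, the function names, the four-socket setup - is invented for illustration; the real mechanism lives in silicon and BIOS settings, not in code.

# Toy model of snoop traffic with and without a probe filter.
# All names here are made up for illustration; real coherence
# protocols are implemented in hardware, not Python.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.l3 = {}                  # cache line address -> data

    def probe(self, addr):
        """Answer a snoop: return the line if we hold it, else None."""
        return self.l3.get(addr)

def broadcast_lookup(nodes, requester, addr):
    """Without a filter: snoop every other node's L3."""
    probes = 0
    for node in nodes:
        if node is requester:
            continue
        probes += 1
        data = node.probe(addr)
        if data is not None:
            return data, probes
    return None, probes               # missed everywhere: go to memory

def filtered_lookup(nodes, directory, requester, addr):
    """With a probe filter: one directory lookup, at most one probe."""
    owner_id = directory.get(addr)    # directory: addr -> owning node
    if owner_id is None:
        return None, 0                # no cached copy: straight to memory
    return nodes[owner_id].probe(addr), 1

# Four-socket box; node 3 holds the line at address 0x40.
nodes = [Node(i) for i in range(4)]
nodes[3].l3[0x40] = b"payload"
directory = {0x40: 3}

print(broadcast_lookup(nodes, nodes[0], 0x40))            # 3 probes
print(filtered_lookup(nodes, directory, nodes[0], 0x40))  # 1 probe

The thing to watch is the probe count: without the filter, a lookup pesters every other socket in the box; with it, the requester asks at most one.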

No one has said that this is what Intel is doing, mind you. Don't get the wrong idea. But the company has copied damned near every other good idea AMD has had in its Opteron chips, so why not this one, too?

In the past, delays of one generation of Itanium processors have had dramatic effects on the schedules for future Itaniums, but Priestley says that this year's two Tukwila delays have not had any impact on the delivery of Poulson and Kittson Itaniums. I find this hard to believe, but if it is true, it means Tukwila will have a relatively short life, at least compared to Madison, Montecito, and Montvale Itaniums.

Of course, Intel has not been specific about when Poulson and Kittson would be delivered. At the Intel Developer Forum last August, Pat Gelsinger, general manager of Intel's Digital Enterprise Group, said merely that Poulson chips would be based on a new microarchitecture and would use a 32 nanometer process. (Tukwila is implemented in a 65 nanometer process, and for the Itanium line Intel is skipping the current 45 nanometer process used to make Nehalem Xeons and Core i7s.)

Last August, there was some idle chatter at IDF saying that - given the then pretty substantial delays with Tukwila - Poulson might come to market in late 2009, possibly with four cores, or possibly with six or even eight. At the time, Tukwila was expected to ship by late 2008 and to be in machines in early 2009. Expecting Poulson in 2009 was silly, particularly since the 32 nanometer chip making process wouldn't be ramped until 2010 and Intel leads each new process with its Core and Xeon chips, not with Itaniums. It was reasonable to expect Poulson in late 2010, perhaps, but not late 2009. And Kittson, given the two-year cadence of Itanium chips (when the roadmap is not derailed), would reasonably have been expected in late 2012 or so.

Intel, of course, is not silly enough to give dates for future Itaniums anymore, so it is hard to say whether Poulson and Kittson are on track. Only the OEMs know for sure.

We'll see what happens. ®
