Tukwila Itanium delay situation still as clear as mud

Poulson and Kittson allegedly still on track

Two weeks ago, when Intel once again delayed its quad-core "Tukwila" Itanium processors until early 2010, the company offered little insight into the cause of the delay. It also said nothing about how the continuing delays with Tukwila would affect future Itanium processor rollouts.

Like many of you, I have been trying to work out what the nature of the latest delay really is.

I understand why Intel put off the Tukwila launch back in February. As originally designed, Tukwila was to get a processor socket different from the one used by the current dual-core "Montecito" and "Montvale" Itanium 9100s, and different again from the socket planned for the future "Poulson" and "Kittson" Itaniums. Itanium server suppliers balked at the idea of supporting a new socket for a single generation, so Intel went back to the drawing board to tweak the Tukwila chip so it would fit into the socket to be used by Poulson and Kittson.

Why this wasn't done in late 2007, when the Montvale Itaniums were launched and people were already guessing that Intel would miss its anticipated second-half 2008 delivery of Tukwila, is anyone's guess.

It is worth remembering that, in 2005, when all Intel had were single-core "Madison" Itaniums, the company was saying that Montecito would be out in 2005, Montvale would follow fast on its heels in 2006, and Tukwila would debut in 2007. It also suggested that it would field a range of low-voltage parts as well as standard parts for two-socket and four-socket servers.

It is also worth remembering that making a processor, especially one as complex as Tukwila, is a messy and increasingly expensive bit of work. The stakes are high and mistakes are quite likely deadlier than delays.

With the eight-core "Nehalem EX" Xeon 7500s now being delivered late in 2009 and shipping in systems in early 2010, and with both Tukwila and Nehalem EX processors using Intel's "Boxboro" chipset and its related QuickPath Interconnect, it was reasonable to think that it was the chipset, rather than the chips, that was somehow the cause of the problem.

Not so, says Alan Priestley, enterprise marketing manager for Intel Europe. The Boxboro chipset is not the issue. "It was a change to the processor relating to scaling on certain workloads," Priestley explained, echoing the terse comments that Intel made when it pushed Tukwila out on May 21, and adding that the change was related to "heavily threaded and data intensive workloads."

I have no idea what the feature change is that Intel is working on, but I would not be surprised to see something I will call QP Assist. This week, Advanced Micro Devices launched its six-core "Istanbul" Opteron 2400 and 8400 processors. They include a feature called HT Assist, which is short for HyperTransport Assist.

What this feature does is relatively simple. In SMP machines, processors in the complex can pull data from the L3 caches of their neighbours, which is obviously a lot quicker than going out to main memory or disk for it. In the past, a chip looking for data from its neighbours had to broadcast a request to all of them, asking if any of them had the data. With HT Assist, AMD has carved out 1 MB of the 6 MB L3 cache on each chip to use as a directory of the cache lines held by the other caches in the complex.
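
HT Assist is, in essence, a probe filter, and the mechanics are easy to sketch. What follows is a deliberately simplified model in C of the idea, not AMD's implementation; every name and size in it is illustrative. Instead of broadcasting a probe to every other socket, the requesting chip consults a directory that records which socket, if any, holds a given cache line, and sends at most one targeted probe.

#include <stdio.h>

#define NUM_SOCKETS 4
#define DIR_ENTRIES 1024   /* toy size; HT Assist devotes 1 MB of L3 to this */

/* One directory entry: which socket, if any, currently caches a line. */
struct dir_entry {
    unsigned long tag;     /* cache line address */
    int owner;             /* socket holding the line, or -1 for none */
};

static struct dir_entry directory[DIR_ENTRIES];

/* Old scheme: broadcast a probe to every other socket and wait for replies. */
static int broadcast_lookup(unsigned long line, int requester, int *probes)
{
    *probes = 0;
    for (int s = 0; s < NUM_SOCKETS; s++) {
        if (s == requester)
            continue;
        (*probes)++;       /* one probe message per remote socket */
        /* each remote socket would check its L3 and reply here */
    }
    (void)line;
    return -1;             /* no hit recorded: fall through to memory */
}

/* HT Assist scheme: one directory lookup replaces the broadcast. */
static int directory_lookup(unsigned long line, int *probes)
{
    struct dir_entry *e = &directory[line % DIR_ENTRIES];
    if (e->tag == line && e->owner >= 0) {
        *probes = 1;       /* single targeted probe to the owning socket */
        return e->owner;
    }
    *probes = 0;           /* directory miss: go straight to memory */
    return -1;
}

int main(void)
{
    for (int i = 0; i < DIR_ENTRIES; i++)
        directory[i].owner = -1;           /* all lines uncached to start */

    /* Pretend socket 2 holds cache line 42. */
    directory[42 % DIR_ENTRIES].tag = 42;
    directory[42 % DIR_ENTRIES].owner = 2;

    int probes;
    broadcast_lookup(42, 0, &probes);
    printf("broadcast: %d probes\n", probes);                  /* prints 3 */

    int owner = directory_lookup(42, &probes);
    printf("directory: %d probe(s), owner socket %d\n", probes, owner); /* 1, 2 */
    return 0;
}

The payoff is that probe traffic no longer grows with the number of sockets, which matters most in four-socket and eight-socket machines; the cost is the 1 MB of L3 per chip given over to the directory.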
