Seduced by the 'Megahertz Myth'
When NetBurst was introduced, the market had been taught to salivate when the high-clock-rate bell was rung. When we asked Pawlowski if Hyper Pipelined Technology and its high clock speeds were a marketing decision, he said, "It may have been a marketing decision, but that's what people bought at the time."
And the power required to goose those clock rates wasn't that big a problem at that time. "We were within a decent power envelope," Pawlowski told us. "The power envelope wasn't pushing 130 watts, maybe they were 40, 50, 60 watt parts." That said, he acknowledged that the message Intel wanted to send to the market was "'Hey, we've got the fastest gigahertz part'. That's what people were looking for."
There was also the fact, he admitted, that since the P6 architecture was such an improvement over P5, expectations for generation-to-generation performance improvements had been raised – including his own.
"When you get to the next part, you're kind of looking for 'How do we repeat history and do the same thing over and over again?'," he said. "You get spoiled, and you tend to get a little more aggressive, and you tend to think 'If this is important to me, then it must be important to the market'."
Unfortunately, the market had other ideas. "It wasn't until our customers said to us, 'We're not pushing socket power beyond 130 watts' – in the server space; in client it was certainly lower – 'We're not pushing that socket power any higher' that we had to have a wake-up call," he said.
There was an additional wake-up call, as well. "[AMD's] Opteron came out with a much more power-efficient architecture," he said. "They didn't focus on megahertz, but they got reasonable performance."
There was also the fact that the market was becoming more mobile, and NetBurst parts were unsuited for the cramped insides and relatively low-power capabilities of laptops and notebooks.
These were not the best of times for Intel. "I gotta admit," Pawlowski said, "when I left the labs and came to the product group, it was brutal, because in 2005 when we were really at the dip, at the low spot of where our architecture was competitively, because we were still pushing megahertz."
To make matters worse, he was getting needled about the competition. "I got the question, 'Why didn't you guys integrate the memory controller? How did little AMD just beat you guys to it?'" His response was: "They had nothing to lose. They really didn't."
Fortunately, as Pawlowski tells it, Intel's Israeli design team was working on a P6-based part in an effort to integrate an on-die memory controller with a Rambus memory subsystem. That part never came to fruition, but some of the project's P6 refinements made it into the Pentium M, code-named Banias, and the Core microarchitecture, which helped salvage Intel's mobile future.
What's that in brontosauri?
Succeeded despite technology, not because of it
The story of the success of Intel microprocessors is that commercial, not technical, factors dominated.
The 8086 was very much inferior to the 68K and the 16032; it was probably on a par with the Z8000. I remember Intel trying to sell to me at that time, and they always emphasised price, the agreement with AMD that gave a guarantee of supply and assurance on pricing, and support. They never tried to sell on performance or technical aspects because they were well behind Motorola.
The PC then came out and things changed very rapidly. Intel broke the AMD arrangement, and the price of the first non-agreement part, the 80287, skyrocketed. Technically, Intel parts were still very much second best, but they sold fantastic numbers of parts. The 80286 retained the awkward segmented architecture, extended with protected mode, and performance was still very poor. The 386 finally had a sensible memory architecture, but it still had the nasty special-purpose registers and complicated instruction set, and performance was still very poor compared to other micros. It was probably not until the Pentium that Intel gained parity with other microprocessors.
None of these technical things mattered: one design decision by IBM made Intel the dominant microprocessor company with massive resources, despite rather than because of its technical design.
Ahh, those were the days...
...when bytes were real bytes, motherboards could be fixed with a soldering iron, "intellectual property" meant you'd paid off your Encyclopaedia Britannica, and 'programming' meant hand coding raw MC. Maybe assembler if hung over.
And yes, counting every damn clock cycle.
God, I feel old.... <sniff.>
"After the 8086/8 came the 80286..."
No, it didn't. After the 8086/8 came the 80186/8, which was then followed by the 80286.
I remember coding in 80186 assembly on my dad's Tandy 2000...
A few corrections more...
1. IT was not built on the Intel 4004 or its successors. The information technology industry started in the 1950s with pioneering data processing applications leveraging emerging computing technology. Remember LEO, and the IBM 1401? They were certainly information technology systems. You'd have to use a pretty narrow and tortured definition of IT to claim the 4004 was its first building brick.
2. You use the phrase 'first processor' to describe the 4004. Here comes more pedantry... This is not true either. It was the first commodity, commercially available microprocessor -- which is to say, an IC with all the traditional components of a CPU. Computer processors in the modern sense date back to at least 1949 and EDSAC. The Digital PDP-11, a direct contemporary of the 4004, certainly had a processor, as did all its ancestors. What it didn't have was a single-chip 'microprocessor.'