Change you can believe in
While the 80286 was essentially an update to the 8086, the "real change" came with the 32-bit 386, Pawlowski said.
"The beauty of it is that it went to large segments," Pawlowski said. "So instead of having the typical 64k segment architecture, they actually could go the full flat address space and go to four gigs."
As he recalls it: "The big problem we were facing with Motorola and the 68K – which was the competition at the time – was they had a flat address space and we were segmented, because that was the architecture we'd chosen to build the 8086-based architecture on."
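The segmented-versus-flat distinction Pawlowski describes can be sketched in a few lines. This is an illustrative model, not Intel's implementation: in 8086 real mode a physical address is formed as segment × 16 + offset, so any one segment window spans only 64K, while a flat 32-bit address reaches the full 4 GB directly.

```python
def real_mode_physical(segment: int, offset: int) -> int:
    """8086 real mode: physical = segment * 16 + offset, truncated to 20 bits (~1 MB)."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF

# The offset is 16 bits, so each segment window is only 64K deep:
print(hex(real_mode_physical(0x1000, 0xFFFF)))  # 0x1ffff

# A flat 32-bit address space, as on the 386, covers 4 GB in one go:
print(2**32)  # 4294967296
```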
Pawlowski worked on the first Multibus board built for the 386. On that board, his team added a 64K direct-mapped cache in front of the 386. "It wasn't integrated inside the part," he told us, "but it was a 16MHz clock, and so we were getting to the point where we were starting to see some of the stress points of the memory architecture – memory access patterns, which were 150 nanoseconds."
But with the 64K direct-mapped cache, "We did some pretty nifty little things," he said. "And it ran 16-bit code really well, so that was the real success."
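The direct-mapped cache Pawlowski mentions works by letting each memory address map to exactly one cache slot. A toy sketch, assuming a 64K cache with a 16-byte line size (the line size is an illustrative assumption; the article doesn't specify it):

```python
CACHE_SIZE = 64 * 1024
LINE_SIZE = 16
NUM_LINES = CACHE_SIZE // LINE_SIZE  # 4096 lines

def cache_slot(addr: int) -> tuple:
    """Direct mapping: the index picks the one possible slot, the tag disambiguates."""
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // CACHE_SIZE
    return (index, tag)

# Two addresses exactly 64K apart land in the same slot and evict each other --
# the classic weakness of direct mapping, traded for speed and simplicity.
print(cache_slot(0x0000))   # (0, 0)
print(cache_slot(0x10000))  # (0, 1)
```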
The 386 was the chip around which Intel started building motherboards. When the 486 came along, it integrated that motherboard cache into the chip itself, and it also integrated the math coprocessor in the 486DX version. The 386 had still relied on the separate 387 chip – and, yes, there was a 386DX, but that designation had nothing to do with an on-chip FPU.
After the 386 and the 486 came not the 586, but instead a chip that was rechristened by the Intel marketing department as the Pentium, and was built using a new microarchitecture known internally as P5.
"That became the first superscalar machine," Pawlowski told us, superscalar being the term of art that describes a processor that has more than one concurrent execution sequence, or pipeline.
"That's where we actually had multiple execution units," he said. "Not necessarily the same, but the scheduler was at least smart enough to look inside the machine, if it had to do an add, had to do a multiply, potentially some type of fetch, or some other type of instruction, it could actually look for places where it could get more locality out of the instruction, out of the machine itself."
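The scheduling idea Pawlowski describes can be illustrated with a deliberately simplified model (this is not Intel's actual P5 pairing logic, which had many more rules): two instructions can issue in the same cycle only if the second one doesn't read a result the first one is still producing.

```python
def can_pair(first, second):
    """Each instruction is modeled as (dest_reg, src_regs).

    A hypothetical dual-issue check: pairing fails if the second
    instruction reads the register the first one writes.
    """
    dest, _ = first
    _, srcs = second
    return dest not in srcs

add = ("eax", ("ebx", "ecx"))   # eax = ebx + ecx
mul = ("edx", ("esi", "edi"))   # independent of the add -> can issue together
use = ("ebx", ("eax", "edx"))   # reads eax -> must wait for the add

print(can_pair(add, mul))  # True
print(can_pair(add, use))  # False
```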
What's that in brontosauri?
Succeeded despite technology, not because of it
The story of the success of Intel microprocessors is that commercial, not technical, factors dominated.
The 8086 was very much inferior to the 68K and the 16032; it was probably on a par with the Z8000. I remember Intel trying to sell to me at that time, and they always emphasised price, the agreement with AMD that gave a guarantee of supply and assurance on pricing, and support. They never tried to sell on performance or technical aspects, because it was well behind Motorola.
The PC then came out and things changed very rapidly. Intel broke the AMD arrangement, and the price of the first non-agreement part, the 80287, skyrocketed. Technically, Intel parts were still very much second best, but they sold fantastic numbers of parts. The 80286 retained the awkward segmented architecture, extended with protected mode, and performance was still very poor. The 386 finally had a sensible memory architecture, but it still had the nasty special-purpose registers and complicated instruction set, and performance was still very poor compared to other micros. It was probably not until the Pentium that Intel gained parity with other microprocessors.
None of these technical things mattered: one design decision by IBM made Intel the dominant microprocessor company, with massive resources, despite, not because of, its technical design.
Ahh, those were the days...
...when bytes were real bytes, motherboards could be fixed with a soldering iron, "intellectual property" meant you'd paid off your Encyclopaedia Britannica, and 'programming' meant hand-coding raw MC. Maybe assembler if hung over.
And yes, counting every damn clock cycle.
God, I feel old.... <sniff.>
"After the 8086/8 came the 80286..."
No, it didn't. After the 8086/8 came the 80186/8, which was then followed by the 80286.
I remember coding in 80186 assembly on my dad's Tandy 2000...
A few corrections more...
1. IT was not built on the Intel 4004 or its successors. The information technology industry started in the 1950s with pioneering data processing applications leveraging emerging computing technology. Remember LEO, and the IBM 1401? They were certainly information technology systems. You'd have to use a pretty narrow and tortured definition of IT to claim the 4004 was its first building brick.
2. You use the phrase 'first processor' to describe the 4004. Here comes more pedantry... This is not true either. It was the first commodity, commercially available microprocessor -- which is to say, an IC with all the traditional components of a CPU. Computer processors in the modern sense date back to at least 1949 and EDSAC. The Digital PDP-11, a direct contemporary of the 4004, certainly had a processor, as did all its ancestors. What it didn't have was a single-chip 'microprocessor.'