Intel Core 2 Duo performance preview
A taste of things to come, but when?
Preview Last week, Intel gathered the European press in Munich for a sneak peek at its Core 2 Duo chip - aka 'Conroe'. Reg Hardware was there. We were limited to running a set of pre-installed benchmarks on the systems provided, so this isn't a conclusive performance review of what Core 2 Duo is capable of...
Nonetheless, the numbers are very impressive. But before we get to them, let's look at what's on offer and why you should consider moving to the Core 2 Duo platform. Intel is touting Core 2 Duo as a power-efficient processor that also outperforms the Pentium 4 and Pentium D chips.
Intel was keen to highlight some of the new features of the Core 2 Duo processors. First of all, there will be versions with either 2MB or 4MB of shared cache - or Intel Advanced Smart Cache, if you're into marketing wording. The two cores have to "fight" between themselves for cache allocation, and Intel claimed this technique delivered the best performance in its simulations.
One of the most important features is what Intel refers to as Wide Dynamic Execution, which allows more data to be processed per clock cycle than on previous generations of products. The Core 2 Duo processors can process four full instructions per clock cycle, compared to the NetBurst architecture's three. Part of this is a technique Intel refers to as "macrofusion", which enables common pairs of instructions - a compare followed by a conditional jump, for example - to be combined into a single internal instruction. The result: certain types of code run in less time than they did on previous generations of Intel processors.
Smart Memory Access has been designed to lower memory latency and improve data access. The key to this is "memory disambiguation", which allows the execution cores to speculatively load data before the processor knows whether an earlier, still-pending store writes to the same address. This is based on a set of prediction algorithms and doesn't work under all circumstances, but in most cases it means the processor spends less time idling and more time processing data. A better memory pre-fetch system, with twin pre-fetchers in both the L1 and L2 caches, should also improve the rate at which the correct data is made available for the cores to process.