Memory speed and overclocking Core i7
When you overclock a Core i7 965 or 975 Extreme you have the option of raising the multiplier. Anyone with a Core i7 920, however, is obliged to raise the 133MHz base clock, as the multiplier can't be taken above its standard 20x setting.
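To put figures on that, here's a minimal sketch of the arithmetic, using only the numbers quoted in this article:

```python
# Core clock = base clock (BCLK) x multiplier.
# The Core i7 920's multiplier tops out at 20x, so raising the 133MHz BCLK
# is the only route to a higher core clock.

def core_clock_ghz(bclk_mhz: float, multiplier: int) -> float:
    return bclk_mhz * multiplier / 1000.0

print(core_clock_ghz(133, 20))  # 2.66 -> the stock 2.66GHz
print(core_clock_ghz(175, 20))  # 3.5  -> 3.50GHz once BCLK is raised to 175MHz
```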
PCMark Vantage Results (longer bars are better)
For this set of test runs we switched to an EVGA X58 SLI motherboard, as the overclocking options on the Intel DX58SO are relatively limited. Since we were going to overclock the memory, we also swapped the 1066MHz Qimonda DDR3 for OCZ Reaper 1800MHz DDR3. This happens to be a 3x2GB kit, so we installed 64-bit Vista on the Intel X25-M SSD.
We ran the Core i7 at three different base clock speeds, dropping the multiplier as necessary to keep the processor clock speed constant. For the first run, we tested the Core i7 920 at its standard 2.66GHz (20 x 133MHz) with the memory running at 1066MHz. Next we raised the base clock from 133MHz to 166MHz and dropped the multiplier to give the same 2.66GHz (16 x 166MHz), which raised the memory speed to 1333MHz. The higher memory clock relaxed the SPD timings from 7-7-7-18-1T to 9-9-9-20-1T, yet memory bandwidth increased and latency fell. This is all good, positive stuff, but the effect on system performance was negligible.
For the next step, we raised the base clock to 175MHz and reduced the multiplier to 15x for a processor speed of 2.63GHz and a memory speed of 1400MHz. There are ‘natural’ speeds for memory that are set in the SPD.
For most DDR3 chips these speeds will include 1066MHz, 1333MHz, 1600MHz and 1866MHz, while odd speeds such as 1400MHz tend to throw things out of kilter. On these settings the memory bandwidth increased slightly; however, system performance dropped quite markedly. The object of this particular exercise was to overclock the Core i7 920, so we reset the clock multiplier to the standard 20x figure while keeping the base clock at 175MHz and the memory at 1400MHz, which raised the processor speed to 3.50GHz. That's a healthy overclock from 2.66GHz and the extra performance is impressive, but running the memory at an unnatural speed hurts performance.
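As a rough illustration of why 1400MHz ends up off the SPD table, here's a small sketch; the 8x memory ratio is inferred from the speeds quoted above and the BCLK figures are nominal:

```python
# Memory clock is derived from the same base clock: memory clock = BCLK x memory ratio.
# Common 'natural' DDR3 SPD speeds; clocks that fall between them tend to
# force looser timings.
NATURAL_DDR3_SPEEDS = (1066, 1333, 1600, 1866)

def memory_clock_mhz(bclk_mhz: float, mem_ratio: int) -> float:
    return bclk_mhz * mem_ratio

for bclk in (133.33, 166.67, 175.0):
    mem = memory_clock_mhz(bclk, 8)  # 8x ratio implied by the speeds above
    natural = any(abs(mem - speed) < 10 for speed in NATURAL_DDR3_SPEEDS)
    print(f"BCLK {bclk:.0f}MHz -> memory {mem:.0f}MHz "
          f"({'a natural SPD speed' if natural else 'off the SPD table'})")
# BCLK 133MHz -> memory 1067MHz (a natural SPD speed)
# BCLK 167MHz -> memory 1333MHz (a natural SPD speed)
# BCLK 175MHz -> memory 1400MHz (off the SPD table)
```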
SiSoft Sandra Results
Memory bandwidth in GB/s (longer bars are better); memory latency in ns (shorter bars are better)
Overclocking your Core i7 is a very good idea, but the memory speed is merely a tool that helps to unlock the performance in the processor.
@Dustin... be sure you comprehend before accusing someone of being ignorant
Ian didn't say Windows "couldn't do PAE - period". He said it couldn't do PAE because too many drivers couldn't handle it, having not been written with PAE in mind.
Yes, I find it truly surprising that an 8-DIMM dual-opteron setup was not tested in this article for Core i7 memory configs!!
This is very worthwhile reporting!
It is little known that the only fully performant memory configuration for dual-processor AMD Opterons is exactly eight DIMMs of identical density, four on each socket, at least according to my tests. Other configurations give poorer measured performance, which may or may not be reported by the BIOS.
I have only tested with an in-house tool, on Opteron versions up to Barcelona. Anybody concerned about memory performance should repeat the tests on more modern hardware.
I find it amazing that such basic information is not clearly documented and is also rarely tested and reported...
@jolyon - Yes, it's mainly a driver issue
Here is what Microsoft say on the issue -
And the wikipedia page on Physical Address Extension says -
"However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GB for driver compatibility reasons."
Microsoft themselves confirm that >4GB is a no-go with 32-bit XP and Vista.
So, very limited PAE with Windows on the desktop and deeply scary compatibility issues with PAE on both servers and desktop. We've tried PAE on desktop and server Windows and it quickly became clear that the pain of moving to 64-bit was less than the pain of trying to get PAE stable and effective.
With Linux, we installed PAE kernels, rebooted, and the servers all worked exactly as before but with much more memory. We are now moving to 64-bit (with virtualisation where required) but it has bought us a few years.
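Just to put rough numbers on what PAE does and doesn't change (a back-of-the-envelope sketch, nothing distro-specific):

```python
# Physical address space with and without PAE on a 32-bit kernel.
# Plain 32-bit addressing gives 2^32 bytes; PAE widens physical addresses
# to 36 bits, giving 2^36 bytes.
GIB = 2 ** 30

print(f"32-bit without PAE: {2 ** 32 // GIB} GiB of addressable physical RAM")  # 4 GiB
print(f"32-bit with PAE:    {2 ** 36 // GIB} GiB of addressable physical RAM")  # 64 GiB

# Each individual 32-bit process still gets at most a 4GiB virtual address
# space (typically 2-3GiB of it usable); PAE only raises the amount of
# physical RAM the kernel can manage.
```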
It matches my rule of thumb ...
... which says that more bog-standard memory trumps less but faster memory every time.
Slightly surprised that 3-channel offers no noticeable advantage. Perhaps its time will come with future iterations and speed steps of Intel's new architecture. Anyway, there's a financial advantage: 12GB without needing to buy expensive 4GB DIMMs.
PAE and multicore CPUs mean that 8GB or even 12GB may be sensible with 32-bit Linux: 2GB or 3GB per process, each running flat out on its own core. But if you aren't constrained by some sort of historical relic, 64-bit Linux should be today's default. I doubt I'll be doing many new 32-bit installs in the future.
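A rough worked example of that sizing (the core count and the ~3GB per-process figure are illustrative assumptions, not measurements):

```python
# Why 8-12GB can still be useful on 32-bit Linux with PAE: no single process
# can see more than ~3GB, but several processes on separate cores can
# collectively use physical memory well beyond 4GB.
per_process_gb = 3   # typical usable user-space limit for a 32-bit process (assumption)
busy_processes = 4   # e.g. one busy process per core (assumption)

print(f"{busy_processes} x {per_process_gb}GB = {busy_processes * per_process_gb}GB of RAM put to work")
# 4 x 3GB = 12GB, within reach of a PAE kernel even though each process
# is still limited to its own ~3GB.
```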