Intel: tweak Nehalem's knobs to hit high notes
"Gross degradation" in a box
Despite a tightened budget, Intel's information chief, Diane Bryant, claims the company saved $19 million by upgrading its older servers this year rather than deferring a hardware refresh until 2010.
Chipzilla's pitch for its Nehalem-based chips during a year when most IT budgets are pancake flat is focused squarely on claims of cost savings and consolidation. As such, Intel has trotted out itself and others as examples for the cause.
Intel bases its $19m savings claim on the utility costs, network maintenance, and data center infrastructure expenses it would have incurred had the company deferred its standard four-year server refresh because of 2009 budget constraints. The company said testing showed it could achieve a 10:1 consolidation ratio by replacing four-year-old servers based on single-core processors with new Xeon 5500 series chips.
The sentiment was echoed today in San Francisco, with Intel hosting a talk with two US firms that say they've seen a quick return on investment from their own Nehalem deployments. One is a privately held trading outfit, Group One Trading; the other is a compute outsourcing firm, R Systems.
While the talk was fairly typical of what you'd expect from such a setup - Intel rah rah rah - one point that did stand out was the need to tinker with Nehalem's settings before improvements can be seen.
Back in June, Facebook's veep of technical operations Jonathan Heiliger lamented that Intel and AMD's latest chip designs weren't bringing his company the advertised performance gains. We suspected the lack of results was more a product of Facebook's heavily customized stack, built on PHP and MySQL, than of the hardware.
Terence Judkins, director of systems at Group One Trading, recalls a similar situation on his company's initial tests with Nehalem.
"When we first got Nehalem - the first server we got off the shelf - we did not get good results out of it. In fact, we saw gross degradation over the previous 5400 series," Judkins said.
After scrutinizing the drop in performance, he said his team realized the servers had shipped with hyperthreading on by default and the power profile set for low power consumption.
"What happens in the market place is, the market will suddenly spike up, and there's no time for a core to get up to maximum performance. So by downclocking those CPUs, we saw a performance hit," Judkins said.
"When we set power performance to maximum and turned off hyperthreading, we saw a little over 200 per cent performance increase in our trading software. It was a world of difference by just a few settings. So maybe Facebook needs to do some of their own analysis on how they're deriving their business metrics."
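For what it's worth, the two knobs Judkins describes map onto standard OS-level controls. The sketch below is illustrative only - it assumes a modern Linux kernel with sysfs SMT control and cpufreq support, not the 2009-era BIOS settings the firms actually tuned:

```shell
# Illustrative sketch, assuming a modern Linux kernel with sysfs
# SMT control and cpufreq support (not the 2009-era boxes above).

# Turn off hyperthreading (SMT) at runtime:
echo off | sudo tee /sys/devices/system/cpu/smt/control

# Pin every core's frequency governor to "performance" so cores run
# at full clock instead of ramping up after a load spike:
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance | sudo tee "$gov"
done
```

Whether either change helps depends on the workload: latency-sensitive, single-threaded code like the trading software described here tends to benefit, while throughput-oriented multithreaded loads often prefer hyperthreading left on.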
Intel's Bryant concurred that Judkins' experience was "spot on," saying Intel's electronic design automation (EDA) software apps are heavily single-threaded.
"There's enough knobs with Nehalem where you can get a performance pop, but you have to fit the knobs to get the benefit," she said.
Intel's earlier report on its savings from moving ahead with its 2009 server refresh can be found here. ®
Look Richard, I don't know what your experience is, but my multiprocessor experience dates back to the 1990s. See, the thing is, good designers aren't common, but bad coders are. If you increase the complexity of things by adding multiprocessor synchronisation to the coder's challenge, benchmark performance may appear to increase when the circumstances are right, but real-world productivity (which is harder to measure) and customer satisfaction (ditto) will generally decrease because the apps and systems will crash more frequently than they already do.
You are seriously suggesting developers re-code to take advantage of new hardware/architecture? I am not a developer, I'm more a sysadmin, but I can't remember a single time in history when developers radically changed their methods/practices to better fit the hardware.
Everyone was going to re-code for Itanium weren't they? Too hard.
How about the incredible power of cheap, fast GPUs - even software that would suit their particular kind of performance doesn't get re-coded. I do know that parallel programming is very hard, and most languages don't make it any easier... (hence the Erlang ref) We have had multi-core in the x86 market for 5-6 years and still Microsoft and Apple are announcing "some special multi-core features coming soon to the next version of their OS". At this rate we should have multi-core/multi-thread applications in widespread use by about 2050...
so when are we going to see the manual?
if Intel ships the chips with default settings that may not provide the best performance, then what the user needs is a manual suggesting beneficial tweaks
so......... where's that manual?