Exploding core counts: Heading for the buffers

Ferrari engine, meet go-kart

Gartner's analysis does, of course, leave out one important issue. The main bottleneck on system performance is arguably - and man, do people argue about this - the limits on main memory capacity and bandwidth inside systems. In many cases, customers upgrade server platforms not because they need more CPU cores, but because they want both more memory and more bandwidth into and out of the CPUs.

Moreover, for some workloads - this is particularly true of online transaction processing - the amount of work a machine can do is governed more by the number of disk drive arms and the bandwidth of the disk subsystems than by other factors, such as the number of processor cores. In benchmark tests, server makers can get their processors running at 95 per cent or higher utilization, but only a very well run big iron Unix box can consistently hold even a 60 to 70 per cent utilization rate on OLTP workloads.
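
To put some rough numbers on that, here is a back-of-envelope sketch in Python. Every constant in it - IOPS per drive arm, arm count, I/Os per transaction, per-core throughput - is a made-up but plausible assumption, not a measurement of any particular box:

```python
# Back-of-envelope: why disk arms, not cores, can cap OLTP throughput.
# All figures below are illustrative assumptions.

RANDOM_IOPS_PER_ARM = 180   # rough ceiling for one 15K RPM drive arm
DISK_ARMS = 48              # drives in a hypothetical array
IOS_PER_TRANSACTION = 8     # random I/Os issued per OLTP transaction

disk_ceiling_tps = (RANDOM_IOPS_PER_ARM * DISK_ARMS) / IOS_PER_TRANSACTION

CORES = 16                  # cores in the server
TPS_PER_CORE = 2_000        # what a core might process if it never stalled

cpu_ceiling_tps = CORES * TPS_PER_CORE

print(f"disk-bound ceiling: {disk_ceiling_tps:8,.0f} tps")
print(f"CPU-bound ceiling:  {cpu_ceiling_tps:8,.0f} tps")
print(f"CPU busy at the disk ceiling: {disk_ceiling_tps / cpu_ceiling_tps:.1%}")
```

With these numbers the disk subsystem tops out around 1,100 transactions per second while the cores could theoretically handle 32,000 - which is the shape of the problem: the processors sit mostly idle unless you add spindles.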

I/O and memory bandwidth constraints leave the processors tapping their feet, waiting for data. IBM's mainframe operating systems and middleware, as well as end-user applications, have been tuned and tweaked over decades to wring every ounce of performance out of the box and run at 90 per cent or higher utilization rates in production environments - but if you had paid five or ten times what a RISC or x64 server costs, you would spend a lot of dough on tuning, too. And having done all that work, you would sure as hell think twice before moving those applications off the mainframe. Which is why mainframes persist.
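
That foot-tapping is easy to demonstrate. The sketch below - a rough illustration, not a proper microbenchmark - walks a chain of array slots twice: once in cache-friendly sequential order, and once as a single random cycle that defeats caches and prefetchers. The array size is an assumption chosen to overflow a typical L3 cache, and Python's interpreter overhead blurs the absolute timings, but the gap between the two walks is the memory stall described above:

```python
import random
import time
from array import array

N = 1 << 22  # ~4M slots of 8 bytes each: ~32 MB, well past a typical L3 cache

# Cache-friendly chain: slot i points to slot i + 1.
seq = array('q', list(range(1, N)) + [0])

# Cache-hostile chain: one random cycle through all N slots, built from a
# shuffled visit order so the walk cannot short-circuit into a small loop.
order = list(range(N))
random.shuffle(order)
rnd = array('q', [0]) * N
for a, b in zip(order, order[1:] + order[:1]):
    rnd[a] = b

def chase(nxt, steps):
    """Follow the chain for `steps` hops and return elapsed seconds."""
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return time.perf_counter() - t0

steps = 5_000_000
print(f"sequential chase: {chase(seq, steps):.3f} s")
print(f"random chase:     {chase(rnd, steps):.3f} s")
```

The random walk typically comes out measurably slower even under Python's heavy interpreter overhead; in compiled code the gap is starker still, because every hop through the shuffled chain is a round trip to main memory.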

The biggest issue, it seems, is that memory speeds have not come close to keeping pace with processor speeds. The thermal wall that processors have hit has mitigated this to a certain extent, giving memory a chance to catch up, perhaps. But the fastest DDR3 memory on the market still tops out at an effective 1.3 GHz, and that is still less than half the clock speed of, say, a Nehalem Xeon processor that will hit the streets later this quarter. And even if you could get the speeds of CPU cores and memory in line, that still would not solve the capacity issue.
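
The arithmetic behind that gap is worth spelling out, because bandwidth matters as much as clock speed. The DDR3-1333 transfer rate and 64-bit channel width below are standard figures, and Nehalem-EP Xeons drive three memory channels per socket; the per-core demand is a deliberately generous assumption:

```python
# Peak bandwidth of triple-channel DDR3-1333 versus a hypothetical demand.

mt_per_sec = 1.333e9       # DDR3-1333: 1333 mega-transfers/sec (667 MHz clock, DDR)
bytes_per_transfer = 8     # 64-bit channel
channels = 3               # triple-channel memory per Nehalem-EP socket

peak_bw = mt_per_sec * bytes_per_transfer * channels
print(f"peak memory bandwidth: {peak_bw / 1e9:.1f} GB/s")  # ~32 GB/s

# Assume each of four cores wants one 8-byte word per clock cycle.
cores = 4
core_hz = 2.93e9           # e.g. a top-bin Nehalem Xeon
demand = cores * core_hz * 8

print(f"hypothetical demand:   {demand / 1e9:.1f} GB/s")   # ~94 GB/s
print(f"bandwidth covers {peak_bw / demand:.0%} of it")    # roughly a third
```

Caches exist precisely to paper over that shortfall - and the pointer-chasing sketch above shows what happens when they can't.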

Memory DIMMs can only pack so much capacity at a given price, and motherboard makers can only route so many memory traces across a board at a price. The memory issue is not going away. But solving it will perhaps be easier than coping with software stacks that don't understand how to make use of so many threads - a point a quick Amdahl's law calculation makes plain.
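
Amdahl's law puts a hard ceiling on what piling on threads can buy. A minimal sketch, with the 5 per cent serial fraction as an illustrative assumption:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the
# fraction of the work that must run serially.

def speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s = 0.05  # assume 5 per cent of the workload cannot be parallelized
for n in (2, 4, 8, 16, 64, 256):
    print(f"{n:4d} cores -> {speedup(s, n):5.1f}x (ceiling: {1 / s:.0f}x)")
```

Past a handful of cores, that serial 5 per cent dominates: 256 cores buy less than a 19x speedup against a hard ceiling of 20x. That is the wall the software has to climb. ®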
