Exploding core counts: Heading for the buffers

Ferrari engine, meet go-kart

Gartner's analysis does, of course, leave out one important issue. The main bottleneck on system performance is arguably - and man, do people argue about this - the limits on main memory capacity and bandwidth inside systems. In many cases, customers upgrade server platforms not because they need more CPU cores, but because they want both more memory and more bandwidth into and out of the CPUs.

Moreover, for some workloads - this is particularly true of online transaction processing - the amount of work a machine can do is affected more by the number of disk drive arms and the bandwidth in the disk subsystems than by other factors, such as the number of processor cores. In benchmark tests, server makers can get their server processors running at 95 per cent or higher utilization, but it takes a very well run big iron Unix box to stay consistently at even a 60 to 70 per cent utilization rate on OLTP workloads.
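A back-of-the-envelope sketch makes the point. Every figure below - drive count, IOPS per arm, I/Os per transaction - is an illustrative assumption, not a number from Gartner or anyone else:

/* Back-of-envelope: OLTP throughput gated by disk arms, not cores.
 * Every figure here is an illustrative assumption. */
#include <stdio.h>

int main(void) {
    double arms         = 96.0;   /* assumed disk arms in the subsystem      */
    double iops_per_arm = 150.0;  /* rough random IOPS for a 15K rpm spindle */
    double ios_per_txn  = 8.0;    /* assumed physical I/Os per transaction   */

    double tps_ceiling = arms * iops_per_arm / ios_per_txn;
    printf("Disk-limited ceiling: %.0f transactions/sec\n", tps_ceiling);
    /* Doubling the core count changes none of these terms, which is
     * why CPU utilization stalls well short of 100 per cent. */
    return 0;
}

Swap in your own spindle counts; the shape of the answer does not change.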

I/O and memory bandwidth issues keep the processors tapping their feet, waiting for data. IBM's mainframe operating systems and middleware, as well as end user applications, have been tuned and tweaked over decades to wring every ounce of performance out of the box and run at 90 per cent or higher utilization rates in production environments - but if you paid five or ten times what a RISC or x64 server costs, you would spend a lot of dough on tuning, too. And having done all that work, you would sure as hell think twice before moving those applications off the mainframe. Which is why mainframes persist.

The biggest issue, it seems, is that memory speeds have not even come close to keeping pace with processor speeds. The thermal wall that processors have hit mitigates this to a certain extent, giving memory speeds a chance to catch up, perhaps. But the fastest DDR3 memory on the market still tops out at 1.3 GHz, and that is still less than half the speed of, say, a Nehalem Xeon processor that will hit the streets later this quarter. And even if you could get the speeds of CPU cores and memory in line, that doesn't solve the capacity issue.
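If you want to see the mismatch on your own box, a minimal STREAM-style triad does the job. This is a sketch, not the official STREAM benchmark; the array size is an assumption (just big enough to blow out the caches), and POSIX clock_gettime is assumed to be available:

/* Minimal STREAM-style triad: measures the sustained memory bandwidth
 * that actually feeds the cores. Compile with: cc -O2 triad.c -lrt */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32 * 1024 * 1024)   /* 32M doubles per array, 256MB each: assumed size */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];              /* triad kernel */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = 3.0 * N * sizeof(double);   /* read b, read c, write a */
    printf("Sustained bandwidth: %.2f GB/s\n", bytes / secs / 1e9);
    free(a); free(b); free(c);
    return 0;
}

Compare the number that prints against what a modern multicore chip could chew through per second, and you can see exactly why the cores sit tapping their feet.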

Memory DIMMs can only pack so much capacity at a given price, and motherboard makers can only put so many wires on the board for memory at a price. The memory issue is not going away. But solving it will perhaps be easier than coping with software stacks that don't understand how to make use of so many threads.
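Amdahl's law puts a hard number on that thread problem. A minimal sketch, assuming - purely for illustration - a workload that is 90 per cent parallelisable:

/* Amdahl's law: why piling on cores yields little if the software
 * stack isn't parallel. The 90 per cent parallel fraction is an
 * assumption for illustration, not a measured figure. */
#include <stdio.h>

int main(void) {
    double p = 0.90;   /* assumed parallelisable fraction of the workload */
    int cores[] = {1, 2, 4, 8, 16, 32, 64};

    for (size_t i = 0; i < sizeof cores / sizeof cores[0]; i++) {
        int n = cores[i];
        double speedup = 1.0 / ((1.0 - p) + p / n);   /* Amdahl's law */
        printf("%3d cores -> %5.2fx speedup\n", n, speedup);
    }
    /* At p = 0.90 the ceiling is 10x, no matter how many cores ship. */
    return 0;
}

At 90 per cent parallel, 64 cores buy you barely 8.8x, and no core count on Earth gets you past 10x. That is the wall those software stacks are heading for. ®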
