Get ready to buy chips by the kilo

So begins the era of 'Unbound Compute'

Industry Comment We’ve spent 20 years assuming that we add memory and disk in large numbers and CPUs in small numbers. What if all three scaled in the same way? Now, that would be a game changing innovation, one that would spawn a new age for business applications and raise the bar on IT productivity and business efficiency.

Remember way back when PCs had a grand total of 64 kilo bytes of memory? These days, we count the memory in small laptops in hundreds of mega bytes and the memory in big servers in fractions of tera bytes. The same thing happened to disk space: mega bytes to peta bytes. What’s next? Exa, zetta, and yotta.

But when it comes to CPUs, we still mostly dabble in single digits. An 8-way server feels like a pretty large system. The 32-way, 64-way, and 200-way systems feel just huge.  Even when we scale out, anything beyond a couple of hundred CPUs begins to challenge our ability to manage and operate the systems. It’s no accident that they call these systems a “complex.”

A major shift is coming. Over the next few years, your ordinary applications will be able to tap into systems with, say, 7,000 CPUs, 50 tera bytes of memory, and 20 peta bytes of storage. In 2005, Azul Systems will ship compute pools with as many as 1,200 CPUs in a single standard rack (1.2 kilo cores! I like the sound of that!).

What would change about application design if you could do this? Well, think back to what applications were like when you had just 128K of memory in your PC and a 512KB hard drive. The difference between the capabilities and flexibility of applications in those days and now is the level of improvement that we are talking about.

Photo of Shahin Khan, CMO at Azul Systems

If you could count CPUs the same way that you count memory, some problems would simply become uninteresting and others would transform in a qualitative way. And completely new possibilities would emerge.

Deployment and administration of applications would also change dramatically. Do you ever worry about how much storage an individual user might need?  Probably not. You just install a NAS device with a tera byte of storage and let everyone share it. This approach works because no single user is likely to fill it up quickly, and you can plan storage capacity across all your users rather than each individual one. Do you ever worry about the utilization level of an individual byte of memory? I hope not. You have so many bytes that you measure utilization at the aggregate level.

If your applications had hundreds of CPUs available to them in a miniaturized “big-iron” system, you could adopt the same strategy for compute. No need to plan capacity for each individual application: let all of your users share a huge compute pool and plan capacity across many applications. In the process, you also fundamentally change the economics of computing. Well, that’s exactly what Azul Systems is pioneering.
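To see why pooling changes the capacity math, here is a minimal sketch in plain Java. The demand figures are invented for illustration and the class name is hypothetical, nothing here is Azul-specific: the point is simply that provisioning each application for its own peak costs the sum of the peaks, while a shared pool only needs the peak of the sum.

    // Hypothetical demand figures, purely for illustration.
    public class PoolingSketch {
        public static void main(String[] args) {
            // Hourly CPU demand for three applications whose peaks do not coincide.
            int[][] demand = {
                {10, 40, 15, 5},   // app A peaks in hour 2
                {30, 10, 10, 45},  // app B peaks in hour 4
                {20, 15, 50, 10},  // app C peaks in hour 3
            };

            int sumOfPeaks = 0;                        // per-app provisioning
            int[] total = new int[demand[0].length];   // aggregate demand per hour
            for (int[] app : demand) {
                int peak = 0;
                for (int h = 0; h < app.length; h++) {
                    peak = Math.max(peak, app[h]);
                    total[h] += app[h];
                }
                sumOfPeaks += peak;
            }

            int peakOfSum = 0;                         // shared-pool provisioning
            for (int t : total) peakOfSum = Math.max(peakOfSum, t);

            System.out.println("Per-app provisioning: " + sumOfPeaks + " CPUs"); // 135
            System.out.println("Shared compute pool:  " + peakOfSum + " CPUs");  // 75
        }
    }

The exact numbers don’t matter; what matters is that uncorrelated peaks let you plan against aggregate demand, just as with the shared NAS box above.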

This is a whole new way of looking at the CPU, and therefore, the function of “compute.” The approach is gaining mainstream acceptance. For large symmetric multiprocessing (SMP) systems, the industry has reached two or four CPUs on a chip; for systems limited to one chip, tens of functional units in a single CPU. Some companies have announced future chips with as many as eight CPUs on a single chip. With 24 CPUs on a chip that can be used in an SMP system, Azul has already set the bar much higher. And that’s just the beginning!
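From the application’s side, nothing exotic is needed to ride this curve. An ordinary Java program written against the standard java.util.concurrent library (shipped with Java 5) sizes its worker pool to however many CPUs the runtime reports, whether that is 4 or 4,000. The sketch below is a minimal, generic example; it assumes nothing beyond the stock JDK and is not Azul-specific code.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ScaleWithTheBox {
        public static void main(String[] args) throws Exception {
            // Ask the JVM how many CPUs it can see; on a kilo-core pool this number gets big.
            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cpus);

            // Fan identical chunks of work out across however many CPUs are there.
            List<Future<Long>> results = new ArrayList<Future<Long>>();
            for (int i = 0; i < cpus; i++) {
                final int chunk = i;
                results.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long sum = 0;
                        for (long n = 0; n < 10000000L; n++) sum += n ^ chunk;
                        return sum;
                    }
                }));
            }
            for (Future<Long> f : results) f.get();   // wait for every chunk to finish

            System.out.println("Ran " + cpus + " chunks on " + cpus + " CPUs");
            pool.shutdown();
        }
    }

The same code runs unchanged whether the box underneath has two CPUs or 1,200; the pool just gets wider.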

Get ready for an era when you can order CPUs by the thousands. And get ready for the new language of that era. Do we say 2.5 kilo CPUs? Do we call this kilo-core or mega-core processing? And since it goes way past current multi-core technology, do we call it poly-core technology?

Here is a possible headline in 2005:

Poly-core Technology to Enable Kilo Core Processing. Happy Apps Hail Freedom!!

Happy 2005! ®

Azul Systems has created one of the most radical processor designs to date. Its Vega processor sits at the heart of a Java-crunching server due out in the first half of this year. More information on the company's upcoming products can be found here.
