Get ready to buy chips by the kilo

So begins the era of 'Unbound Compute'

Industry Comment We’ve spent 20 years assuming that memory and disk get added in large numbers while CPUs get added in small numbers. What if all three scaled the same way? Now, that would be a game-changing innovation, one that would spawn a new age for business applications and raise the bar on IT productivity and business efficiency.

Remember way back when PCs had a grand total of 64 kilobytes of memory? These days, we count the memory in small laptops in hundreds of megabytes and the memory in big servers in fractions of terabytes. The same thing happened to disk space: megabytes to petabytes. What’s next? Exa, zetta, and yotta.

But when it comes to CPUs, we still mostly dabble in single digits. An 8-way server feels like a pretty large system; 32-way, 64-way, and 200-way systems feel positively huge. Even when we scale out, anything beyond a couple of hundred CPUs begins to challenge our ability to manage and operate the systems. It’s no accident that they call these systems a “complex.”

A major shift is coming. Over the next few years, your ordinary applications will be able to tap into systems with, say, 7,000 CPUs, 50 terabytes of memory, and 20 petabytes of storage. In 2005, Azul Systems will ship compute pools with as many as 1,200 CPUs in a single standard rack (1.2 kilo cores! I like the sound of that!).

What would change about application design if you could do this? Well, think back to what applications were like when your PC had just 128K of memory and a 512KB hard drive. The gap in capability and flexibility between applications then and now is the scale of improvement we are talking about.

Photo of Shahin Khan, CMO at Azul Systems

If you could count CPUs the same way that you count memory, some problems would simply become uninteresting and others would transform in a qualitative way. And completely new possibilities would emerge.
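To make that shift in mindset concrete, here is a minimal, hypothetical Java sketch (Java being the workload Azul targets). Everything in it, from the class name to the score() method, is invented for illustration; the point is simply that when hardware threads number in the thousands, every independent work item can become its own task, rather than being rationed across a handful of carefully managed workers.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AbundantCpuSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: the JVM sits on a compute pool with on the order of
        // 1,000 hardware threads, so a pool this wide is not oversubscribed.
        ExecutorService pool = Executors.newFixedThreadPool(1_000);

        // With CPUs counted "like memory", each independent work item
        // simply becomes its own task; nothing needs to be rationed.
        List<Future<Long>> results = new ArrayList<>();
        for (long i = 0; i < 10_000; i++) {
            final long n = i;
            results.add(pool.submit(() -> score(n)));
        }

        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();  // gather results as tasks complete
        }
        System.out.println("total = " + total);
        pool.shutdown();
    }

    // Stand-in for real per-item work (a pricing run, a simulation step...).
    static long score(long n) {
        return n * n;
    }
}

On a conventional 4-way box, a 1,000-thread pool would mostly just thrash; on a thousand-CPU pool, the same straightforward code saturates the machine.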

Deployment and administration of applications would also change dramatically. Do you ever worry about how much storage an individual user might need? Probably not. You just install a NAS device with a terabyte of storage and let everyone share it. This approach works because no single user is likely to fill it up quickly, and you can plan storage capacity across all your users rather than for each one individually. Do you ever worry about the utilization level of an individual byte of memory? I hope not. You have so many bytes that you measure utilization at the aggregate level.

If your applications had hundreds of CPUs available in a miniaturized “big-iron” system, you could adopt the same strategy: stop planning capacity for each individual application, let all of your users share a huge compute pool, and plan capacity across many applications at once. In the process, you also fundamentally change the economics of computing. That’s exactly what Azul Systems is pioneering.
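As a hedged back-of-the-envelope illustration of that economic shift, the sketch below compares per-application provisioning with pooled provisioning. All the numbers are invented, not Azul figures; the shape of the calculation is the point: size the pool for the aggregate average plus headroom for the few peaks that actually coincide, rather than for the sum of every application's worst case.

// Hypothetical back-of-the-envelope comparison; every number below is
// invented for illustration.
public class PooledCapacitySketch {
    public static void main(String[] args) {
        int apps = 200;           // applications sharing the pool
        double avgCpus = 2.0;     // average CPU demand per application
        double peakCpus = 16.0;   // worst-case peak demand per application
        int coincidentPeaks = 10; // how many apps we expect to peak at once

        // Dedicated provisioning: each app must be sized for its own peak.
        double dedicated = apps * peakCpus;

        // Shared pool: size for the aggregate average, plus headroom for
        // the few peaks that actually coincide.
        double pooled = apps * avgCpus + coincidentPeaks * peakCpus;

        System.out.printf("dedicated: %.0f CPUs, pooled: %.0f CPUs%n",
                dedicated, pooled);  // prints: dedicated: 3200, pooled: 560
    }
}

The gap between the two figures, 3,200 versus 560 CPUs in this made-up case, is the idle capacity that per-application peak provisioning forces you to buy.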

This is a whole new way of looking at the CPU, and therefore at the function of “compute,” and it is gaining mainstream acceptance. The industry has reached 2 or 4 CPUs on a chip for large symmetric multiprocessing (SMP) systems and, for systems limited to one chip, tens of functional units in one CPU. Some companies have announced future chips with as many as 8 CPUs on a single chip. With 24 CPUs on a chip that can be used in an SMP system, Azul has already set the bar much higher. And that’s just the beginning!

Get ready for an era when you can order CPUs by the thousands. And get ready for the new language of that era. Do we say 2.5 kilo CPUs? Do we call this kilo core, or mega core, processing? And since it goes way past current multi-core technology, do we call it poly-core technology?

Here is a possible headline in 2005:

Poly-core Technology to Enable Kilo Core Processing. Happy Apps Hail Freedom!!

Happy 2005! ®

Azul Systems has created one of the most radical processor designs to date. Its Vega processor sits at the heart of a Java-crunching server due out in the first half of this year. More information on the company's upcoming products can be found here.
