Decades ago, computing was saved by CMOS. Today, no hero is in sight

HPC headman sees the future – and it ain't pretty

SC13 The general chair of the SC13 supercomputing conference thinks the semiconductor industry has reached a tipping point more radical – and more uncertain – than any it has faced in decades.

"We've reached the end of a technological era where we had a very stable technology," Bill Gropp, Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign, told a group of reporters at the conference in Denver, Colorado, on Monday.

"We're about to get back to where we were about 25 years ago, when the technology suddenly changed on us," he said. To understand how to deal with the challenges ahead, he believes that it's wise to look back in time to understand how to think about the future.

When Gropp started his career – "I was a graduate student when the Cray 1 came out [in 1976]" – the computing technology of the day wasn't based on the complementary metal oxide semiconductor (CMOS) transistors that dominate today's chip industry. Rather, it was emitter-coupled logic (ECL) that was the switch of choice. ECL was fast – for its day – but only when provided with plenty of power, which made it run extremely hot.

"One of the major patents for the Cray 1 was how to cool it," Gropp said. It soon became clear to the industry that a replacement technology would be necessary if the computer industry was going to evolve.

Fortunately an alternative – not as good, but more scalable – was available.

"There was this niche technology that wasn't very good called CMOS," he said. "But it was mature enough to build components. It was kind of slow – sort of okay."

Equally fortunately, there was a giant company willing to take a risk on this sort-of-okay technology. "IBM made a big gamble and decided to switch from ECL," Gropp said. "They adopted CMOS, built a machine that was slower than previous generation machines, but had a technology that was starting its ramp up."

And, to borrow a cliché, the rest is history.

Flash forward to 2013. Citing data from past years' ITRS roadmaps – international assessments of the future of semiconductor technology – Gropp was blunt. "Moore's Law is already over," he said, noting that the semiconductor industry is no longer doubling transistor densities every 24 months or so, as that cherished engineering imperative directs.

CMOS scaling is petering out, even if such long-awaited life-extenders as extreme ultraviolet lithography (EUV) ever see the economically feasible light of day. Sooner or later you simply run out of atoms, as Intel Fellow Mark Bohr once told The Register.

But things are different today than during those ancient times when ECL hit the wall, Gropp said. "The problem is that right now we don't have a CMOS. We don't have a technology that is ready to be adopted as a replacement for CMOS."

All is not lost, of course – when you have this many clever, motivated engineers working on a problem, it never is. "We have a number of candidates. It's not that we don't have anything," Gropp said, citing such possible CMOS replacements as RSFQ [rapid single flux quantum] superconducting logic and carbon nanotubes. But those and other candidates are, to put it kindly, not ready for prime time.

"We don't have anything that's at that level of maturity that will allow you to bet your company on as the next generation of hardware," he said. "That's the scary part.

So for the foreseeable future, it's CMOS – even though driving up performance with that technology is unlikely to be economically feasible for long.

"We'll probably have to babysit CMOS for longer than we'd like while we mature some other technology," Gropp said, adding that there's a distinct downside to babysitting: if CMOS can be kept alive through extreme means for years longer, it removes the pressure to risk one's company on attempting to bring a different technology to maturity.

"The early adopters may not be the ones to succeed financially," he said. "The early adopters may be the ones that do the trailblazing and die."

When asked whether he thought that computing performance – HPC performance, specifically, seeing as how the confab was at SC13 – would continue to increase at the same rate that it has in the past decades, Gropp's answer was succinct: "No." ®
