MIT boffins: Use software to fix errors made by decaying silicon

Unreliable kit as a resource for devs, not a reason to reach for the screwdriver


Smaller transistors mean more noise, more noise means more errors, and more errors mean the collapse of everything we know and love about computers, such as their infallible ability to run with perfect stability for years on end … right?

Well, perhaps not such perfection, but the basic problem remains: ever-shrinking microprocessor feature sizes will some day start bumping into physics and transistors will start producing increasing numbers of random errors.

That's leading some researchers, including a group at MIT, to propose simply letting the errors happen. As described in this media announcement, the idea is twofold: some bit errors can be ignored (who's going to notice a wrong pixel or two in a high-definition movie?), and others can be corrected in software.

In their full paper, the researchers present a programming language called Rely: its job is to work on the assumption that “soft errors” are going to emerge from transistors, and instead “enables developers to reason about the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware.”

To do this, Rely captures “a set of constraints that is sufficient to ensure that a function satisfies its reliability specification when executed on the underlying unreliable hardware platform”. In other words, it's designed to answer the question “what's the probability that the hardware will produce a result without errors, or within an acceptable error boundary?”, and if Rely assesses the result as meeting or beating the specified reliability, it passes the result as “correct”.
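The paper's analysis is considerably more sophisticated, but the core intuition – that the reliability of a result is, roughly, the product of the reliabilities of the unreliable operations that produced it – can be sketched in a few lines of Python. The per-operation figures and the 0.99 specification below are invented for illustration; they are not taken from the Rely paper.

```python
# Sketch: the probability that a computation is error-free is (roughly)
# the product of the per-operation reliabilities along the path that
# produced it. All figures here are hypothetical, for illustration only.

def chain_reliability(op_reliabilities):
    """Probability that every operation in the chain executed correctly,
    assuming operations fail independently."""
    result = 1.0
    for r in op_reliabilities:
        result *= r
    return result

# Suppose an unreliable multiply is correct 99.99 per cent of the time
# and an unreliable add 99.999 per cent of the time (invented numbers).
ops = [0.9999, 0.99999, 0.9999]   # mul, add, mul
achieved = chain_reliability(ops)

spec = 0.99   # the developer's reliability specification for this function
print(f"achieved reliability: {achieved:.6f}")
print("meets spec" if achieved >= spec else "violates spec")
```

The point of checking statically, as Rely does, rather than simulating as above, is that the developer gets a guarantee before the code ever runs on flaky silicon.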

As MIT's Martin Rinard puts it in the media statement: “Rather than making [unreliable hardware] a problem, we’d like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program.”

To function, Rely needs as a starting condition an assessment of the likely reliability of the underlying hardware. It also assumes that an error-free operation mode exists – whether by slowing down the hardware's clock speed, or by running it at higher power for a while – against which the use-case can be baselined.

What the researchers are pleased about is that they've found a simple way for programmers to flag instructions that can tolerate errors: they simply tag the instruction (or program) with a dot. When Rely encounters a dot (for example, an instruction written TOTAL = TOTAL + .INPUT), it knows to assess the output against the specified failure rates.

At the moment, the “dot-tagged” code is designed so that users can test the performance of a program against expectations, and refine their code by removing the dot-tags if they find no execution errors. In future work, Rely's developers want to allow the tagging of entire blocks of code, so that for example they can stipulate “only 97 per cent of the pixels in this frame of video have to be decoded correctly”.
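A block-level specification of that “97 per cent of pixels” kind can be illustrated with a small simulation. The per-pixel reliability figure below is an invented assumption, not a number from the Rely paper, and `decode_frame` is a hypothetical stand-in for a real decoder.

```python
import random

# Sketch: checking a block-level spec such as "at least 97 per cent of
# the pixels in this frame must be decoded correctly". The per-pixel
# reliability is an invented figure, for illustration only.

def decode_frame(num_pixels, per_pixel_reliability, rng):
    """Simulate an unreliable decode; return the fraction of pixels
    that came out correct."""
    correct = sum(1 for _ in range(num_pixels)
                  if rng.random() < per_pixel_reliability)
    return correct / num_pixels

rng = random.Random(42)
fraction_ok = decode_frame(100_000, per_pixel_reliability=0.995, rng=rng)
spec = 0.97
print(f"{fraction_ok:.4f} of pixels correct; "
      f"{'meets' if fraction_ok >= spec else 'violates'} the 97% spec")
```

The appeal for a video decoder is obvious: a handful of wrong pixels per frame is invisible to the viewer, so there's no need to pay the power or clock-speed cost of decoding them all perfectly.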

Of course, not everybody agrees that software-correcting-hardware is “the way of the future”. It would be utterly remiss of The Register to ignore the debate, especially when the counter-argument, from Microsoft distributed-systems researcher James Mickens, will probably stand as a classic of IT comedic writing.

“[John] discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO”.

The full article, published at Usenix, is here. Enjoy. ®
