MIT boffins: Use software to fix errors made by decaying silicon

Unreliable kit as a resource for devs, not a reason to reach for the screwdriver

Smaller transistors mean more noise, which means more errors, which means the collapse of everything we know and love about computers, such as their infallible ability to run with perfect stability for years on end … right?

Well, perhaps not such perfection, but the basic problem remains: ever-shrinking microprocessor feature sizes will some day start bumping into physics and transistors will start producing increasing numbers of random errors.

That's leading some researchers, including a group at MIT, to propose simply letting the errors happen. As described in this media announcement, the idea is twofold: some bit errors can be ignored (who's going to notice a wrong pixel or two in a high-definition movie?), and others can be corrected in software.

In their full paper, the researchers present a programming language called Rely: it works on the assumption that “soft errors” are going to emerge from transistors and, rather than trying to eliminate them, “enables developers to reason about the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware.”

To do this, Rely captures “a set of constraints that is sufficient to ensure that a function satisfies its reliability specification when executed on the underlying unreliable hardware platform”. In other words, it's designed to answer the question “what's the probability that the hardware will produce a result without errors, or within an acceptable error boundary?”, and if Rely assesses the result as meeting or beating that specification, it accepts the result as “correct”.
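To get a rough feel for that calculation, here is a minimal sketch in Python (Rely itself isn't shown in the article, and the per-operation reliability figures below are hypothetical): if each unreliable operation produces a correct result with some probability, a straight-line sequence of such operations is correct with probability at least the product of those figures, and that bound can be checked against the function's reliability specification.

# Hypothetical per-operation reliabilities for an unreliable hardware
# platform: the probability that each operation produces a correct result.
hardware_profile = {
    "read": 0.99999,   # unreliable memory read
    "add": 0.999999,   # unreliable integer add
}

def chain_reliability(ops, profile):
    # Lower bound on the probability that a straight-line sequence of
    # operations is fully correct: the product of the per-op reliabilities.
    reliability = 1.0
    for op in ops:
        reliability *= profile[op]
    return reliability

# A loop performing 1,000 unreliable reads and adds.
ops = ["read", "add"] * 1000
estimate = chain_reliability(ops, hardware_profile)

spec = 0.98  # the function's reliability specification
print(f"estimated reliability: {estimate:.4f}")
print("meets spec" if estimate >= spec else "violates spec")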

As MIT's Martin Rinard puts it in the media statement: “Rather than making [unreliable hardware] a problem, we’d like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program.”

To function, Rely needs as a starting condition an assessment of the likely reliability of the underlying hardware. It also assumes that an error-free operation mode exists – whether by slowing down the hardware's clock speed, or by running it at higher power for a while – against which the use-case can be baselined.
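As an illustration of where that starting figure might come from, here is a minimal sketch assuming a hypothetical error-free baseline mode and a simulated unreliable mode (the bit-flip model and the failure probability are invented): run the same workload in both modes, compare the outputs, and take the observed agreement rate as the reliability estimate.

import random

def run_error_free(x, y):
    # Baseline: the hardware's exact mode (e.g. reduced clock speed).
    return x + y

def run_unreliable(x, y, flip_probability=1e-4):
    # Stand-in for the normal, unreliable mode: occasionally flip one
    # bit of the result to model a transient soft error.
    result = x + y
    if random.random() < flip_probability:
        result ^= 1 << random.randrange(32)
    return result

def estimate_reliability(trials=100_000):
    # Empirical per-run reliability: the fraction of runs whose output
    # matches the error-free baseline.
    correct = 0
    for _ in range(trials):
        x, y = random.randrange(1 << 16), random.randrange(1 << 16)
        if run_unreliable(x, y) == run_error_free(x, y):
            correct += 1
    return correct / trials

print(f"estimated hardware reliability: {estimate_reliability():.5f}")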

What the researchers are pleased about is that they've found a simple way for programmers to flag instructions that can tolerate errors: they simply tag the instruction (or program) with a dot. If Rely encounters a dot (for example, if it sees that the instruction is written TOTAL = TOTAL + .INPUT), it knows to assess the output against the specified failure rates.
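One way to picture what the dot buys the analysis (a sketch only, not Rely's actual implementation, with an invented read-reliability figure): operands written with a leading dot contribute their unreliable-read probability to the estimate, while untagged operands are treated as exact.

UNRELIABLE_READ = 0.9999  # hypothetical reliability of one unreliable read

def expression_reliability(operands):
    # Multiply in the unreliable-read probability once per dot-tagged
    # operand; untagged operands contribute a factor of 1.
    reliability = 1.0
    for name in operands:
        if name.startswith("."):
            reliability *= UNRELIABLE_READ
    return reliability

# TOTAL = TOTAL + .INPUT  ->  one exact read, one unreliable read
print(expression_reliability(["TOTAL", ".INPUT"]))  # 0.9999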

At the moment, the “dot-tagged” code is designed so that users can test the performance of a program against expectations, and refine their code by removing the dot-tags if they find no execution errors. In future work, Rely's developers want to allow the tagging of entire blocks of code, so that for example they can stipulate “only 97 per cent of the pixels in this frame of video have to be decoded correctly”.
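A block-level specification like that boils down to a simple acceptance test: compare the possibly faulty output with an error-free reference and check that the fraction of matching pixels clears the stated threshold. A minimal sketch with toy frame data:

def frame_acceptable(decoded, reference, min_fraction=0.97):
    # Accept a decoded frame if at least min_fraction of its pixels
    # match the error-free reference decode.
    matching = sum(1 for d, r in zip(decoded, reference) if d == r)
    return matching / len(reference) >= min_fraction

# Toy example: a 100-pixel "frame" with two corrupted pixels (98% correct).
reference = list(range(100))
decoded = list(reference)
decoded[10] = decoded[42] = -1
print(frame_acceptable(decoded, reference))  # True: 98% >= 97%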

Of course, not everybody agrees that software-correcting-hardware is “the way of the future”. It would be utterly remiss of The Register to ignore the debate, especially when the counter-argument, from Microsoft distributed-systems researcher James Mickens, will probably stand as a classic of IT comedic writing.

“[John] discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO”.

The full article, published at Usenix, is here. Enjoy. ®
