MIT boffins: Use software to fix errors made by decaying silicon

Unreliable kit as a resource for devs, not a reason to reach for the screwdriver

Smaller transistors mean more noise means more errors means the collapse of everything we know and love about computers, such as their infallible ability to run with perfect stability for years on end … right?

Well, perhaps not such perfection, but the basic problem remains: ever-shrinking microprocessor feature sizes will some day start bumping into physics, and transistors will start producing increasing numbers of random errors.

That's leading some researchers, including a group at MIT, to propose simply letting the errors happen. As described in this media announcement, the idea is twofold: some bit errors can be ignored (who's going to notice a wrong pixel or two in a high-definition movie?), and others can be corrected in software.

In their full paper, the researchers present a programming language called Rely: its job is to work on the assumption that “soft errors” are going to emerge from transistors, and instead “enables developers to reason about the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware.”

To do this, Rely captures “a set of constraints that is sufficient to ensure that a function satisfies its reliability specification when executed on the underlying unreliable hardware platform”. In other words, it's designed to answer the question “what's the probability that the hardware will produce a result without errors, or within an acceptable error boundary?” If the computed probability meets or exceeds the specification, Rely accepts the result as “correct”.
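To make the arithmetic concrete: if each unreliable operation is assumed to succeed independently with some hardware-supplied probability, the reliability of a whole sequence is just the product of those probabilities, which is the flavour of calculation Rely's analysis automates. A minimal Python sketch (the figures and the chain_reliability helper are ours for illustration, not drawn from the paper):

    # Back-of-the-envelope model of quantitative reliability:
    # each unreliable op succeeds independently with probability r.

    def chain_reliability(op_reliabilities):
        """Probability that every operation in a sequence is error-free."""
        p = 1.0
        for r in op_reliabilities:
            p *= r
        return p

    ops = [0.999999] * 500   # assumed: 500 ops, each 99.9999% reliable
    spec = 0.999             # reliability the caller demands

    p = chain_reliability(ops)
    print(f"computed reliability: {p:.6f}")
    print("meets spec" if p >= spec else "fails spec")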

As MIT's Martin Rinard puts it in the media statement: “Rather than making [unreliable hardware] a problem, we’d like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program.”

To function, Rely needs as a starting condition an assessment of the likely reliability of the underlying hardware. It also assumes that an error-free operation mode exists – whether by slowing down the hardware's clock speed, or by running it at higher power for a while – against which the use-case can be baselined.

What the researchers are pleased about is that they've found a simple way for programmers to flag instructions that can tolerate errors: they simply tag the instruction (or program) with a dot. When Rely encounters a dot (for example, an instruction written TOTAL = TOTAL + .INPUT), it knows to assess the output against the specified failure rates.
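By way of illustration only, with plain Python standing in for Rely and an assumed per-read success rate, a dot-tagged read can be modelled as one that occasionally hands back a corrupted value, after which the program's observed failure rate can be measured against the specification:

    import random

    HW_READ_RELIABILITY = 0.999  # assumed per-read success probability

    def unreliable_read(value):
        # Mock of a dot-tagged read: usually correct, occasionally bit-flipped.
        if random.random() < HW_READ_RELIABILITY:
            return value
        return value ^ (1 << random.randrange(32))  # flip one random bit

    # Estimate how often TOTAL = TOTAL + .INPUT sums 100 inputs without error.
    trials, correct = 10_000, 0
    expected = sum(range(100))
    for _ in range(trials):
        total = sum(unreliable_read(x) for x in range(100))
        correct += (total == expected)

    print(f"observed success rate: {correct / trials:.4f}")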

At the moment, the “dot-tagged” code is designed so that users can test the performance of a program against expectations, and refine their code by removing the dot-tags if they find no execution errors. In future work, Rely's developers want to allow the tagging of entire blocks of code, so that for example they can stipulate “only 97 per cent of the pixels in this frame of video have to be decoded correctly”.
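Again as a rough, hypothetical stand-in rather than Rely syntax, such a block-level specification boils down to a per-frame acceptance test:

    import random

    PIXELS = 1920 * 1080
    PIXEL_RELIABILITY = 0.999   # assumed chance a pixel decodes correctly
    FRAME_SPEC = 0.97           # the stipulated 97 per cent

    # Mock decoder: count pixels that survive the unreliable pipeline.
    ok = sum(random.random() < PIXEL_RELIABILITY for _ in range(PIXELS))
    fraction = ok / PIXELS
    verdict = "frame accepted" if fraction >= FRAME_SPEC else "frame rejected"
    print(f"{fraction:.4%} of pixels correct -> {verdict}")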

Of course, not everybody agrees that software-correcting-hardware is “the way of the future”. It would be utterly remiss of The Register to ignore the debate, especially when the counter-argument, from James Mickens, a distributed-systems researcher at Microsoft, will probably stand as a classic of IT comedic writing.

“[John] discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO”.

The full article, published at Usenix, is here. Enjoy. ®
