MIT boffins: Use software to fix errors made by decaying silicon

Unreliable kit as a resource for devs, not a reason to reach for the screwdriver

Smaller transistors mean more noise, more noise means more errors, and more errors mean the collapse of everything we know and love about computers, such as their infallible ability to run with perfect stability for years on end … right?

Well, perhaps not such perfection, but the basic problem remains: ever-shrinking microprocessor feature sizes will some day start bumping into physics and transistors will start producing increasing numbers of random errors.

That's leading some researchers, including a group at MIT, to propose simply letting the errors happen. As described in this media announcement, the idea is twofold: some bit errors can be ignored (who's going to notice a wrong pixel or two in a high-definition movie?), and others can be corrected in software.

In their full paper, the researchers present a programming language called Rely: its job is to work on the assumption that “soft errors” are going to emerge from transistors, and instead “enables developers to reason about the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware.”

To do this, Rely captures “a set of constraints that is sufficient to ensure that a function satisfies its reliability specification when executed on the underlying unreliable hardware platform”. In other words, it's designed to answer the question “what's the probability that the hardware will produce a result without errors, or within an acceptable error boundary?” If the probability it computes meets or beats the programmer's stated specification, Rely accepts the result as “correct”.
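
To make that concrete, here's a back-of-the-envelope sketch in Python (ours, not MIT's; the operation names, the failure rates and the multiply-everything model are illustrative assumptions rather than Rely's actual machinery). The idea: given a hardware specification listing the probability that each kind of operation executes correctly, a conservative lower bound on a whole function's reliability is the product of the reliabilities of every operation it performs.

    # Hypothetical hardware reliability specification (illustrative numbers):
    # the probability that one instance of each operation executes correctly.
    HW_SPEC = {
        "add": 0.999999,
        "mem_read": 0.9999995,
    }

    def reliability_lower_bound(op_counts):
        """Chance that every operation performed succeeds."""
        bound = 1.0
        for op, count in op_counts.items():
            bound *= HW_SPEC[op] ** count
        return bound

    # A function performing 10,000 unreliable adds and 20,000 unreliable reads:
    r = reliability_lower_bound({"add": 10_000, "mem_read": 20_000})
    print(f"reliability lower bound: {r:.4f}")  # roughly 0.98
    # Rely-style acceptance: does the bound meet the stated specification?
    assert r >= 0.97

Rely itself reasons statically over the program text rather than counting operations in an execution trace, but the sketch shows why per-operation failure rates compound into a whole-function probability.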

As MIT's Martin Rinard puts it in the media statement: “Rather than making [unreliable hardware] a problem, we’d like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program.”

To function, Rely needs as a starting condition an assessment of the likely reliability of the underlying hardware. It also assumes that an error-free operation mode exists – whether by slowing down the hardware's clock speed, or by running it at higher power for a while – against which the use-case can be baselined.

What the researchers are pleased about is that they've found a simple way for programmers to flag instructions that can tolerate errors: they simply tag the instruction (or program) with a dot. When Rely encounters a dot (for example, an instruction written TOTAL = TOTAL + .INPUT), it knows to assess that output against the specified failure rates.
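
For the curious, here's a toy Python model (entirely our own; the bit-flip rate and the 32-bit word are assumptions for illustration, not anything from the paper) of what a dot-tagged operand such as .INPUT stands for: a read that very occasionally comes back with a flipped bit.

    import random

    BIT_FLIP_PROB = 1e-6  # assumed per-read soft-error rate, purely illustrative

    def unreliable_read(value):
        """Model a dot-tagged read such as .INPUT: now and then, flip one bit."""
        if random.random() < BIT_FLIP_PROB:
            return value ^ (1 << random.randrange(32))  # flip one random bit
        return value

    # The article's example, TOTAL = TOTAL + .INPUT, as a toy accumulation loop:
    total = 0
    for sample in range(1_000_000):
        total = total + unreliable_read(sample)  # only the operand is unreliable

Run long enough, the loop will sometimes produce a slightly wrong total – exactly the kind of outcome Rely is meant to bound rather than forbid.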

At the moment, the “dot-tagged” code is designed so that users can test the performance of a program against expectations, and refine their code by removing the dot-tags if they find no execution errors. In future work, Rely's developers want to allow the tagging of entire blocks of code, so that for example they can stipulate “only 97 per cent of the pixels in this frame of video have to be decoded correctly”.
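
A block-level specification of that sort boils down to something like the following check (a hypothetical runtime test of our own devising; Rely itself aims to establish such guarantees statically, before the program runs):

    def meets_block_spec(decoded, reference, min_fraction=0.97):
        """Does at least `min_fraction` of this frame match the reference?"""
        correct = sum(1 for d, r in zip(decoded, reference) if d == r)
        return correct / len(reference) >= min_fraction

    # A 100-pixel "frame" in which two pixels decoded wrongly still passes:
    reference = list(range(100))
    decoded = reference.copy()
    decoded[10] ^= 1
    decoded[55] ^= 1
    print(meets_block_spec(decoded, reference))  # True: 98 per cent correct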

Of course, not everybody agrees that software-correcting-hardware is “the way of the future”. It would be utterly remiss of The Register to ignore the debate, especially when the counter-argument, from Microsoft distributed-systems researcher James Mickens, will probably stand as a classic of IT comedic writing.

“[John] discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO”.

The full article, published at Usenix, is here. Enjoy. ®
