Why data storage technology is pretty much PERFECT
There's nothing to be done here... at least on the error-correction front
All present and correct
Without the combination of reliability and storage density that error correction allows, the things we use every day simply wouldn’t work. The images from our digital cameras would be ruined by spots that would make us prefer the grain of traditional film. Our hi-fi would emit gunshots, making the crackles of vinyl infinitely preferable, and the supermarket barcode reader would mistake the lady in the tweed coat for a tin of baked beans. And whether you could call flash cards or discs compact if they were a metre across is another issue.
How "compact" optical media might have emerged without Reed-Solomon error correction
With the help of error correction, recording densities will keep increasing until fundamental limits are reached: flash memory using one electron per bit; a disk where a single magnetised molecule represents a bit; an optical disc that uses ultra-short-wavelength light. Maybe it would be called Gamma-Ray. Or a quarkcorder called Murray. More likely, storage capacities will level out before those limits are reached. Once storage costs are negligible, there is no point in making them more negligible.
Making do with perfection
Information theory, first put on a scientific basis by Claude Shannon, sets theoretical limits on the correcting power of a coding system, in the same way that the laws of thermodynamics place a limit on the efficiency of heat engines.
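The analogy can be made concrete. Shannon's channel capacity caps the error-free bit rate of a noisy channel just as Carnot's formula caps a heat engine's efficiency. A minimal illustrative sketch (function names are my own, not from any standard library):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: maximum error-free bit rate (bit/s)
    achievable over a noisy channel of the given bandwidth and
    linear signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def carnot_efficiency(t_hot_k, t_cold_k):
    """Thermodynamic ceiling on heat-engine efficiency, for comparison:
    no real engine between these temperatures can do better."""
    return 1 - t_cold_k / t_hot_k

# A 3 kHz telephone line at 30 dB SNR (a linear ratio of 1000)
# tops out near 30 kbit/s, however clever the modem:
print(round(shannon_capacity(3000, 1000)))  # 29902

# A heat engine between 600 K and 300 K can never beat 50 per cent:
print(carnot_efficiency(600, 300))          # 0.5
```

No amount of engineering ingenuity moves either ceiling; the only question is how closely a real system can approach it.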
However, in the real world no machine reaches the theoretical efficiency limit. Yet Reed-Solomon error-correcting codes actually operate at a theoretical limit set by information theory: they are maximum distance separable, meeting the Singleton bound on minimum distance exactly. No more powerful code of the same length and rate can ever be devised, and further research is pointless.
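The sense in which a Reed-Solomon code is optimal can be put in numbers. An (n, k) RS code carries k data symbols in n transmitted symbols, and its minimum distance d = n − k + 1 is the largest any code can achieve, which lets it correct up to (n − k)/2 corrupted symbols. A short sketch (helper name is my own):

```python
def rs_correctable_errors(n, k):
    """For an (n, k) Reed-Solomon code the minimum distance meets the
    Singleton bound exactly: d = n - k + 1, the best possible for any
    code of that length and rate. It can therefore correct up to
    t = (d - 1) // 2 symbol errors anywhere in the block."""
    d = n - k + 1
    return (d - 1) // 2

# The Compact Disc's cross-interleaved Reed-Solomon coding (CIRC)
# uses (32, 28) and (28, 24) codes, each correcting 2 symbol errors:
print(rs_correctable_errors(32, 28))    # 2
print(rs_correctable_errors(28, 24))    # 2

# The (255, 223) code familiar from deep-space links corrects 16:
print(rs_correctable_errors(255, 223))  # 16
```

Interleaving then spreads a long burst of errors, such as a scratch on a disc, across many code blocks so that each block sees only a correctable handful.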
The degree of perfection achieved by error-correction systems is remarkable even by the standards of technology. I suspect this is because the theory of error correction is so specialised and arcane that politicians and beancounters have either never heard of it or daren’t mess with it, so it is left to people who know what they are doing.
In contrast, anyone can understand water flowing in a pipe, and that is why our drinking water system is in such a shambles, with much of it running to waste through leaks.
Although the coding limits of error correction have been reached, that does not mean that no progress is possible. Error correction and channel coding both require processing power to encode and decode the information and that processing power follows Moore’s Law.
Thus the cost and size of a coding system both diminish with time, or the complexity can increase, making new applications possible. However, if some new binary data storage device is invented in the future using a medium that we are presently not aware of, the error correction will still be based on Reed-Solomon coding. ®
John Watkinson is a member of the British Computer Society and a Chartered Information Technology Professional. He is also an international consultant on digital imaging and the author of numerous books regarded as industry bibles.