HGST: Nano-tech will double hard disk capacity in 10 years
Self-assembling molecules to boost drive density
HGST, the Western Digital subsidiary formerly known as Hitachi Global Storage Technologies, says it has developed a method of manufacturing hard-disk platters using nanotechnology that could double the density of today's hard drives.
The new technique employs a combination of self-assembling molecules and nanoimprinting, technologies previously associated with semiconductor manufacturing, to assemble patterns of tiny magnetic "islands," each no more than 10 nanometers wide – the width of about 50 atoms.
The resulting patterns are composed of 1.2 trillion dots per square inch, where each dot can store a single bit of information. That's roughly twice the density of today's hard-disk media, and HGST researchers say they are just scratching the surface of what can be achieved.
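As a rough back-of-envelope sketch of what that density implies (the platter geometry and usable recording area below are illustrative assumptions, not HGST figures), 1.2 trillion dots per square inch works out to over a terabyte per platter surface before formatting overhead:

```python
import math

# Rough capacity estimate from areal density. The platter geometry below is an
# illustrative assumption for a 3.5-inch drive, not an HGST specification.
DOTS_PER_SQ_INCH = 1.2e12          # 1.2 trillion dots (bits) per square inch

outer_radius_in = 1.8              # assumed usable recording band on a 3.5" platter
inner_radius_in = 0.6
usable_area_sq_in = math.pi * (outer_radius_in**2 - inner_radius_in**2)

raw_bits = DOTS_PER_SQ_INCH * usable_area_sq_in
raw_bytes = raw_bits / 8

print(f"Usable area per surface: {usable_area_sq_in:.1f} sq in")
print(f"Raw capacity per surface: {raw_bytes / 1e12:.2f} TB")
```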
"With the proper chemistry and surface preparations, we believe this work is extendible to ever-smaller dimensions," HGST fellow Tom Albrecht said in a statement.
The aforementioned self-assembling molecules are so called because they are built from segments of hybrid polymers that repel each other. When coated onto a specially prepared surface as a thin film, the segments line up into perfect rows, like magic.
Once so arranged, the tiny building blocks can be manipulated using other chip-industry processes to form the desired structures before being nanoimprinted onto the disk substrate.
Each of these nano-scale dots can store one bit of data in a space no larger than 50 atoms across (source: HGST)
HGST's key breakthrough was in assembling these otherwise-rectangular features into the radial and circular patterns necessary for spinning-disk storage, which the company says it achieved through careful preparation of the surface onto which the self-assembling molecules were applied.
The chip industry has long eyed nanolithography as a potential alternative to current photolithography processes, which have grown increasingly complex and expensive as the scale of semiconductor features has shrunk.
While it may one day be possible to assemble such complex components as microprocessors using this type of nanolithography, many researchers believe its more immediate use will be for applications such as disk drives or memory, which are simpler and more tolerant of the defects that inevitably occur when employing such an immature technology.
In fact, given HGST's innovations, the first commercial hard drives based on nanolithography may be just a few years away. According to HGST vice president Currie Munce, the company expects the technology to become cost-effective by the end of the decade. ®
100x the density also means 100x the theoretical throughput.
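Worth noting that sequential throughput tracks linear density along the track rather than areal density: if bits-per-inch and tracks-per-inch scale equally, transfer rate grows with the square root of the areal density gain, so a 100x density increase buys roughly 10x throughput unless all of the gain goes into bits per track. A rough sketch of that relationship (the RPM, radius, and isotropic-scaling assumptions are illustrative, not taken from any drive spec):

```python
import math

def sequential_throughput_mb_s(areal_density_bits_per_sq_in,
                               rpm=7200,
                               track_radius_in=1.5):
    """Very rough sequential-throughput model for a spinning disk.

    Assumes bits-per-inch along the track and tracks-per-inch scale
    equally, so linear density = sqrt(areal density). All parameters
    are illustrative, not from any real drive spec.
    """
    linear_density_bits_per_in = math.sqrt(areal_density_bits_per_sq_in)
    track_circumference_in = 2 * math.pi * track_radius_in
    bits_per_track = linear_density_bits_per_in * track_circumference_in
    revs_per_second = rpm / 60.0
    return bits_per_track * revs_per_second / 8 / 1e6  # MB/s

base = 1.2e12                                    # the ~1.2 Tb/in^2 figure above
print(sequential_throughput_mb_s(base))          # baseline, ~150 MB/s
print(sequential_throughput_mb_s(100 * base))    # 100x areal density -> ~10x throughput
```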
Different technologies have different characteristics that suit different use cases. SSDs, for example, are very good at surviving being dropped, and they have very fast seek times, but write speed and MTBF are far less impressive.
I have no idea whether the hard disk will go the way of the Zip drive, but even if no Windows PC ships with a spinning disk, it is a bit unimaginative to write off the whole technology.
I suspect it's not about being able to do it, it's about being able to do it at a price that's competitive with current storage methods. Look how long it's taken SSDs to get a foothold in a market full of spinning bits of metal; to actually succeed it needs to be bought by your average Joe in your average computer.
Re: are just scratching the surface
There's a very good reason why multiple read/write heads are not available on HDDs (and it has been tried): the vibration introduced by moving one read/write head disrupts the others. I'd also imagine that, given the heads "fly" incredibly close to the surface of the disc, aerodynamic interference could be an issue too.
Multiple read/write heads were used in the dim and distant past on a type of disk that used fixed heads (rather like an alternative to drums). These were used as paging store on mainframes back in the 1970s, but were inherently very expensive and had low capacity – even compared with moving-head drives of the same era. I have a vague recollection that ICL's ill-fated CAFS (Content Addressable File Store) of the 1970s made use of multiple read heads. It used logic at the disk-head controller level to perform searches on data content, but improvements in processor speed meant it was a commercial failure.
(nb. the integration of search logic into disk controllers was once commonplace in the form of CKD – count-key-data – drives, which could embed certain searchable data into key fields before every data block. Typically this was used for things such as index data for indexed sequential files, and the programs to search for such data could be despatched against a channel controller using a very limited and special "channel program". To this day, IBM mainframe disk controllers have to emulate this function as "legacy" access methods require it. The norm used to be that programs did not access storage by going through a file abstraction layer, but assembled the channel program directly, as this saved CPU cycles. This still happens in "legacy" programs, but the O/S has long had a role in "vetting" the channel programs for security reasons.
CKD techniques have long been replaced by software and logical block addressing, but the traces still remain)...
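For anyone unfamiliar with CKD, here is a toy sketch of the idea described above: each record carries a count field, an optional key field, and a data block, and it is the controller rather than the CPU that scans a track for a matching key. The record layout and field names below are simplified illustrations, not the actual IBM CKD format or channel-command set.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class CkdRecord:
    """Toy count-key-data record: a count field identifying the record,
    an optional searchable key field, and the data block itself."""
    cylinder: int
    head: int
    record: int            # together these mimic the 'count' field (CCHHR)
    key: Optional[bytes]   # searchable key, e.g. an index entry
    data: bytes

def search_key_equal(track: List[CkdRecord], key: bytes) -> Optional[CkdRecord]:
    """Emulates the controller scanning a track for a matching key,
    much as a channel program's search-key loop would, without the
    CPU touching each record."""
    for rec in track:
        if rec.key == key:
            return rec
    return None

# Example: an index track where each record's key is the highest key
# in the data block it points to (names are purely illustrative).
track = [
    CkdRecord(10, 0, 1, b"CUST0499", b"...block of customer records..."),
    CkdRecord(10, 0, 2, b"CUST0999", b"...block of customer records..."),
]
hit = search_key_equal(track, b"CUST0999")
print(hit.record if hit else "not found")
```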