Original URL: http://www.theregister.co.uk/2013/09/09/feature_history_of_enterprise_storage/

Enterprise storage: A history of paper, rust and flash silicon

Has anything really changed since the punch card era?

By Chris Mellor

Posted in Storage, 9th September 2013 10:40 GMT

The story of data storage is one of ever-decreasing circles. What started as holes punched into cards, then into tape, became hard disks, floppy disks, then hard shiny disks, until eventually circles are no longer involved at all.

It is also the story of transitions from one medium to another as the IT industry searched for ways to hold data and deliver access to it fast enough to keep processors busy.

Here's a rough timeline, showing the overlap between key technologies.

Storage is required by a computer because its memory loses all its data when it's switched off; DRAM is volatile. When a computer is first switched on its memory is empty, and both code, which tells the computer what to do, and data, the information that is processed, need loading into memory from a persistent - non-volatile - store.

The punched card era

Punched cards, the original computer storage device, were first introduced around the 1930s and survived right through to the 1980s. They were rectangular cards with 80 columns and 12 rows. Each column represented one digit or character, encoded by the presence or absence of holes in its rows. IBM used 10 rows for data while the upper two rows in each column were zone rows holding other information. The cards were passed through a reader which shone lights at each column; where there was a hole, the light passed through and was detected.
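
To make that concrete, here is a minimal Python sketch of how a single column might be decoded, using a simplified subset of the Hollerith-style card code. The function name and the restriction to digits and letters are ours, purely for illustration.

```python
# A minimal, illustrative decoder for one 12-row card column using the basic
# Hollerith scheme: a lone punch in rows 0-9 is that digit; zone 12 plus a
# digit 1-9 gives A-I, zone 11 plus 1-9 gives J-R, zone 0 plus 2-9 gives S-Z.
# Real card codes (BCDIC, EBCDIC) defined many more combinations than this.

def decode_column(punches):
    punches = set(punches)                     # e.g. {"12", "1"} for the letter A
    if len(punches) == 1 and punches <= set("0123456789"):
        return punches.pop()                   # single punch in a digit row: 0-9
    if len(punches) == 2:
        for zone, first, low in (("12", "A", 1), ("11", "J", 1), ("0", "S", 2)):
            if zone in punches:
                rest = (punches - {zone}).pop()
                if rest.isdigit() and low <= int(rest) <= 9:
                    return chr(ord(first) + int(rest) - low)
    return "?"                                 # combination not covered by this sketch

print(decode_column({"5"}))                    # 5
print(decode_column({"12", "3"}))              # C
print(decode_column({"0", "2"}))               # S
```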

An IBM 711 punched card reader could read 150 cards a minute, taking in 72 of a card's 80 columns. With 72 x 10 bits per card that meant 150 x 720 bits per minute could be read - 108Kbits/min, which was quick at the time.

The 711's control panel wiring determined how the punched card data was read into the host's electrostatic memory. Typically a reader or readers would be attached to a host system - the 711 served IBM's 700-series machines, while later readers fed mainframes such as the System/360.

Various coding schemes were created, such as 6-bit BCDIC (Binary Coded Decimal Interchange Code) and EBCDIC (Extended BCDIC), so that groups of holes in a column could represent signed digits and then alphabet characters.
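
As an illustration of how such codes look at the byte level, Python's standard library happens to ship a codec for one common EBCDIC variant (cp037), which makes it easy to compare EBCDIC values with ASCII for the same text - a quick, purely illustrative check:

```python
# Compare ASCII and EBCDIC byte values for the same text.
# cp037 is Python's built-in codec for one widely used EBCDIC variant.
text = "HELLO 123"

ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp037")

print(ascii_bytes.hex(" "))   # 48 45 4c 4c 4f 20 31 32 33
print(ebcdic_bytes.hex(" "))  # c8 c5 d3 d3 d6 40 f1 f2 f3
```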

Punched card

Punched card? That'll do nicely

IBM's card code formats were adopted by the other mainframe computer suppliers of the era.

As computers grew in capacity, storing the cards themselves became an issue, as did the speed at which they could be read and written - not to mention the problem of dropping the boxes that held them, or even spilling coffee on them. Paper tape reels became an alternative, especially for the first minicomputers such as Digital Equipment's PDP-8.

The paper tape rollout

Paper tape has a continuous sequence of rows and columns along its length, with the presence or absence of punched circular holes at each row/column intersection signalling a binary value. Paper tape came to minicomputers from its use in teleprinters, with data encoded using the ASCII scheme - American Standard Code for Information Interchange.

It was stored on reels or in fanfold form in boxes, streamed through optical readers and written with tape punch machines. As the tape streamed through a reader, the system had to know exactly where it was on the tape so that it knew what it was reading. That meant the tape had to move precisely under the head so that the data tracks were read without anything being missed; sprocket holes made sure the tape was always in the right place.
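
As a rough picture of the layout (a simplified sketch that ignores the parity channel some tape formats carried), each character occupies one row of holes across the tape, one hole per 1 bit of its code:

```python
# Show the hole pattern ('o' = hole, '.' = no hole) a character would
# make across an 8-channel paper tape, using its ASCII code.
def tape_row(ch):
    bits = format(ord(ch), "08b")
    return bits.replace("1", "o").replace("0", ".")

for ch in "HI!":
    print(ch, tape_row(ch))
# H .o..o...
# I .o..o..o
# ! ..o....o
```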

Paper tape

Digital paper tape

A Digital Equipment DEC Type PC09C paper tape reader read in data at 300 characters per second, roughly equivalent to 18KBytes/minute, compared with the punched card speed above of 108Kbits/minute.
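
The two figures are quoted in different units - bits for cards, bytes (characters) for tape - so a quick, purely illustrative normalisation (treating eight bits as a byte just for comparison) makes the gap clearer:

```python
# Normalise the quoted speeds to bytes per minute for a rough comparison.
card_bits_per_min = 150 * 720          # IBM 711: 150 cards/min x 720 bits per card
card_bytes_per_min = card_bits_per_min / 8

tape_chars_per_sec = 300               # DEC PC09C paper tape reader
tape_bytes_per_min = tape_chars_per_sec * 60   # one character ~ one byte

print(card_bytes_per_min)   # 13500.0  (~13.5 KB/min)
print(tape_bytes_per_min)   # 18000    (~18 KB/min)
```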

Paper tape was smaller and more convenient than punched cards but it could rip. And as minicomputers grew in capacity, more tape was needed to store bigger programs and more data - the 12-bit PDP-8 giving way to the 16-bit PDP-11, which gave way to the 32-bit VAX in Digital's product range.

Each increase in the bit-size increased the computer's addressing capability, meaning bigger computer programs and more data could fit into the system's memory. The first VAX, the 11/780 system, had a 4GB address range, much larger than the PDP-11's 64KB; the 12-bit PDP-8 could directly address just 4K words.
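
The address ranges follow directly from the address widths; a quick calculation shows the jump at each step:

```python
# Directly addressable locations for each address width.
for name, bits in [("PDP-8 (12-bit)", 12), ("PDP-11 (16-bit)", 16), ("VAX-11/780 (32-bit)", 32)]:
    print(name, 2 ** bits)
# PDP-8 (12-bit) 4096             -> 4K words
# PDP-11 (16-bit) 65536           -> 64KB
# VAX-11/780 (32-bit) 4294967296  -> 4GB
```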

Paper tape speed was inadequate for these larger memory computers and a newer, faster technology was needed - and so digital magnetic tape was born.

Tape develops a magnetic allure

Invented as a sound-recording medium by the German Fritz Pfleumer in 1928, magnetic tape was first used as a digital storage medium on the UNIVAC 1 computer in 1951.

Magnetic tape still has a continuous set of rows along its length, now called tracks, with a sequence of intersecting columns. Digital (binary) signals are still located at row/column intersections but are now recorded using a magnetic field's direction: north or south; plus or minus; one or zero. The recording density is vastly superior to that of paper tape and punched cards: 128 characters per inch across eight tracks on the UNIVAC 1.

The tape was stored on circular reels and read by a head as it was passed from one reel to another.

Digital tape

A 10.5-inch, 9-track tape reel

Magnetic tape provided important new features. As the size of the stored signals, bits, was progressively reduced, more data could be stored in the same area to give increased areal density. What is more, magnetic tape could pass through the read-heads quicker than paper tape, thereby increasing data access speed.

The processes of reading and writing data could also be carried out in a single head unit. This sped up data-writing so that computers could output information to storage very much faster, making it possible to back up data to tape for long-term storage.

At the same time, the mechanical elements of a tape drive could also be progressively shrunk so that the original floor-standing units gave way to drives mounted on the front of computers.

A huge number of different tape sizes and formats were developed for mainframe computers and minicomputers, wiping out paper tape. However, data access speed was limited by the fact that a tape drive has only a single read/write head and the tape is fed past it sequentially: to get at any individual file, the tape had to be wound along until that file reached the head.

Solving this problem created the first example of networked or shared storage. A group of drives and racks of tape reels were collected together in tape libraries, shared by several computers, leading to the glory days of SAN (Storage Area Networks) and NAS (Network-attached Storage) that we are still enjoying today.

StreamLine 8500

Oracle Streamline tape library

But it wasn't all good news. With tape, error checking and correction became more important, as binary signals could be degraded, and various schemes were devised either to confirm a binary value or to flag an error so that the data could be re-read or replaced. Such schemes became progressively more sophisticated as the bit area shrank and became more prone to error.
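
The simplest such scheme is a parity bit - nine-track tape, for example, devoted its ninth track to a parity bit covering the eight data bits in each frame - which lets a single flipped bit be detected, though not corrected. A minimal sketch:

```python
# Even parity across an 8-bit frame: the parity bit makes the total number
# of 1s even. A single flipped bit then shows up as an odd count.
def parity_bit(byte):
    return bin(byte).count("1") % 2        # 1 if the byte has an odd number of 1s

def frame_ok(byte, stored_parity):
    return parity_bit(byte) == stored_parity

data = 0b01001000                          # the letter 'H'
p = parity_bit(data)                       # 0 - 'H' already has an even number of 1s

print(frame_ok(data, p))                   # True: frame reads back cleanly
print(frame_ok(data ^ 0b00000100, p))      # False: one bit flipped in transit
```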

Tape technology advanced at a steady rate, increasing data capacity and data transfer speed. The accompanying chart shows how the LTO format increased capacity and speed through generations, from LTO-1 in 2000 to LTO-6 at the end of last year:

LTO Generations capacity and speed

As computer processors became faster and faster, the time it took to locate or write data on tape became more of a problem. But this mismatch between processor speed and data storage speed has been a constant factor in computer history and helped spur every major storage technology development.

Each time an advance in computer data storage technology solves the pressing problems of the preceding storage technology, fresh problems come along that lead in turn to its replacement. So it was with tape, when IBM thought to flatten it and wrap it around itself in a set of concentric tracks on the surface of a disk.

Disk sends storage into a spin

IBM announced the 350 RAMAC (Random Access Method of Accounting and Control) disk system in 1956. Having a flat disk surface as the recording medium was revolutionary but it needed another phenomenally important idea for it to be practical. Instead of having fixed read/write heads and moving tape as with tape drives, we would have moving read/write heads and spinning disk surfaces.

IBM 350 RAMAC

The IBM 350 RAMAC disk system

Suddenly, it was much quicker to access data. Instead of waiting for a tape to sequentially stream past its head to the desired data location, a head could be moved to the right track. This enabled fast access to any piece of data; data selected, as it were, at random. And so we arrived at random access.

RAMAC had two read/write heads which were moved up or down the stack to select a disk and then in and out to select a track. Disk technology then evolved to having a read/write head per recording surface, eliminating the delay of moving heads up and down the stack to pick a platter. Multiple moving read/write heads enabled random data access and solved the tape I/O wait problem at a stroke. It was an enormous advance in data storage technology, and keeping a disk head moving across a disk surface while staying super-close to it has been compared with flying a jumbo jet a few feet above the ground without crashing.

RAMAC provided 5MB of capacity using 6-bit characters and 50 x 24-inch disks, with both sides being recording surfaces. Disk drives rapidly became physically smaller and areal density increased so that, today, we have 4TB 4-platter drives in an enclosure roughly the size of four CD cases stacked one on top of the other.

A key development was the floppy disk, a single bendable platter held in an enclosure and inserted into a floppy disk drive; a throwback to tape days in that respect, but they were cheap, so cheap. Personal computer pioneers seized on them for data storage, but they were only a stopgap: once the 3.5-inch hard disk drive format, its drive bay and the SCSI (Small Computer System Interface) access protocol were crafted, PC and workstation owners wanted hard drives for their superior data access speed, reliability and capacity.

Apple II with monitor and floppy-disk drives

Apple II with two floppy disk drives

The 3.5-inch format became supreme and is found in all computers today: mainframes, servers, workstations and desktop computers.

The incredible rise in PC ownership drove disk manufacturing expansion, supplemented by the rise of network disk storage arrays. Applications in servers were fooled into thinking they were accessing local, directly-attached disks, when in fact they were reading and writing data on a shared disk drive array.

These were accessed either with file protocols (filers or NAS) or as SAN arrays - raw disk blocks - by applications such as databases. Typically, SANs were accessed over dedicated Fibre Channel links, whereas filers were accessed over Ethernet, the Local Area Network (LAN).

Disk drive manufacturing became steadily more expensive as recording and read/write head technologies grew more complex. Those companies that were best at organising their component supply chains, building cost-effective and reliable products, managing their costs, and selling and marketing their products, were able to make profits and fund their operations.

The others became less profitable, fell into debt and collapsed or were taken over. At one time there were more than 200 disk drive makers. Now there are just three: Seagate, Toshiba and Western Digital/HGST.

Seagate ST-412 disk drive

Seagate's ST-412 disk drive

The rise of disk drive arrays and their falling cost/GB meant that they could take over backup data storage duties, especially if repetitive data in backup data sets could be removed by deduplication technology. This was a devastating blow to the tape backup industry, leading to rapid format consolidation. Now there are effectively only two mainframe tape suppliers with proprietary data formats left, IBM and Oracle, and effectively just one server computer format, LTO. This is the Linear Tape-Open format, now in its sixth generation, LTO-6. The LTO format is owned and developed by a three-member consortium: HP, IBM and Quantum.

While disks were loaded with more and more data, and technology enabled them to be read and written faster and faster, one huge problem remained. Disks could, and did, fail, taking all their stored data with them. The problem was initially avoided by keeping extra copies of the data. RAID technology - Redundant Arrays of Independent Disks - then processed the data mathematically so that less extra capacity was needed, lowering the cost of data protection. Different RAID schemes include RAID 0, 1, 2, 4, 5 and 10, each optimised for data access speed, data protection surety, capacity, or some combination of the three.

With RAID schemes the contents of a failed drive are rebuilt on another drive using the RAID data held on the surviving drives. But such RAID rebuilds took longer and longer as drive capacities grew.
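
RAID 5, for instance, stripes data across drives together with an XOR parity block, so the contents of any one lost drive can be recomputed from the survivors. A minimal sketch of the idea (real arrays rotate parity across the drives and work on far larger blocks):

```python
# XOR parity across a stripe: parity = d0 ^ d1 ^ d2. If any one block is lost,
# XORing the remaining blocks (including the parity block) regenerates it.
def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d0, d1, d2 = b"disk", b"data", b"here"
parity = xor_blocks([d0, d1, d2])      # written to the parity drive

# The drive holding d1 fails; rebuild it from the survivors plus parity.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)                   # True
```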

Hitachi T7K500 hard disk drive

HGST 7K500 3.5-in hard disk drive

As the number of drives in arrays grew to the hundreds and then past the thousand mark, it became necessary to protect against a second drive failure when recovery from a first drive failure was under way, and so the RAID 6 scheme was devised.

Disk drive growth rapidly increased the total amount of stored digital information; researchers reckon digital storage capacity passed analogue capacity in 2002, and the rate of increase continues to accelerate.

IDC Digital Universe growth

IDC Digital Universe growth in Exabytes (IDC/EMC June 2011)

Because the speed at which the disk spun was critical to how quickly data could be read or written, manufacturers pushed their technology to deliver ever faster spin rates. Disk speeds continued to increase until they reached an effective limit of 15,000rpm; any faster and the platters risk flying apart under centrifugal force and vibration. Lower-speed drives, which could store more data, may rotate at just 5,400rpm. The time taken to read or write all the data on a drive therefore went up as drive capacity increased, and this became more and more of a limiting factor.
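
Spin speed matters because, on average, the drive must wait half a revolution for the wanted sector to come around under the head; a quick worked calculation for the speeds mentioned above:

```python
# Average rotational latency = half a revolution, in milliseconds.
for rpm in (5400, 15000):
    rev_ms = 60_000 / rpm              # time for one full revolution in ms
    print(rpm, round(rev_ms / 2, 2), "ms")
# 5400 5.56 ms
# 15000 2.0 ms
```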

The problem is that computer processor speeds, impelled by Moore's Law, rose and rose and rose, leaving disk I/O speed far behind and utterly unable to catch up. A move to 2.5-inch drives meant that more could be put in a disk drive enclosure to increase the overall I/O rate, but not dramatically so.

This inability to keep up with host computer speeds provided the backdrop for what we are seeing now - the rise of solid-state NAND flash storage, with no moving heads, in fact no read/write heads at all.

NAND flash

Invented by Dr Fujio Masuoka at Toshiba around 1980, NAND flash is organised as cells grouped into pages and blocks: data is written a page at a time and erased a block at a time. So it is not, strictly speaking, a random access device.

Flash Memory cell schematic

Schematic diagram of flash memory cell

One type of flash, NOR (named after the Not Or logic concept) is used in mobile phones and other devices as a form of read-only memory. NAND (Not And logic) is used for data storage in cameras, USB sticks, tablet computers, notebooks and a vast range of other devices needing tiny, low-power and persistent data storage with faster access than disk.

A semiconductor technology with no mechanical parts, it is considerably smaller than disk for equivalent amounts of storage. A 1TB USB stick fits in the palm of your hand and is much lighter than the equivalent 3.5-in hard disk drive, which is the size of a stack of four CD cases.

A quick look at optical storage

CDs and DVDs have also been used for archival data storage but their widespread use has been held back by their inherent disadvantages. They have relatively low capacity and slow write speeds compared with hard disk drives, and tape is cheaper than both disk and CD/DVD optical media for storing bulk data. Disk is also faster for writing and reading backup data. All of which means that optical disks are only found today in niche markets.

Meanwhile, there have been persistent attempts to develop holographic storage technology as a way of providing archival duration and high-capacity optical technology. They have all failed. So much so that it's now feasible to consider archival storage using very cheap flash memory, TLC flash.

All types of flash

Flash comes in three varieties. Single-level cell (SLC) flash stores one bit per cell and is the fastest and most expensive form of flash. Its problem is that it has a limited number of Program/Erase (P/E) cycles before it dies.

MLC Flash endurance

Flash with two bits per cell is called multi-level cell, or MLC, flash. It has a higher capacity than SLC flash but is slower and, although cheaper, has a shorter working life. Three-bit flash, called TLC (triple-level cell), is cheaper still, slower and shorter-lived than MLC flash, as the graphic shows, and is used in digital cameras and similar devices where a P/E cycle limit of 2,500 or so is acceptable.
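
The trade-off follows from how many distinct charge levels each cell must hold apart: every extra bit per cell doubles the number of levels, which is why density rises while speed and endurance fall. A quick illustration:

```python
# Each extra bit per cell doubles the number of charge levels the cell
# must distinguish, which is why density rises while endurance falls.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(name, bits, "bit(s)/cell ->", 2 ** bits, "levels")
# SLC 1 bit(s)/cell -> 2 levels
# MLC 2 bit(s)/cell -> 4 levels
# TLC 3 bit(s)/cell -> 8 levels
```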

While flash endurance can be lengthened by over-provisioning - providing spare unused cells to replace worn-out ones - there is another problem. The only way to increase flash's areal density is to shrink the cell size. But as we do this, moving from a 40nm-class cell size (49-40nm range) to 30nm class, 20nm class and below, the flash cell's endurance also diminishes. Today's leading-edge flash is 16nm. There may be one smaller iteration, but then some form of 3D NAND, stacking cell layers one atop another, will be needed, and Samsung has already announced its 3D V-NAND drive.

Flash memory die

Flash memory for desktop and server computers is mostly packaged in disk drive bay-sized cases - typically 2.5-in - and called a Solid State Drive, or SSD. It uses disk-derived access interfaces, SATA or SAS.

Computers using flash memory can access data in microseconds whereas data access with disk drives needs milliseconds because the disk's head has to be moved to the right track. As a result, computers using flash can run applications faster, supporting more virtual machines in virtualised servers.

Another way of providing flash is to connect it directly to the internal PCIe bus, which provides even faster access to its data as no disk I/O conversion is needed. Such PCIe flash card memory is rapidly growing in popularity, with 16TB cards coming next year from Micron.

SSDs or flash built into cards, such as Violin Memory's VIMMs, can be used to create networked all-flash arrays which are faster-reacting than hard disk drive arrays, need approximately a tenth or less of the physical space, and a tenth of the power. Deduplication can be used to increase their effective capacity and provide a cost/GB of stored data comparable to or better than disk drive arrays.
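
The effect of deduplication on effective capacity and cost per gigabyte is simple arithmetic; a rough illustration using entirely hypothetical numbers (real reduction ratios vary widely with the data):

```python
# Effective capacity and cost/GB after deduplication (illustrative figures only).
raw_tb = 10                    # hypothetical raw flash capacity
dedupe_ratio = 5               # assumed 5:1 data reduction
price = 50_000                 # hypothetical array price, in dollars

effective_tb = raw_tb * dedupe_ratio
print(effective_tb)                             # 50 (TB of effective capacity)
print(round(price / (effective_tb * 1000), 2))  # 1.0 (dollars per effective GB)
```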

Disk drive array manufacturers are now using SSDs to store the most active data, as well as flash caches in their controllers to speed data access. It is likely that most primary data currently stored in disk drive arrays will move over time to all-flash arrays.

Disk drives will be used to store secondary or nearline data. Hybrid arrays use flash for primary data and disk for secondary data, offering a halfway house between all-flash arrays and all-disk arrays, being less expensive than the former and faster than the latter while still having disk-levels of capacity.

Over time flash technology will run out of steam and a replacement will be needed. Phase-Change Memory and Resistive RAM are currently seen as the leading candidates, both promising DRAM-like speed, full random access and non-volatility.

Today, the space once taken by a single hole in a paper tape, signifying one bit, can hold megabits or even gigabits. Storage used to be a technology needing fractions of an inch or centimetre to store visible bits. Now it uses nanometres to store invisible bits, accessed at speeds so fast we can barely comprehend them.

Storage needs to carry on getting faster, smaller, cheaper, and more reliable if we are to carry on advancing our use of computers. Truly we have been in, and are in, a store-age as much as a computer age. ®

Bootnote

We have concentrated only on developments in the IT storage industry and have not looked at technology developments outside IT that were adopted and taken on board - such as punched cards used in Jacquard looms before their use in computing, paper tape in teletypes and so forth.