Tracking the history of magnetic tape: A game of noughts and crosses
Part Two: 'Tis a spool's paradise, t' be surrrre
Feature America began its love affair with tape following WWII, when Jack Mullin, serving in the US Army Signal Corps, dropped in on a German radio station at Bad Nauheim and returned home with two portable Magnetophons and 50 reels of tape.
News of his 1947 Hollywood equipment demos reached entertainer Bing Crosby who recognised the potential of being able to record high fidelity content for broadcast without the hassle of scheduled appearances.
Bing Crosby Ampex tape recorder advertisement from 1949
At the time, the majority of US radio was live, as playback of relatively poor-sounding discs was deliberately limited, but Mullin’s customised Magnetophon recorders won the day. As a consequence, Mullin became Crosby’s chief engineer and the star put in a $50,000 order with Ampex to build new models based on Mullin’s modified machines.
The result, the Ampex Model 200A, went on sale in April 1948, recording on the newly developed 3M Scotch 111 tape. Bing Crosby gave Les Paul an early production model and the musician went on to invent sound-on-sound recording. Paul’s pioneering work allowed him to listen to previously recorded parts while simultaneously adding new performances. Based on Paul’s innovation, Ampex began producing three-track recorders.
With the birth of analogue multitrack recording the music industry would take care of itself for decades, increasing the track count before eventually going digital, like everything else.
Jack Mullin [PDF] had other ideas for those early Ampex recorders, though, and pioneered their adoption in data and instrumentation recording in military facilities with modified Model 300 machines back in 1949.
Keep the noise down
Tape itself developed in a number of ways. The wider each track was, the better the signal-to-noise ratio. A 2-inch tape with 16 tracks is going to perform much better than a 1-inch tape with the same track count; you simply get more tape area to magnetise per track. The space between tracks is critical too, as crosstalk can result if they’re too close together.
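As a back-of-envelope sketch of that geometry (the figures are the article's own examples, the per-track division is a simplification, and guard bands between tracks are ignored):

```python
# Rough comparison of the tape area each track gets - and hence the
# signal-to-noise headroom - for two tapes with the same track count.
# Guard bands would eat into both figures, so this is illustrative only.

def area_per_track(tape_width_in, tracks, length_in=1.0):
    """Tape area (square inches) per track, per inch of tape length."""
    return (tape_width_in / tracks) * length_in

wide = area_per_track(2.0, 16)    # 2-inch tape, 16 tracks
narrow = area_per_track(1.0, 16)  # 1-inch tape, 16 tracks

assert wide == 2 * narrow  # twice the magnetisable area per track
```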
Track assignments aside, tape had other problems to do with handling in the manufacturing process as well as how it gathered on a reel. Back-coating treatments not only helped the operator to recognise the recording side of the tape, but were also developed to reduce static, friction and deliver more even tape packing when spooled.
Another consequence of backcoating was that it made the tape slightly thicker, which could mitigate the problem of print-through. On a reel, recorded signals can magnetise the layers of tape they are wound against, which can be heard as a very low-level “ghost” signal. To minimise this effect, it became good practice among recording engineers to store reels “tails out” – namely, winding the tape forward to the end on the take-up reel before removal.
The upshot of this is that any print-through would be heard later, like an echo, which is more musically tolerable. When stored “tails in” (wound to the beginning), any print-through would be heard as a pre-echo. If you’ve a record collection from the pre-digital era you’ll almost certainly find a track or three presenting this problem. You can read a more technical exploration of this phenomenon here [PDF].
Other issues, such as tape tension, could affect performance, and the physical alignment of the head (azimuth) would need attention to avoid phase errors. There’s a whole science attached to these factors, which come into play (excuse the pun) during analogue tape transfers for remastering and restoration.
Although digital test equipment can readily reveal inaccuracies, forensic work would sometimes rely on ferrofluids and a 10x loupe to examine the path of the track(s) on magnetic tape. The FBI has an interesting article on its methods here.
As an AES member back in the day, I recall with some amazement reading about the lengths the FBI would go to in its enhancement techniques in Bruce E. Koenig’s 1988 paper Enhancement of Forensic Audio Recordings. Digital audio era revisions have since been added, but if you fancy digging deeper into this specialist area of data recovery, you’ll find some of the general issues discussed in this 2005 AES paper on analysis techniques [PDF] from Audio Forensic Centre labs.
The Digital Era
The commercial use of magnetic tape for data recording began with the Remington Rand UNIVAC computer in 1951, which had its own tape-based data recording system, the Uniservo, designed by J Presper Eckert and John Mauchly. This digital data recorder relied on 1/2-inch Vicalloy (nickel-plated phosphor bronze) metal tapes that were 0.003-inch thick. At the time, this media was considered the most reliable, although it was prone to wearing the heads.
UNIVAC infomercial: wind on to 5mins 35secs for the tape references
The workaround was to introduce a separate feed of plain Mylar (polyester) tape to sit between the head and the metal tape, cushioning it from abrasion. This Mylar arrangement had its own loop with guides and tension arms. A glance at the Uniservo suggests the tape path is hideously complicated – indeed, the lack of vacuum buffering (which enables speedy tape starts and stops) means spring tensioning is used instead – but part of the complexity is also the Mylar loop mechanism. The whole Uniservo I assembly is detailed here [PDF].
Heavy metal recordings
The original 8-inch reels not only contained a heavy 1200ft of metal tape, but the aluminium spool flanges were pretty hefty too, with a combined weight of 25lbs. The Uniservo would transport the tape at 100ips (inches per second) and each reel could store 1.5 million characters (224kB) with up to 128 characters per inch (CPI).
The Uniservo had an eight-track head array – six for data, one for parity and one for timing. Although this arrangement could theoretically handle 12,800 characters per second, in reality the transfer rate was around 7,200cps once allowances are made for the inter-record gap (IRG) – the physical spacing on the tape separating each recorded block.
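A quick worked example using the figures quoted above. Note that the inter-record gap length derived here is simply the value implied by the quoted raw and effective rates, not a documented UNIVAC specification:

```python
# Back-of-envelope arithmetic on the Uniservo figures in the text.

speed_ips = 100        # tape speed, inches per second
density_cpi = 128      # characters per inch
raw_cps = speed_ips * density_cpi
assert raw_cps == 12_800           # the theoretical figure quoted above

block_chars = 60 * 12              # 60 words of 12 characters per block
block_len_in = block_chars / density_cpi   # 5.625 inches of tape per block

# Solve for the inter-record gap implied by the ~7,200cps effective rate
effective_cps = 7_200
implied_gap_in = block_len_in * (raw_cps / effective_cps - 1)
print(f"implied IRG: {implied_gap_in:.3f} inches")   # roughly 4.4 inches
```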
UNIVAC data tape was metal to begin with but nevertheless beat punched cards and paper tape
The data was recorded in fixed-size blocks of 60 words, each of 12 characters. Up to 10 Uniservos could operate together on a UNIVAC I and could function independently, rewinding some while recording on others. Phase encoding (PE) was used to record the signal to tape, which had to be written with the transport running forwards but could be read in either direction. The use of ferric oxide tapes began with the Univac IIa, which could also accommodate the Vicalloy recordings.
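Phase encoding guarantees a transition in the middle of every bit cell, which makes the signal self-clocking and is what allows reading in either direction. A toy sketch, illustrative of the principle only and not the actual UNIVAC signalling:

```python
# Toy phase (Manchester-style) encoder. Conventions vary; here a 1 is a
# low-to-high transition at the cell centre and a 0 is high-to-low.

def pe_encode(bits):
    """Each bit becomes two half-cell levels with a mid-cell transition."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def pe_decode(levels):
    return [1 if levels[i] < levels[i + 1] else 0
            for i in range(0, len(levels), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
signal = pe_encode(data)
assert pe_decode(signal) == data

# Read backwards: the half-cells swap, so every bit comes back inverted
# (and in reverse order) - a fixed transformation a controller can undo.
backwards = pe_decode(signal[::-1])
assert backwards == [1 - b for b in reversed(data)]
```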
In a rather thoughtful move for backwards compatibility, one of the UNIVAC’s selling points was its card-to-tape converter, which could handle 240 cards per minute, with a single tape capable of storing 20,000 cards.
IBM’s data-recording tech might not have been first on the scene, but its appearance in 1952 brought with it several innovations, including the use of ferric oxide magnetic tape for storage. The IBM 726 was used alongside the 701 Data Processing System. It was a seven-track format using six data tracks plus one for parity. The IBM 726 housed two tape transports that ran at 75ips each, notching up 100cpi and a transfer rate of 7,500cps.
Again, the format was a 1/2-inch tape on an 8-inch reel, with IBM providing two 1200ft reels and two 200ft reels of initially acetate-based tape. The coding system was NRZI (non-return-to-zero inverted, often called “nur-zee”). The unit delivered a maximum recording capacity of 2.3MB and incorporated vacuum buffering to accommodate the fast start/stop recording and playback bursts. These dual vacuum columns would remain a feature on IBM tape drives up until 1984.
Data tape transports: mechanical tension arms (left) and vacuum pumps (right) that use atmospheric pressure to hold the tape steady
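The NRZI code mentioned above can be sketched in a few lines: a 1 is recorded as a reversal of magnetisation, a 0 as no change. This toy Python model uses 0/1 levels to stand in for the two flux directions:

```python
# Minimal NRZI sketch: a 1 is a flux transition, a 0 is no change.

def nrzi_encode(bits, level=0):
    out = []
    for b in bits:
        if b:
            level ^= 1        # flip polarity on a 1
        out.append(level)
    return out

def nrzi_decode(levels, level=0):
    out = []
    for l in levels:
        out.append(1 if l != level else 0)
        level = l
    return out

data = [1, 0, 1, 1, 0, 0, 1]
assert nrzi_decode(nrzi_encode(data)) == data

# Weakness: a long run of 0s produces no transitions at all, which is
# why later recording schemes went to lengths to limit run lengths.
assert nrzi_encode([0] * 8) == [0] * 8
```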
As the Lincoln Labs TX (Transistor Experimental) project evolved, the TX-2 appeared in 1958 and, according to its designer Wesley A Clark, its online tape storage relied on 3/4-inch tape on 14-inch reels holding 2,500ft lengths. Given the inertia such weighty reels could induce, the noise when this data recorder changed direction was said to be deafening, and block-searching mode could shake the room as the tape sped along at 60mph.
Full details on this early format are sketchy, and Clark had other ideas that would lead to the TX-2 Tape System. He proposed a snapshot approach that would act much like the floppy disk would years later, providing personal file storage on smaller, pocketable reels of tape. What was to become LINCtape (and later DECtape) was based on earlier efforts from his colleagues Dick Best and Tom Stockebrand. A paper entitled "A Computer-Integrated Rapid-Access Magnetic Tape System with Fixed Address" [PDF] details the workings of this prototype 10-track design, which is specified as using 1/2-inch tape and 10-inch reels.
LINC/DECtape used 3.5-inch reels containing 260ft of tape and could store around 400kB. Where this format differed from other transports was that it didn’t feature a capstan or a vacuum pump but relied solely on the reel motors.
LINC tape drive undergoing restoration – more details of the project here
Photo courtesy of Herb Johnson
It was a considerable cost saving over the four motors found in a vacuum column deck. The reels had large hubs with not much tape on them, so the diameter was almost constant. The speed, although not constant, didn't vary hugely – about 10 per cent at most. The forward motor determined the speed and the other motor provided the tension.
The tape, at 3/4-inch, was 50 per cent wider than IBM’s, and the recording density was kept quite low for robustness. To prove the point, there was famously a demo that featured an ashtray being emptied onto the tape transport whilst in use. LINCtape was built using Ken Olsen's DEC logic modules, so it was natural for it to become DECtape.
Fifties music
Also in 1958, RCA dreamed up its Sound Tape Magazine Loading Cartridge – in short, a massive two-sided cassette that ran 1/4-inch tape at 3.75ips, giving 30 minutes per side for stereo recordings. However, it was possible to switch between mono tracks to increase the recording time to two hours.
But this was not the only cassette or cartridge device of the era. The Cassette Recorder Museum site put together by an enthusiast lists a number of audio examples and has a great gallery of period devices here.
RCA’s Sound Tape and the lesser-known CBS cartridge format would later be instrumental in the design decisions behind the Compact Cassette. El Reg has already covered its development in depth here and we also have an interview with the R&D team leader, Lou Ottens here.
An enthusiast shows off his 1959 RCA Sound Tape Cartridge Player
Joining the dots
As magnetic tape proved itself to be a more efficient storage medium than punched card or paper tape, the issue of data density became more and more of a fixation. Just how much could you get on a ribbon of rust? What were the limits of reliably recording and replaying those data signals?
Compared to audio, the needs of data storage were radically different given the characteristics of the signals involved. Recording computer data didn’t need a particularly stellar dynamic range. Consequently, lower signal-to-noise requirements meant that tracks and their respective guard bands could be narrower.
Sound recording in the late 1980s offers an easy comparison. Analogue 24-track audio machines required 2-inch tape, whereas the digital equivalent worked on 1/2-inch tape. The track count would double to 48 on subsequent digital recorders using the same 1/2-inch tape. Although digital tape recorders had many additional features, they were, nevertheless, all about storing data.
With data tape drives, you could also dispense with the additional complexity of the bias circuitry used to coax a more linear response from tape in analogue sound recording. Non-linearity in magnetic tape recording wasn’t an issue for digital tape systems. However, attention still needed to be paid to the coercivity characteristics of the magnetic tape, with the recording level calibrated to match.
When recording data, the current to the tape head is constant in magnitude and recording is achieved by changing its direction over time. Hence the tape is magnetised by changes in flux, which amount to transitions – in other words, bipolar recording.
Prior to recording, there is a channel-coding stage that converts the raw data into waveforms that, among other things, improve recording density and provide essential timing information. The recorded transitions that result from this process also avoid leaving the signal in one unchanging state for long periods.
Digital data is not recorded on tape as binary waveforms but as signals indicating transitions
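To see why the channel-coding stage matters, consider what plain NRZI-style recording does with a long run of zeros: no transitions at all, so the replay clock has nothing to lock on to. The fix sketched below – forcing a transition after every four data bits – is a deliberately crude illustration of run-length limiting, loosely in the spirit of (but far simpler than) real group codes such as IBM's later 8/9 scheme:

```python
# With transition-per-1 recording, a run of 0s means the flux never
# changes. Measure the longest such "flat" stretch before and after a
# toy channel code that inserts a forced transition every 4 data bits.

def longest_flat_run(bits):
    """Longest stretch of consecutive 0s (i.e. no flux transitions)."""
    best = run = 0
    for b in bits:
        run = 0 if b else run + 1
        best = max(best, run)
    return best

raw = [0] * 32                      # 32 zero bits: nothing ever changes
assert longest_flat_run(raw) == 32

coded = []
for i, b in enumerate(raw):
    coded.append(b)
    if (i + 1) % 4 == 0:
        coded.append(1)             # forced transition for timing
assert longest_flat_run(coded) <= 4  # runs now bounded, at a 4/5 rate cost
```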
For computing, increasing tape speed allowed for higher data rates, and these faster transports also enabled swifter access to data stored on reels containing half a mile of tape or more. As none of this was ever intended to be played to a listener, much broader bandwidths could be used – and were necessary to improve the data density.
By contrast, the improvements in audio recording would rely more on a low noise floor, and accurate capture and playback of varying signal amplitudes. The art of sound recording depended on being able to contain the dynamics of a performance within the limitations of the recording media. Hence the need to avoid overloading – that would cause distortion – and having a sufficient signal strength such that lower level content wouldn’t be lost in tape hiss.
Add to that the signal processing necessary to produce an even frequency response – one that could match, or at least approach, the lowest and highest pitched sounds audible to human hearing – and you can see that there’s a lot to be considered when saturating a tape with audio content.
Naturally enough, compromises are inevitable, and while audio systems built for dictation, portability or consumer-friendly pricing utilised tape recording formats that lacked the best fidelity, they could deliver other advantages, such as slower tape speeds and hence lower tape costs.
Regardless of whether it was analogue audio or binary data, what was being tested was the capacity of magnetic tape to store information.
Commodore's Datasette recording system relied on the Compact Cassette for storage needs
As transistors came on stream, the analogue world would benefit from smaller and cheaper components with lower power requirements and improved noise characteristics. Together, they would make possible developments such as the Compact Cassette – a compromise in audio fidelity, but so massively convenient, it dominated the consumer market for over 30 years.
The shortcomings of audio equipment have always been strikingly obvious to the listener; moreover, analogue audio recording errors are hard to mask and impossible to undo. However, there’s quite a bit of give and take in analogue audio recording. Momentary over-recording can be tolerated and there’s a sweet spot of tape saturation where pleasing harmonic distortion occurs.
Furthermore, background noise can be masked by louder sounds, so even dubious recording systems can deliver listenable results – that is, until quieter passages come along or the song starts to fade. Noise, distortion and minor variations in playback speed (flutter) are undesirable elements and (in an audiophile world) are the equivalent of errors in a data stream.
Still, data recordings were never meant to be heard by people, despite storage needs migrating to the Compact Cassette for micro computers in the late 1970s and early 1980s.
The only errors computer systems needed to worry about were whether the pattern of transitions recorded onto tape could be accurately replayed. Indeed, in this domain, errors can’t be tolerated, and yet they can be accommodated. Data accuracy is paramount in computing systems – there is no give and take when it all goes wrong, just a data error message, if you’re lucky.
To make sure it doesn’t all go wrong, the multitrack recording heads not only allowed more information to be recorded across the width of a data tape but also featured a separate parity track reserved for check codes as part of error correction schemes, as mentioned earlier.
In 1964, IBM’s System/360 appeared with the track count increased from seven to nine (eight for data and one for parity) to get more out of its half-inch tape. Yet as the track count inevitably increased, so would the need for more parity tracks to maintain the integrity of the data during recording and playback.
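The parity idea itself is simple enough to sketch: in a nine-track frame, eight data bits are written across the tape together with one parity bit chosen so that every frame has odd weight, and any single flipped bit then shows up on replay. A hypothetical Python illustration of the principle, not IBM's actual circuitry:

```python
# Sketch of a nine-track frame: eight data bits across the tape plus one
# (odd) parity bit. A single dropout in any track violates parity.

def frame(byte_bits):
    """Append an odd-parity bit so every 9-bit frame has odd weight."""
    parity = 1 - (sum(byte_bits) % 2)
    return byte_bits + [parity]

def check(frame_bits):
    return sum(frame_bits) % 2 == 1   # odd parity must hold on replay

good = frame([1, 0, 1, 1, 0, 0, 1, 0])
assert check(good)

bad = good.copy()
bad[3] ^= 1                # one flipped bit anywhere in the frame...
assert not check(bad)      # ...is caught, though not located by itself
```

Locating and correcting the error, rather than merely detecting it, is what the later cross-parity and group-coding schemes described below were for.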
IBM 18-track layout and parity tracks
Indeed, 20 years on, the IBM 3480 tape drive came on stream sporting 18 tracks, four of which were used as part of its adaptive cross-parity error-correction scheme. The interleaving of data across tracks was all part of the 8/9 group coding system, a necessity to achieve an even higher data density on magnetic tape.
This is quite a convoluted topic which John Watkinson covers expertly in The Art of Data Recording and, lucky for us, he wrote a very accessible summary of these techniques recently for The Register here.
Another factor that separates the multitrack data recorder from its audio counterpart is that for data storage, all those tracks get recorded/replayed at once. With sound recording, independent track recording was required for capturing different takes and instruments when suited, rather than as one live multitrack performance.
IBM open reel tape and 3480 cartridge compared
From the 2.3MB maximum storage capacity of its initial seven-track 726 machines of 1952, to 20MB on 10.5-inch reels with the early 2400 series tape machines in 1964, IBM would go on to achieve 200MB two decades later on its 3480 tape drive, featuring 1/2-inch tape in a new 4x5-inch cartridge.
The capacities for this format would peak at 2400MB on the 36-track 3490E IDRC (Improved Data Recording Capability) model that arrived in 1992.
The early 3480 drives were far from perfect, even though the spec was innovative: they were the first tape drives to utilise chromium dioxide tape and thin-film head technology.
Thin-film head technology? It’s another example of the amazing lengths that engineers have gone to in order to get greater data density from those ribbons of rust, and it’s explained in the box section below.
The 3480 also notched up an impressive data rate of 3MB/s – double that of competitors of the time – but as they’d been designed with IBM mainframe use in mind, the bus-and-tag interfacing didn’t suit everyone and, apparently, the dual-drive configurations in which these systems were sold had a few issues with buffer underruns.
IBM 3410 open reel tape subsystem gets laced, the sort of attention that carts didn't need
Now, before we head off towards Ultrium utopia and into the StorageTek sunset with a stream of different cartridge formats behind us, let’s not forget that the music and computer industries weren’t the only customers for magnetic tape.
In cinemas, synchronising audio discs to film had given way to optical sound recording in the 1930s but magnetic tape promised higher fidelity and multitrack possibilities too. The early 1950s saw plenty of interest in the idea of repeating the wonders of audio on tape and producing an equivalent video recorder.
Television had either run live broadcasts directly from studio video cameras or did pretty much the same by pointing the camera at a movie projection – a form of telecine. Needless to say, Jack Mullin was on the case and so was Ampex. All the leading broadcasters were either backing a developer or working on systems of their own. There was only one winner and it wasn’t RCA’s Longitudinal Video Recorder (LVR) nor the BBC’s Vision Electronic Recording Apparatus (VERA).
How that particular game was fought out is discussed in part three which looks at the young Ray Dolby’s early role in video research and development as well as his later triumphs in both conventional audio recording and his surround sound cinema systems.
Conventional stationary ferrite head array
One of the most effective methods of maximising data density has been to increase the track count across the width of a tape. Originally, modified tape heads from the analogue sound recording era were utilised for this purpose, but the physical size of ferrite heads imposed limits on what was achievable. The close spacing meant that magnetic interference (mutual inductance) and crosstalk were issues that had to be compensated for.
Thin-film heads can be produced photographically, can be extremely small, and are used for hard disk storage as well as tape systems. Thin-film head magnetic circuits are constructed from layers on a substrate with minute spacings and a very high degree of accuracy.
Head designs of this kind have developed over the years, with various inductive and magneto-resistive (MR) approaches delivering specific advantages for different applications. Used extensively in hard disk drives, magneto-resistive heads ignore polarity but measure the strength of the magnetic flux. However, separate heads are required for read and write operations due to differences in design for these tasks.
Revert to type
Meanwhile back at the office, IBM’s successful Selectric typewriter was getting kitted out with a tape drive. You can read our piece on the original Selectric from 1961 here.
IBM Selectric MT/ST typewriter and companion tape recorder
Announced in 1964 and in production by 1966, the Magnetic Tape Selectric Typewriter (MT/ST) was a very different beast, though, featuring a companion drive that took cartridges of 16mm tape. You could fit 25K of data on a 100ft cartridge that relied on a 7-bit data set and knock out documents at 150 words per minute.
A solenoid momentarily moved the head across the tape to record each keystroke; the tape wound on as you typed, a character at a time. Backspace would simply wind back the tape to overwrite the character(s).
The recorded tapes allowed a typed document to be retyped automatically with justification and other formatting enhancements. By its very nature, as a standalone system, the MT/ST was the original word processor.
Busy, busy: IBM Selectric MT/ST with tapes aplenty
The Quarter Inch Cartridge (QIC tape) format introduced by 3M in 1972 proved to be one of the most enduring data tape systems in computing. However, it didn’t remain the same throughout its lifetime, undergoing numerous revisions over the next 25 years. Details of these changes can be found on the moribund QIC Tape Standards site here.
The original 3M DC300 cartridge had a 200kB capacity from 300ft of tape. As other formats emerged, the 4-track QIC-11 format managed a 20MB capacity on 450ft of tape. At the other end of the spectrum, the 144-track QIC-5210 DC delivered 25GB – the track format is described here [PDF] and the recording techniques utilised are detailed here [PDF].
The capacities inevitably increased with variations along the way. Tandberg’s Scalable Linear Format (SLR) was the label used to describe its own QIC tape system with uncompressed capacities up to 70GB. There were even wider 8mm QIC tapes which didn’t exactly live up to the acronym but offered improved data densities.
Lest we forget the Minicartridge, a smaller QIC variant that could be installed using floppy disk interfacing. Like a floppy, they needed to be formatted too, which initially led to a lack of standardisation regarding interchangeability. Pre-formatted tapes adopting the XIMAT format would eventually follow, and SCSI interfacing too.
3M would later introduce Travan, its descendant of the Minicartridge. At 8mm, the tape was wider, and both Travan and the Minicartridge lacked a separate read head that would enable verification after recording.
Spin theory
In part three, the background of rotary head recording is discussed, which has its roots in video recording systems. The idea of spinning the tape head rapidly across the width of a slowly moving tape, rather than having the tape speed past a stationary head, offered a cost-effective way of handling the high frequencies used in video.
Rotary head designs: helical scan (left) and transverse tape recording (right). The longer tracks of helical scanning enable more data to be recorded in a single pass, while the shorter tracks of transverse recording can mean recording more tracks to store the same data. However, wider tape can enable transverse recording to match helical scan, and transverse transports benefit from being more compact and more robust in field use.
Despite being electronically complicated, this method of recording was also ideally suited to certain data applications. Digital audio used modified analogue video recorders for years, and when Sony’s Rotary-head Digital Audio Tape (RDAT) format appeared with its 3.8mm tape, its convenience saw the RDAT mechanism repurposed for data. Although most called it DAT, as a data format it was known as DDS (Digital Data Storage); rotary head data recording would also encompass larger 8mm tape formats.
Indeed, Sony’s own 8mm Video8 tape format was another example of this usage swap, forming the basis of Exabyte’s data storage systems and, later, Sony’s Advanced Intelligent Tape (AIT), which again relied on rotary head recording. In 1996, AIT-1 initially had a storage capacity of 25GB; a decade later, with changes to track pitch, AIT-5 notched up 400GB.
End of the line: Sony's AIT-5 tape line up
Sony’s Super-AIT (SAIT) abandoned the dual reels of DAT and AIT, using a single 1/2-inch reel instead, but retained the rotary head recording system. Uncompressed SAIT-1 tapes could store 500GB. As a rival to the stationary-head LTO Ultrium, the roadmap envisaged 10TB SAIT tapes. The AIT format was discontinued on 31 March 2010.
What's in store?
Tape being tape, the capacities of the numerous cartridge formats that appeared could vary simply according to how much tape was on the spool – something often misconstrued as an indication of data density. Yet one thing was pretty clear: the convenience and capacity of cartridges had put an end to the reign of the open-reel tapes that began this spool’s paradise for data storage.
To get a flavour of the formats that have driven computers old and new, National Data Conversion – a company that handles shifting data from obsolete media – has a neat summary of bygone formats.
While not necessarily fashionable, tape’s usefulness for storage remains; what has changed are its applications. In the home and the office, the rise of the floppy disk delivered improved speed and convenience, even if the capacity was a fraction of what tape could offer. Moreover, the falling prices and increased capacity of hard disk drives – and now SSDs – have diminished tape’s visibility further still, as it retreated from the office and returned to its spiritual home, the data centre.
Here, tape’s robustness for long-term archiving and its mobility for off-site storage in disaster recovery scenarios have never been in question. However, the use of tape for the long term has, like any other digital format, raised issues of compatibility. Progress brings with it legacy formats to accommodate, and while we have yet to see a complete consolidation of tape storage solutions, the options are certainly fewer than in years gone by.
One format with legs and no stranger to the data centre was Digital Linear Tape (DLT). Developed by DEC in 1984 and originally called CompacTape, DLT was bought by Quantum a decade later, and by 2006 the DLT-S4 format had notched up an 800GB capacity on 1/2-inch tape. A year later, in order to increase market share, Quantum shifted its focus to Linear Tape-Open (LTO), a tape format that emerged from a consortium originally consisting of HP, IBM and Seagate.
Over the years, many companies have offloaded their tape interests as the writing appeared on the balance sheet, and there has been a decline in tape’s fortunes. While necessary for some, this withdrawal may have been shortsighted given the emergence of new big data sources in recent years. It's no longer just banks and insurance companies that need this sort of storage capacity and longevity. Big data's momentum has yet to be truly felt by the remaining tape vendors, but they probably do have reason to be optimistic.
The names HP, IBM and Quantum now grace the pages of the LTO site along with format licensees Imation, Fujifilm, Maxell and TDK – although TDK recently declared its exit from the data tape market by March 2014. By contrast, StorageTek – the tape business acquired by Sun and now owned by Oracle – recently announced the T10000D tape drive with an uncompressed capacity of 8.5TB on 1/2-inch tape.
The current LTO Ultrium 6 tape has a capacity of 2.5TB raw (6.25TB compressed), not the 3.2TB raw and 8TB compressed spec that was mooted earlier in its development. The roadmap for LTO 7 (6.4TB raw, 16TB compressed) and LTO 8 (12.8TB raw, 32TB compressed) promises substantial capacity increases, assuming there are no tweaks to the spec here either.
LTO Ultrium format roadmap 2012: still some way to go to match StorageTek's latest – click for a larger image
Even so, these promised enhancements are still some way off what StorageTek is offering today, and IBM’s 4TB 3592 cartridges for its TS1140 drives. This mix is no bad thing: it demonstrates that tape as a storage medium has certainly not reached the end of the reel. Companies such as Fujifilm continue to develop new tape formulations with finer particles, new bonding agents, thinner tape (for better head contact) and innovative blends of magnetic material.
For the big number-crunchers there are the tape libraries too – robotic tape-switching loaders pioneered by IBM with the 3850 "honeycomb" first unveiled back in 1974.
Quantum Scalar 6000 tape library
Tape libraries have come a long way since then and can now be stuffed with tapes totalling several hundred petabytes of data storage, connected over 8Gb/s Fibre Channel to a SAN environment. It may be backroom, but it's certainly high tech, and El Reg recently covered these machinations in a feature on tape’s role in data centres and disaster recovery.
This loading, recording/reading, removing, reloading approach may seem a bit like a glorified jukebox, but tape libraries have enjoyed a reputation of high reliability. They also provide a convenient way of handling data for multiple clients for backup and archiving by simply using dedicated tapes for their respective storage needs. And some of those clients can be quite unusual.
Bastion of big data
In part one, I touched on my experiences whilst working at GCHQ several decades ago and its use of tape on instrumentation recorders. I’d sought permission to cover this particular topic and received a friendly go-ahead from GCHQ's Press Office.
I couldn’t resist following this up by asking if tape still played a role at the Government Communications Headquarters. After all, the areas of interest may have moved on from analogue signals to digital data, but surely this secret shrine to big data would have a use for tape storage. I requested an interview and although I was refused one, I did receive a reply and was told:
“The Department does use magnetic tape for backup and archive, and in order to overcome the issues of reading old tapes and old tape formats, the Department refreshes its tape storage periodically.”
As if you didn’t know already, when it comes to big data, old and new, the powers that be at GCHQ have got it taped.
In part three ("Video thrilled the radio star", which you can read here), the use of magnetic tape in entertainment is explored from the video-recorder pioneers to the gradual changes in cinema sound that were accelerated by the work of the late Ray Dolby, a name that has become a byword for audio excellence and in particular in relation to magnetic tape. ®
Special thanks to John Watkinson for permission to use diagrams from his book The Art of Data Recording.