Mega Euro storage show: Players talk tech on objects, tape and flash
Tape vendors seem perky... maybe TOO perky
SNW Europe At Storage Networking World (SNW) Europe, branded as Powering The Cloud, we were bombarded with information from a multitude of vendors. As any backup and storage player in the game would know, the best way to deal with a mass of data is to break it down into chunks, so El Reg will do just that. This article is the first in a series about the event, dealing with all the lowdown from Amplidata, Scality, SpectraLogic, a post-IBM-acquisition TMS and SMART.
SpectraLogic is supporting 10GbitE to tap into customer demand for iSCSI access to its T-Series tape libraries. It uses a Bridgeworks 10GbitE-to-Fibre Channel bridge, sold through Bridgeworks and housed in a 1U 19-inch rackmount enclosure, offering:
- IP network connectivity via 2 auto-sensing 10GbitE ports
- FC network connectivity via 2 auto-sensing 8 Gbit/s FC ports
- Integrated power supplies and cooling
- Up to 16,000 initiators
- Non-blocking cache
- Error recovery level 2
- iSNS, jumbo frame support, multi-path I/O and a GUI
The tape archive situation
Tape library vendors are getting optimistic. They have seen high-capacity optical disk storage come and go, with holography utterly failing to deliver on its promised archival viability. They are seeing the cloud service providers gradually realise that they cannot store archive data on disk; it will simply become too expensive. They are thinking, and hoping, that the cloud service provider archive market is coming to them. The tape library vendors also think that disk array vendors are generally of the same mind: they too realise that the massive archive nail cannot be satisfactorily hit with a disk array hammer. It won't work. They must have tape; that's why EMC has cut a deal with SpectraLogic.
But, irritatingly, a fresh challenge to tape-as-archival-storage is evident: object storage. For example, Amplidata's object storage plays in Quantum's StorNext product set as an alternative to tape, not as a tier between file or block storage and tape.
In general, object storage suppliers see no role for tape in the object storage world. They provide a single logical object storage tier with no concept of an archival object tier on tape, were that even possible. The El Reg storage desk would suggest that you can't scale an object storage repository indefinitely, and that objects, like files and blocks, have a lifecycle, with older objects exhibiting ever-lower access rates as they age. Why not, then, move them off to a different - and cheaper - storage medium, like tape? But the object storage industry is in the full flush of scalability and erasure coding-driven youth. Questions about tape's role in object storage are met with barely disguised incredulity.
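The lifecycle argument amounts to a simple age-based placement policy. A minimal sketch, in which the tier names, the one-year threshold and the object records are all illustrative assumptions rather than any vendor's actual API:

```python
from datetime import datetime, timedelta

# Hypothetical policy: objects untouched for over a year move to tape.
ARCHIVE_AFTER = timedelta(days=365)

def pick_tier(last_access: datetime, now: datetime) -> str:
    """Return the tier an object belongs on, based on access recency."""
    return "tape" if now - last_access > ARCHIVE_AFTER else "disk"

now = datetime(2012, 11, 1)
objects = {
    "scan-001.tif": datetime(2010, 3, 14),    # cold: untouched for years
    "report-q3.pdf": datetime(2012, 10, 20),  # hot: read last month
}
placement = {name: pick_tier(ts, now) for name, ts in objects.items()}
# placement == {"scan-001.tif": "tape", "report-q3.pdf": "disk"}
```

A real tiering engine would track access counts and recall latency too, but even this crude recency test is the kind of policy the object storage vendors currently have no tape tier to point at.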
Two steps forward with optical disk archives going away and cloud folks hopefully seeing tape as the best archive medium; one step back as pushy exuberant object storage vendors hope to unreel tape from the big data markets they want to push into.
Amplidata had its object storage performance tested by Howard Marks at DeepStorage.net. The company states that, according to the DeepStorage report, its AmpliStor object storage product delivered up to 3GB/sec of read and write throughput in a single rack system. The system, configured with 3TB SATA disks, delivered nearly linear scaling as resources were added and maintained its performance throughout the test.
It says: "Today’s object storage systems have evolved to address the storage of large volumes of objects (files), where throughput and large scale durability are most important, rather than small sub-file updates which have been the performance characteristic of traditional NAS and SAN systems." The AmpliStor product can cache objects in flash, but the test did not use this and relied on disk spindles alone.
The test system included 24 Amplidata AS36 storage nodes and three AmpliStor controllers, each configured with dual Intel E5-2650 processors, 64GB RAM and dual 10GbitE network interfaces. Amplidata said: "During the test, even with one single controller, the system would routinely deliver performance of over 950MB/s (nearly 1GB per second) across a wide range of object sizes when multiple streams of data were running in parallel. In addition, performance scaled near linearly until the network connecting the controllers to the storage nodes was saturated."
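Those figures hang together with the controllers' network plumbing: a single 10GbitE port tops out at about 1.25GB/s raw, so 950MB/s per controller is close to single-link saturation, and three such controllers account for the ~3GB/s rack figure. A back-of-envelope check (using decimal GB, as vendor figures do, and ignoring protocol overheads):

```python
# Sanity-check the quoted throughput against 10GbitE line rate.
link_raw_GBps = 10e9 / 8 / 1e9          # one 10GbitE port: 1.25 GB/s raw
per_controller_GBps = 0.950             # quoted single-controller figure
controllers = 3

utilisation = per_controller_GBps / link_raw_GBps   # fraction of one link used
rack_GBps = controllers * per_controller_GBps       # aggregate across controllers

print(f"link utilisation: {utilisation:.0%}, rack total: {rack_GBps:.2f} GB/s")
```

That works out to roughly 76 per cent of one link per controller and about 2.85GB/s aggregate, which is consistent with the claim that scaling was near-linear until the controller-to-node network saturated.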
Howard Marks, chief scientist and DeepStorage.net founder, was quoted in the Amplidata release: "The BitSpread ... erasure coding technology allowed us to read data even in the event of four complete node failures, while performing better than other object storage systems that use object replication for data protection. We saw over 320MB/sec single stream throughput when reading a single 1GB object which is more than twice what such a system could deliver reading the object from a single SATA drive. This is genuinely remarkable performance.”
Another analyst, Robin Harris, of StorageMojo, said: “I’ve watched erasure coding technologies such as Amplidata’s advance over the years and believe it holds tremendous potential to rewrite data centre economics in a major way.”
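The economics Harris alludes to come from erasure coding's capacity overhead. Surviving f node losses with full replication costs f extra copies; a k-data-plus-m-parity erasure code survives m losses for only m/k extra capacity. BitSpread's actual parameters are not public, so the 10+4 layout below is purely an assumed illustration:

```python
def replication_overhead(failures_tolerated: int) -> float:
    """Raw-to-usable capacity ratio for full-copy replication."""
    return 1.0 + failures_tolerated     # the original plus one copy per failure

def erasure_overhead(k: int, m: int) -> float:
    """Raw-to-usable capacity ratio for a k data + m parity erasure code."""
    return (k + m) / k

# Tolerating four node failures, as in the DeepStorage test:
print(replication_overhead(4))   # 5.0 -> 5x raw capacity per usable byte
print(erasure_overhead(10, 4))   # 1.4 -> 1.4x raw capacity (assumed 10+4 layout)
```

Buying 1.4PB of disk instead of 5PB for the same protected petabyte is the "rewrite data centre economics" argument in a nutshell.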
This is all good stuff, but AmpliStor may not match a hardware RAID array for speed: Amplidata's erasure coding is processed in software on the Intel CPUs rather than in dedicated hardware. If hardware erasure coding ever appears, performance would surely increase.
After Scality's performance test with ESG - background here - this is the second object storage performance report. Amplidata CTO and co-founder Wim De Wispelaere said: "The Scality ESG test was IOPS and not throughput. We're doing throughput through the backend."
The Scality test focused on IOs per second (IOPS) whereas this one looked at IO bandwidth; a different emphasis. The Amplidata DeepStorage.net report is said to be available online but it's not on the Amplidata or DeepStorage sites yet.