Original URL: https://www.theregister.co.uk/2012/10/31/snw_europe_1/
Mega Euro storage show: Players talk tech on objects, tape and flash
Tape vendors seem perky... maybe TOO perky
SNW Europe At Storage Networking World (SNW) Europe, branded as Powering The Cloud, we were bombarded with information from a multitude of vendors. As any backup and storage player in the game would know, the best way to deal with a mass of data is to break it down into chunks, so El Reg will do just that. This article is the first in a series about the event, dealing with all the lowdown from Amplidata, Scality, SpectraLogic, a post-IBM-acquisition TMS, and SMART.
SpectraLogic is supporting 10GbitE to tap into demand from its customers for iSCSI access to its T-Series tape libraries. This uses a Bridgeworks 10GbitE-to-Fibre Channel bridge, which is available for purchase through Bridgeworks, housed in a 1U 19-inch rack mount enclosure, and offering:
- IP network connectivity via 2 auto-sensing 10GbitE ports
- FC network connectivity via 2 auto-sensing 8 Gbit/s FC ports
- Integrated power supplies and cooling
- Up to 16,000 initiators
- Non-blocking cache
- Error recovery level 2
- iSNS, jumbo frame support, multi-path I/O and a GUI
The tape archive situation
Tape library vendors are getting optimistic. They have seen high-capacity optical disk storage come and go, with holography utterly failing to deliver on its promised archival viability. They are seeing the cloud service providers gradually realise that they cannot store archive data on disk; it will become just too expensive. They are thinking and hoping that the cloud service provider archive market is coming to them. The tape library vendors also think that disk array vendors are generally of the same mind: they too realise the massive archive nail cannot be satisfactorily hit by a disk array hammer. It won't work. They must have tape; that's why EMC has cut a deal with SpectraLogic.
But, irritatingly, a fresh challenge to tape-as-archival-storage is evident; object storage. For example, the role of Amplidata object storage in Quantum's StorNext product set is as an alternative to tape, not as a tier between file or block storage and tape.
In general, object storage suppliers see no role for tape in the object storage world. They provide a single logical object storage tier with no concept of an archival object tier on tape, were that even possible. The El Reg storage desk would like to suggest that you can't scale an object storage repository indefinitely, and that objects, like files and blocks, have a lifecycle, with older objects exhibiting lower and lower access rates as they age. Therefore, why not move them off to a different - and cheaper - storage medium, like tape? But the object storage industry is in the full flush of scalability and erasure coding-driven youth. Questions about tape's role in object storage are met with barely disguised incredulity.
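The lifecycle argument can be made concrete. Here is a minimal sketch of an age-based tiering policy, assuming a hypothetical object store that records last-access times; the threshold, function and key names are ours, invented for illustration:

```python
import time

# Hypothetical policy: objects untouched for longer than a threshold
# become candidates for migration to a cheaper tier, such as tape.
TIER_AFTER_DAYS = 365

def select_for_tape(objects, now=None):
    """Return keys of objects whose last access is older than the threshold.

    `objects` maps object key -> last-access time (seconds since epoch).
    """
    now = now or time.time()
    cutoff = now - TIER_AFTER_DAYS * 86400
    return [key for key, last_access in objects.items() if last_access < cutoff]

# Example: one cold object untouched for two years, one hot object
now = 1_700_000_000
store = {
    "logs/2021.tar": now - 2 * 365 * 86400,  # cold: tape candidate
    "site/index.html": now - 3600,           # hot: stays on disk
}
print(select_for_tape(store, now))  # ['logs/2021.tar']
```

A real implementation would also track object size and retrieval cost, but the policy core is just this comparison against an age cutoff.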
Two steps forward with optical disk archives going away and cloud folks hopefully seeing tape as the best archive medium; one step back as pushy exuberant object storage vendors hope to unreel tape from the big data markets they want to push into.
Amplidata had its object storage performance tested by Howard Marks at DeepStorage.net. The company states that, according to the DeepStorage report, its AmpliStor object storage product delivered up to 3GB/sec of read and write throughput in a single rack system. The system, configured with 3TB SATA disks, delivered nearly linear scaling as resources were added and maintained performance throughout the test.
It says: "Today’s object storage systems have evolved to address the storage of large volumes of objects (files), where throughput and large scale durability are most important, rather than small sub-file updates which have been the performance characteristic of traditional NAS and SAN systems." The AmpliStor product can cache objects in flash but the test didn't bother with this and relied just on the disk spindles.
The test system included 24 Amplidata AS36 storage nodes and three AmpliStor controllers, each configured with dual Intel E5-2650 processors, 64GB RAM and dual 10GbitE network interfaces. Amplidata said: "During the test, even with one single controller, the system would routinely deliver performance of over 950MB/s (nearly 1GB per second) across a wide range of object sizes when multiple streams of data were running in parallel. In addition, performance scaled near linearly until the network connecting the controllers to the storage nodes was saturated."
Howard Marks, chief scientist and DeepStorage.net founder, was quoted in the Amplidata release: "The BitSpread ... erasure coding technology allowed us to read data even in the event of four complete node failures, while performing better than other object storage systems that use object replication for data protection. We saw over 320MB/sec single stream throughput when reading a single 1GB object which is more than twice what such a system could deliver reading the object from a single SATA drive. This is genuinely remarkable performance.”
Another analyst, Robin Harris, of StorageMojo, said: “I’ve watched erasure coding technologies such as Amplidata’s advance over the years and believe it holds tremendous potential to rewrite data centre economics in a major way.”
This is all good stuff, but AmpliStor may not be as fast as an array with hardware RAID: Amplidata's erasure coding is computed in software by the Intel CPUs, not in dedicated hardware. If we ever see hardware erasure coding then performance would surely increase.
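For readers unfamiliar with erasure coding, here is the idea in miniature: single-parity XOR coding, which can rebuild exactly one lost chunk from the survivors. This sketch is ours, not Amplidata's; BitSpread's actual codes are far more capable, surviving four node failures in the DeepStorage test, but the rebuild-from-survivors principle is the same.

```python
from functools import reduce

def encode(data: bytes, k: int):
    """Split data into k equal chunks plus one XOR parity chunk (k+1 total).
    Simple XOR parity tolerates exactly one missing chunk."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
    return chunks + [parity]

def recover(chunks):
    """Rebuild the single missing chunk (None) by XORing all survivors."""
    missing = chunks.index(None)
    survivors = [c for c in chunks if c is not None]
    chunks[missing] = bytes(
        reduce(lambda a, b: a ^ b, col) for col in zip(*survivors)
    )
    return chunks

pieces = encode(b"object payload!!", 4)
pieces[2] = None                   # simulate a failed storage node
restored = recover(pieces)
print(b"".join(restored[:4]))      # b'object payload!!'
```

The XOR arithmetic here is exactly the sort of per-byte work that, in Amplidata's case, runs on the controller's Intel CPUs rather than dedicated silicon.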
After Scality's performance test with ESG - background here - this is the second object storage performance report. Amplidata CTO and co-founder Wim De Wispelaere said: "The Scality ESG test was IOPS and not throughput. We're doing throughput through the backend."
The Scality test focused on IOs per second (IOPS) whereas this one looked at IO bandwidth; a different emphasis. The Amplidata DeepStorage.net report is said to be available online but it's not on the Amplidata or DeepStorage sites yet.
Here's a thing: object storage can do primary data storage. Scality is supplying its Ring object storage to consumer mail service suppliers such as Time Warner (20 million email users), Comcast (30 million users) and Libero (10 million users). CEO Jerome Lecat thinks Google, Yahoo, Microsoft and AOL provide something like 30 per cent, maybe more, of consumer email services and use their own software and storage.
The rest of the market, the bulk, is represented by service providers using third-party software like Openwave (Time Warner Cable), Zimbra (Comcast) and Critical Path (Libero). The Scality Ring has been certified as a storage medium for these third-party software products. We understand Time Warner Cable only buys Scality Ring for its mail storage now, and Comcast and Libero are transitioning to a Scality-only policy.
Openwave Messaging will bundle and resell Scality’s RING storage as part of its Universal Messaging Suite. Bill Webb, Time Warner Cable's VP for systems engineering said about this: “The Openwave Messaging and Scality [bundle] puts us 18 months ahead of the market.”
He believes the bundle will provide two things he must have: an always-on capability and the ability to support variable peak loads. Object storage used for a high-performance and critical app - who would have thought that possible?
The Scality Ring is, we understand, the only object storage used for primary storage as well as nearline and archive repositories, the more traditional object storage use cases. The Ring uses erasure coding to ensure no data is lost and has scalability attributes that mainstream block and file storage array products cannot match, according to Lecat.
Now we have two object storage products that are fast: the Ring for IOPS and the AmpliStor for GB/sec.
IBM's TMS, Texas Memory Systems as was, will become, we understand after talking to a person familiar with the situation, the flash storage division in IBM storage, and continue supplying its PCIe RamSan 70 and networked RamSan 700 and 800 series of products, with the bulk of these being Fibre Channel connected. It's anticipated that the integration of TMS inside IBM will be a little faster than that of XIV.
There are integration opportunities, particularly with IBM's Pure systems line of converged devices. The PCIe RamSans may well become a commodity product and it's in the shared flash arrays that greater flash storage value resides and where TMS's main focus is to be found. Thus the 700 and 800 series products will be enhanced with 16Gbit/s Fibre Channel being a likely addition, much we suppose to Emulex's delight.
There is not much demand for server flash card software other than caching. Our contact thought that take-up of added flash software capabilities like cut-through memory access (Fusion-io) was low. Caching software will migrate into hypervisors and operating systems, becoming less attractive as a separate product.
We could see the Storwize V7000 products having TMS flash added to their controllers, much like NetApp's Flash Cache. TMS flash could perhaps be used as storage memory, an adjunct to a server's DRAM, as a storage tier and as a caching medium. Perhaps it might appear on the motherboard.
TMS flash already works with IBM's EasyTier, its automated data tiering and movement software, and TMS flash could be integrated into IBM's storage infrastructure as an EasyTier storage tier. We can be fairly confident it will be. Such integration could be the incorporation of a TMS flash enclosure behind a storage array controller and in front of the disk shelves. Or, we think, it could be as a networked array between the servers and IBM networked disk drive arrays, with EasyTier providing a single logical tiered environment and, perhaps, embracing PCIe TMS RamSan cards in servers; this is our speculation.
There is a clear potential for the integration of TMS flash with the DS8000.
Looking ahead, flash may only have five years before NAND geometry shrinks stop due to falling performance and shorter lifetimes. Raw triple-level cell (TLC) flash can be rewritten about 500 times before it dies. It might even be said that TLC flash is dead in the water before it has even started as an enterprise storage medium; we said as much and our conversational partner did not disagree. TMS is actively looking at post-NAND technologies and acquisition by IBM brings Big Blue's Racetrack and phase-change memory technologies into play. We're reminded, though, that TLC could make a great WORM (Write Once Read Many) storage medium.
Our belief is that the IBM acquisition will start working its effect on TMS's products next year and we should see "interesting" announcements.
SMART (background here) has announced it is supporting 19nm flash, known as 1X NAND. Currently it uses 24nm NAND dies from Toshiba, and that is where the 19nm parts will come from too. Inherently, smaller-geometry NAND is slower than larger-geometry NAND and has lower endurance, meaning the number of program/erase (P/E) cycles it can support before wearing out is reduced.
Mike Lakovicz, SMART's VP for sales and marketing, and an experienced disk guy, says that the company's Guardian flash controller software has many algorithms, including signal processing ones, like those of Anobit, the flash controller startup acquired by Apple. Because of this it can enhance consumer-grade 24nm NAND's endurance 14 times and, Lakovicz said, this can be extended further to deliver a 50X raw NAND endurance improvement.
We should expect SMART 19nm Optimus brand NAND products to start appearing in the first quarter of 2013. The current Optimus MLC NAND performance levels - 100,000 random read IOPS, 50,000 random writes, and 500MB/sec sequential read and write bandwidth - should be attained with the new products. There will be value versions of them using a SATA interface and a performance line using SAS.
The base 24nm NAND P/E cycle number is about 3,000. SMART is still evaluating the P/E cycle rating for 19nm product but it could be down at the 2,000 level. A 14X improvement would take that to 28,000, lower than the 24nm Optimus' 40,000 P/E cycle rating. But, Lakovicz said, over-provisioning could extend endurance up to current Optimus levels, aided by, we suppose, some Guardian algorithm tweaking.
X-IO has selected the current Optimus SAS drive as the flash storage inside its Hyper ISE 7-Series storage product by the way - a nice OEM win.
Regarding TLC flash, Lakovicz pointed out that applying Guardian's anticipated 50X raw P/E cycle enhancement to TLC's raw 500 P/E cycle number would deliver 25,000 P/E cycles; a useful enough rating for potential use cases. This contrasts with the more pessimistic TMS view of TLC flash described above.
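The endurance claims above are straightforward multiplication of a raw P/E rating by the controller's enhancement factor. A quick sanity check, using only the figures quoted in this article:

```python
def effective_pe(raw_pe: int, factor: int) -> int:
    """Effective P/E cycles = raw NAND rating x controller enhancement."""
    return raw_pe * factor

# 19nm MLC at a possible 2,000 raw P/E with Guardian's 14X enhancement
print(effective_pe(2000, 14))  # 28000 - below the 24nm Optimus' 40,000 rating

# Raw TLC at ~500 P/E with the anticipated 50X enhancement
print(effective_pe(500, 50))   # 25000
```

Note the arithmetic alone doesn't capture over-provisioning, which is the lever Lakovicz says would close the remaining gap to current Optimus endurance levels.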
Overall this look at some SNW Europe product trends shows tape edging forwards into iSCSI, but flash and object storage powering ahead. It's ironic that, as they do, the tape industry is involved in an Active Archive Alliance effort to raise awareness of tape's archive capability. Tape does not have the wow factor of flash and objects. Could tape get it, become exciting again? LTFS will help. Read about DDFS from BridgeSTOR tomorrow. That could help too. ®