SGI slips on Scality's RING, awaits flood of Big Data
InfiniteStorage hardware gets infinite storage software
Scality's object storage software is being OEMed by SGI so that it can gulp down galactic amounts of unstructured data into its storage arrays.
SGI has a Modular InfiniteStorage (MIS) array that was kicked into life a year ago and offered as a JBOD or a NAS server supporting NFS and CIFS. Back then we wrote: "An SGI D-Rack can have up to 2.37PB of capacity and 40 processors using the MIS enclosures."
In April 2012 SGI made a deal with Nexenta and stuck a Nexenta ZFS-based NAS head on top of the MIS product - giving it dedupe and other Nexenta goodness. Now SGI is going after the object storage market and has decided to add Scality's RING software so it can store the humungous amounts of unstructured data that are supposed to be heading everybody's way.
Interestingly, start-up Scality has added a filesystem to the RING and is planning to add NFS access, and that's more or less due now, with Scality's website stating its Scale-out File System (SoFS) "will support NFS v3 very soon and will enable the coexistence of native file access and NFS from different clients". Watch out Nexenta.
SGI and Scality say that a RING MIS system can smoothly and quickly swallow unstructured data, and can expand and expand with no rip 'n' replace upgrades or data migration needed when capacity limits are reached. The combined system can scale to billions of files serving millions of users, using standard 19-inch racks holding up to 3PB of data with today's drives. Imagine Hitachi GST 7-platter helium-filled drives in there and we could envisage 6PB racks. The two say that their system is self-healing and SPOFless - having no single point of failure. The idea is to have a RING-MIS function as a huge online archive - giving tape-like capacities but disk-grade access time.
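Those rack figures are easy to sanity-check. A rough sketch, assuming today's drives are 4TB and a 7-platter helium drive roughly doubles that to 8TB - our assumptions, not vendor specs:

```python
# Back-of-envelope rack capacity. The per-drive figures (4TB now, 8TB
# helium) are assumptions for illustration, not vendor-quoted numbers.
PB = 10**15
TB = 10**12

drives_per_rack = (3 * PB) // (4 * TB)           # 3PB rack / 4TB drives = 750 slots
helium_rack_capacity = drives_per_rack * 8 * TB  # same slots, 8TB helium drives

print(drives_per_rack)             # 750
print(helium_rack_capacity // PB)  # 6 - the 6PB rack envisaged above
```

If helium drives land nearer 6TB than 8TB, the same 750 slots give about 4.5PB - the doubling depends entirely on the per-drive assumption.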
This performance aspect is highlighted by SGI. Its engineering VP, Jose Reinoso, said: "Scality's RING architecture allows us to offer our customers cost-efficient petabyte-scale storage with independent scaling of throughput. It is the best of both worlds."
It's also getting to be a crowded world because other suppliers are also looking at object storage as a cloud vault for unstructured data.
- DataDirect Networks with its Web Object Scaler (WOS)
- EMC with its Bourne project
- HP with an internal object storage project
- NetApp with its StorageGRID
- Quantum with Lattus, combining StorNext and Amplidata object technology
- Dell with Caringo object technology
- HDS with its HUS storage and Hitachi Content Platform
Looks a pretty crowded market, doesn't it, with more suppliers than data at the moment. The suppliers of vast archive services need to decide whether they want to offer fast online access, meaning object storage using disk drive arrays, or slower access using tape, as Amazon's Glacier service does. Fast access should cost more than slow access - leaving tape as the last resting place for archive data. ®