Storage start-ups fail to set the world on fire

How IT fell for file storage growth myths

Four file storage problem groups

First was spin-down. Copan, Nexsan, and others thought the way to make file storage less onerous was to spin down idle disk drives, helped along by a pair of supporting dynamics. One was looming power shortages in metropolitan areas, combined with environmental pressure to cut carbon emissions. The other was data centre space limitations. Spin down idle drives and pack them more densely, and you cut power draw and floorspace needs simultaneously while storing more files. It was a triple whammy that could not fail.
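To see why the pitch sounded so convincing, here is a back-of-envelope sketch. The wattages, drive count, and idle fraction below are illustrative assumptions for the sketch, not any vendor's figures:

```python
# Back-of-envelope sketch of the spin-down pitch. All figures are
# illustrative assumptions, not vendor specifications.
ACTIVE_WATTS = 10.0    # assumed draw of a spinning nearline drive
IDLE_WATTS = 1.0       # assumed draw of a spun-down drive
DRIVES = 1000          # drives in the archive shelf
IDLE_FRACTION = 0.9    # archive data is rarely touched, so most
                       # drives can stay spun down most of the time

always_on = DRIVES * ACTIVE_WATTS
spun_down = DRIVES * (IDLE_FRACTION * IDLE_WATTS
                      + (1 - IDLE_FRACTION) * ACTIVE_WATTS)

print(f"Always spinning: {always_on / 1000:.1f} kW")
print(f"With spin-down:  {spun_down / 1000:.1f} kW")
# Always spinning: 10.0 kW
# With spin-down:  1.9 kW
```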

Secondly, there was file virtualisation. You interpose a special server box between the application servers and the multifarious file stores, and have all the files in all your file stores represented inside a single global namespace in that one box. You virtualise the file stores so it looks like there is just one file storage universe which app servers can tap into. Acopia, Rainfinity, and FilesX tried this route to bring sense to the file storage horror story.
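The mechanism is simple enough to sketch in a few lines. The class, store names, and paths below are made up for illustration; no vendor's actual interface looked quite like this:

```python
# Minimal sketch of the global-namespace idea: one virtual tree,
# many physical file stores behind it. Names are hypothetical.
class GlobalNamespace:
    def __init__(self):
        # maps a virtual path prefix to (store name, real prefix)
        self.mounts = {}

    def mount(self, virtual_prefix, store, real_prefix):
        self.mounts[virtual_prefix] = (store, real_prefix)

    def resolve(self, virtual_path):
        # longest matching prefix wins, as with ordinary mount tables
        for vp in sorted(self.mounts, key=len, reverse=True):
            if virtual_path.startswith(vp):
                store, real = self.mounts[vp]
                return store, virtual_path.replace(vp, real, 1)
        raise FileNotFoundError(virtual_path)

ns = GlobalNamespace()
ns.mount("/corp/finance", "netapp-01", "/vol/fin")
ns.mount("/corp/media",   "isilon-02", "/ifs/media")
print(ns.resolve("/corp/finance/q3/report.xls"))
# ('netapp-01', '/vol/fin/q3/report.xls')
```

The point is that app servers only ever see the virtual tree; the box quietly redirects each request to whichever physical store holds the file, so files can be migrated between stores without clients noticing.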

Thirdly, general archival storage boxes sprang up, with EMC's Centera being the obvious one. Others are also plugging away at this space: Caringo, Mimosa, and Waterford. Plasmon tried - and died - here too. Lots of people saw that Centera was sky high in price and as proprietary as you like, and came up with commodity hardware/open software alternatives. None of them toppled Centera from its throne because they weren't good enough, and there wasn't a sufficiently general problem to prompt widespread adoption of their products.

Instead, specific archival storage products - ones focussed on e-mail or SharePoint - have survived and are developing into general archival products, with good compliance and e-Discovery functions. There is a developing market for these products, but it's not as large or as widespread as early product developers hoped.

Fourthly, we saw the development of scale-out filers, often using some form of clustering, to solve the problem of serving very large numbers of files, often large ones, to a set of servers simultaneously. A large file would be split into sub-files across several filers, with the parts served in parallel. Ibrix developed software for this. BlueArc developed FPGA hardware-accelerated super-NAS products. Isilon, Exanet, and ONStor developed clustered filer hardware and software. Again, there is a real problem here, but customer interest turned out to be concentrated in two areas rather than being general.
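The core striping idea can be sketched simply. The node names and 64MB stripe size below are assumptions for illustration, not any product's actual layout:

```python
# Sketch of the striping idea behind scale-out filers: a big file is
# cut into fixed-size chunks, spread round-robin across nodes, and
# read back in parallel. Node names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

NODES = ["filer-a", "filer-b", "filer-c"]
CHUNK = 64 * 1024 * 1024  # assumed 64MB stripe unit

def place_chunks(file_size):
    """Assign each chunk of the file to a node, round-robin."""
    chunks = (file_size + CHUNK - 1) // CHUNK
    return [(i, NODES[i % len(NODES)]) for i in range(chunks)]

def fetch(chunk_index, node):
    # stand-in for a network read of one stripe from one node
    return f"chunk {chunk_index} from {node}"

layout = place_chunks(file_size=300 * 1024 * 1024)  # a 300MB file
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    parts = list(pool.map(lambda c: fetch(*c), layout))
print(parts)
# ['chunk 0 from filer-a', 'chunk 1 from filer-b', ...,
#  'chunk 4 from filer-b']
```

Because each node only has to serve its own stripes, aggregate bandwidth scales with the node count - which is exactly what the two customer groups below wanted.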

Digital movie effects meant that rendering scenes needed massive file delivery horsepower, and that benefitted Isilon, Ibrix, and BlueArc. It also benefitted some block storage suppliers, like Data Direct Networks, but we'll ignore those here because this is a file-focussed story.

High-performance computing (HPC) and supercomputing also needed the same sort of massive filer bandwidth to cope with seismic, simulation, and genome-type data. However, general business did not.

ESG's Steve Duplessie points out that Web 2.0 companies like Amazon, Google and Yahoo also had an internal need for scale-out filers, and sometimes built their own infrastructure for this in a massively impressive way. It didn't generally benefit our scale-out NAS startups, though, and was specific to these massive-scale Internet-based service suppliers, not to everyday business.
