Panasas on server flash cache: 'What problem are you solving?'
We don't need the speed, we don't heed the feed
ISC12 Feeding server flash caches from storage arrays is unnecessary in high-performance computing because network latency is negligible, according to parallel storage biz Panasas.
Geoffrey Noer, Panasas's senior product marketing director, tells the Reg:
"We are not under pressure from our customers to deliver more performance. In principle we could extend PanFS to include cache in servers but what problem are you solving? We don't see the problem. For all general purposes the latency of the network is invisible. We haven't seen a need for customers to use flash because the file system is so fast."
In enterprise storage, the idea that applications can be sped up by eliminating storage network latency is gaining ground. EMC is leading the mainstream storage array suppliers' charge in this area with VFCache: a PCIe flash card placed in the server, plus software that uses it to cache data from an EMC storage array, giving applications virtually instantaneous access to that data.
This follows on from pioneering work by start-ups like Fusion-io, whose ioDrive PCIe flash card products are being used by Facebook and others to give array I/O-bound application software a swift boot up the rump. You would think that what works with enterprise IT would work in the HPC world as well, where application execution speed is important and hundreds of compute nodes access tens of petabytes of data.
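The server-side caching idea described above is, at heart, a look-aside read cache: hot blocks are served from local flash, and only misses cross the network to the array. A toy sketch in Python, with the class name, LRU policy, and block-keyed interface all chosen here for illustration (not EMC's or Fusion-io's actual design):

```python
from collections import OrderedDict

class ServerFlashCache:
    """Toy model of a server-side read cache in front of a storage array.

    In a real product the cache lives on a PCIe flash card; here a dict
    stands in for the flash and a callable for the array fetch."""

    def __init__(self, array_read, capacity_blocks=4):
        self.array_read = array_read      # backend fetch (crosses the network)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:        # hit: served from local flash
            self.cache.move_to_end(block_id)
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1                  # miss: go over the network to the array
        data = self.array_read(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data

# Repeated reads of a hot block hit the cache after the first miss.
array = lambda b: f"data-{b}"
c = ServerFlashCache(array)
for _ in range(3):
    c.read(7)
print(c.hits, c.misses)  # 2 1
```

Panasas's argument, in these terms, is that when the array and network are fast enough, the miss path is already cheap and the cache buys little.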
Panasas builds ActiveStor 11 and 12 storage systems using its PanFS parallel file system to deliver up to 1,600MB/sec of write bandwidth and 1,500MB/sec of read bandwidth per storage shelf to HPC compute nodes. Its systems scale both capacity and I/O performance up to 150GB/sec of aggregate bandwidth, yet use bulk 3TB SATA disk drives, far from the fastest horse around the storage block.
Noer doesn't rule out server cache feeding in the future, though.
"Exascale is down the road," he says, meaning that what uses petaflops and petabytes of storage today will be using exaflops and exabytes in a few years' time.
This growth in HPC customers' storage needs can't be offset by deduplication and compression, both common tactics at all-flash array startups like SolidFire. Noer adds:
"In HPC, deduplication and compression are negative performance factors. HPC workloads are not generally compressible."
Noer says Panasas sees its customers' HPC workloads as different from the random I/O-intensive enterprise workloads that benefit from flash data access acceleration in servers.
IBM's HPC architect, Crispin Keable, says GPFS – IBM's General Parallel File System – already supports storage tiering and he thinks his customers would see appreciable application acceleration from having flash-enhanced servers. He can foresee benefits that could be gleaned from having GPFS look after that server cache, and mentioned the idea of GPFS using flash itself to store file metadata and so spend less time looking up files and more time reading and writing data for clients. Panasas founder Garth Gibson is of a similar mind.
There may well be workload differences between Panasas and IBM HPC customers behind the two contrasting viewpoints. With Panasas emphasising its commercial credentials as an HPC supplier more strongly and with Exascale "down the road", it will be interesting to see how things develop. ®