It's splitsville for Panasas' blades: It's better for the metadata, kids

1 becomes 2: Director, storage blades must move apart... in order to grow

Analysis Panasas has separated out its Director blades in its latest ActiveStor iteration and put them in an ActiveStor Director 100 controller component product line to scale performance and capacity separately.

Panasas delivers scale-out and parallel file system access for high-performance computing with its ActiveStor nodes.

The company used to bring out ActiveStor (AS) models regularly every 12-18 months with, for example, the AS16 followed by the AS18, AS19 and current AS20.

The CPU, memory and capacity components were carefully related in these products to produce an optimum mix of performance and capacity. Each AS model has a director blade, DB20 for example, and 11 storage blades in its 4U x 12-slot chassis, with the storage blades having their own Xeon controllers to look after blade-level storage operations.

Now data volumes in its HPC parallel file system space have grown, as has the number of small files that have to be dealt with, meaning more metadata-level processing by the director processors. If a metadata lookup takes as long as the file lookup itself, that is far from optimal, to say the least.

So it was time to separate out the Director functionality and let it grow apart from the storage blade boxes.

[Diagram: Disaggregating the director function]

AS Director 100

By centralising the director functionality on a separate director component, metadata lookup performance can be increased, it explained. A Panasas spokesbod told us: "The ASD-100 Director has double the metadata processing performance of the prior generation DB20 Director blade."

But how... and why?

"The ASD-100 Director has roughly double the CPU performance - we are shipping with an Intel Xeon E5-1630 v4 quad-core Broadwell running at 3.7GHz - and exactly double the RAM capacity of the prior generation DB20 Director blade. The software on the ASD-100 is also a bit more efficient as a result of the upgraded FreeBSD operating system foundation and ongoing optimization of our software. The net result is 2x metadata performance."

What is the hardware base? It is a 2U box with four processor nodes, a 500GB SSD for local storage, 8GB of NVDIMM, 96GB of DDR4 RAM, a 2 x 40GbitE/4 x 10GbitE Chelsio NIC, and dual redundant power supplies.

AS Hybrid 100

This is the direct storage-level follow-on from the AS20, with all 12 slots taken up by storage blades. It uses 12TB helium-filled HGST disk drives and 1.9TB SSDs in a hybrid flash-disk system. The disk drives range from 4TB through 6TB, 8TB and 10TB to 12TB, and the SSDs come in 480GB, 960GB or 1.9TB capacities. Panasas also replaced the single Xeon storage blade processor with an Intel Atom C2558 CPU (2MB cache, 2.4GHz). This new SoC uses 22 per cent less power than before.

[Photo: AS Hybrid 100 with director shelf on top (left) and shelves on top (right)]

With the new director chassis and these SSD-loaded hybrid drive shelves, the new ActiveStor has hardware horsepower that's a heck of a lot faster than the AS20. And we should expect more, as Panasas has begun talking about next-generation storage. Should we be thinking about 3D XPoint media? NVMe? It wouldn't say.

But what about the PanFS operating system software?

PanFS 7.0

We already know it has an upgraded FreeBSD foundation. PanFS provides NFS, SMB or DirectFlow access to the underlying storage. There is an improved NFS server implementation, and there are DirectFlow enhancements.

How?

"We have code in our Direct Flow Client that speeds up a file-tree-walk such as the Linux 'find' tool would perform by issuing 'stat' operations asynchronously and in the background just like your typical system issues read-ahead calls when reading sequentially through a file.  We do read-ahead when reading sequentially through a file too of course, but the stat-ahead feature is a unique advantage to PanFS.
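Panasas hasn't published how its DirectFlow client implements this, but the general idea of stat-ahead can be sketched in a few lines of Python: while a tree walk visits each directory, the stat() calls for its entries are issued concurrently in the background, so the results are already in hand by the time the walker asks for them. The function name and thread-pool approach below are our own illustration, not Panasas code.

```python
# Illustrative sketch of "stat-ahead" (not Panasas's implementation):
# issue stat() calls for a directory's entries concurrently, the way
# read-ahead prefetches data when reading sequentially through a file.
import os
from concurrent.futures import ThreadPoolExecutor

def walk_with_stat_ahead(root, workers=8):
    """Yield (path, stat_result) pairs, prefetching stats in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for dirpath, dirnames, filenames in os.walk(root):
            paths = [os.path.join(dirpath, name) for name in filenames]
            # Kick off every stat() at once instead of one at a time;
            # by the time we consume a result, it is usually ready.
            futures = {p: pool.submit(os.stat, p) for p in paths}
            for p in paths:
                yield p, futures[p].result()
```

On a local filesystem the win is modest, but on a parallel file system where each stat() is a network round-trip to a metadata server, overlapping them is where a 'find'-style tree walk gets its speed-up.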

"So, while we like unfair comparisons, the time to read a zero-length file is a more representative comparison to our competition.  Reading a zero-length file on PanFS is a purely metadata operation, the Storage Blades are not required, so it is also a fair measure of the performance increases that the ASD-100 will deliver.

"Our internal benchmarks using the mdtest tool* are indeed showing more than a 50 per cent reduction in the latency of reading a zero length file for a single process on a single client, and a greater than 5x increase in the number of zero length files that can be read in aggregate when multiple clients are operating.  We are only in beta release on the ASD-100 at this point and expect to tune the system even more as a result of data we’re collecting in real-world conditions now, and we hope to raise performance even further before general availability in Q1-2018.  As such, we won’t be releasing specific performance numbers until then."
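The zero-length-file point is worth unpacking: opening, reading and closing an empty file touches only directory and inode metadata, never the storage blades' data path, so its latency isolates the metadata service. A rough microbenchmark of that operation (our own illustration, not mdtest itself) looks like this:

```python
# Rough illustration of a zero-length-file read microbenchmark (not
# mdtest): open/read/close of an empty file is a pure metadata
# operation, so its latency reflects metadata-server performance.
import time

def time_zero_length_reads(paths):
    """Return mean seconds per open/read/close of zero-length files."""
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            data = f.read()
            assert data == b""  # zero bytes: no data path is exercised
    return (time.perf_counter() - start) / len(paths)
```

The real mdtest tool does this at scale across many processes and clients, which is how Panasas gets its aggregate 5x figure.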

It thinks it's seeing around 15 per cent more throughput at present.

Panasas has also added a better and dynamic GUI.

+Comment

These systems are backwards-compatible with Panasas' existing AS20 and other systems. By cutting the cord, as it were, between the director functions and storage blade functions, and also introducing SSD blades, Panasas has suddenly enabled customers to specify AS 100-level feature combinations better customised to their specific environments.

This is quite apart from the sheer horsepower boost from new CPUs, fatter disks and the addition of SSDs. Suppose you forklift-replaced an AS20 installation with this AS100-level kit - what kind of performance boost might you see? Obviously it's a workload-dependent, your-mileage-may-vary kind of thing, but you might start thinking about a 50 to 80 per cent improvement envelope.

Availability? By the end of the year we'll see the AS Hybrid 100 and the DirectFlow enhancements. The AS Director 100 and PanFS 7.0 will ship in the first quarter of 2018. ®

* You can find the mdtest metadata test tool source here.
