Cray dips toe in supercomputing-as-a-service

Gene research a test market for cloudy graph engine

With AWS, Google, and IBM's Watson already camped in the high-performance cloud business, it's hardly surprising that Cray would tread carefully as a late entrant into the supercomputer-as-a-service business.

The premium-level HPC vendor has decided to start small in both target market and geography: it's inked a deal with US data centre operator Markley to run genomic and biotech workloads for customers in a single bit barn in Cambridge, Massachusetts.

The service is based on Cray's Urika-GX appliance, the latest in a line of big data monsters that debuted in 2012.

Urika's architecture is built for graph applications: massively multithreaded, with a heritage stretching back to spookdom before Cray brought the technology to the public.

The Urika-GX landed last year, with Intel Xeon E5-2600 v4 processors (up to 48 nodes and 1,728 cores), 35TB of PCIe SSD storage, the Aries high-speed interconnect, and 22TB of on-board memory. It's pre-installed with OpenStack and Apache Mesos.

The hardware spec is nice, but it's the Cray Graph Engine that the company hopes will convince super-shy gene researchers the service is a better bet than spinning up a super service on one of the existing clouds.
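Cray hasn't published what a typical customer query looks like, but graph engines of this kind chew on problems such as tracing connections through gene- or protein-interaction networks. As a toy sketch of the shape of that workload (plain Python with made-up edges, not Cray's engine):

```python
from collections import deque

# Toy gene-interaction graph. The edges here are hypothetical,
# purely for illustration of the data structure.
interactions = {
    "BRCA1": ["TP53", "RAD51"],
    "TP53":  ["MDM2"],
    "RAD51": ["TP53"],
    "MDM2":  [],
}

def reachable(graph, start):
    """Breadth-first search: every node reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

print(sorted(reachable(interactions, "BRCA1")))
# → ['BRCA1', 'MDM2', 'RAD51', 'TP53']
```

The pitch of a purpose-built graph box is that this sort of irregular, pointer-chasing traversal — trivial at four nodes — scales miserably on conventional clusters once the graph runs to billions of edges.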

Cray's head of life science and healthcare told The Register's HPC sister publication The Next Platform a Cambridge sequencing centre that tested the offering hit a “five times speedup on parts of their overall workflow” compared to a standard cluster.

If you consider this a beta for a limited market, it's hardly surprising that Cray hasn't announced pricing yet – nor that for now, it looks more like a "time-share" model than a fully-cloudy offering, with access to the supercomputer-as-a-service booked through Markley.

One reason for picking Markley's data centre as the first home for the service is its local connectivity to major networks: the company claims more than 90 carriers and network providers have a presence in the facility.

That's important for potential customers, because they will (naturally enough) have to upload their data to the service before the resident supers can get busy crunching it.

So users can pre-test their data and scripts before pressing "go" on an expensive supercomputer run, there's also a virtualised Urika-GX.

For storage, Markley will offer a suitable array in the colo (if customers have their own storage in the facility, they can use that instead). ®
