Cray dips toe in supercomputing-as-a-service

Gene research a test market for cloudy graph engine

By Richard Chirgwin

Posted in HPC, 17th May 2017 01:03 GMT

With AWS, Google, and IBM's Watson already camped in the high-performance cloud business, it's hardly surprising that Cray would tread carefully as a late entrant into the supercomputer-as-a-service business.

The premium-level HPC vendor has decided to start small both in terms of target market and in geography: it's inked a deal with US data centre operator Markley to run genomic and biotech workloads for customers in just one bit barn located in Cambridge, Massachusetts.

The service is based on Cray's Urika-GX appliance, the latest in a line of big data monsters that first appeared in 2012.

Urika's architecture is specific to graph applications: massively multithreaded, with a heritage that reaches back to spookdom before Cray allowed it to reach the public.

The Urika-GX landed last year, with Intel Xeon E5-2600 v4 processors (up to 48 nodes and 1,728 cores), up to 35TB of PCIe SSD storage, the Aries high-speed interconnect, and 22TB of on-board memory. It's pre-installed with OpenStack and Apache Mesos.

The hardware spec is nice, but it's the Cray Graph Engine that the company hopes will convince super-shy gene researchers the service is a better bet than running up a super service on one of the existing clouds.
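Engines of this kind earn their keep on pattern-matching queries over large, irregularly linked datasets. As a rough flavour of what such a workload looks like in miniature, here's a sketch using Python's rdflib and an invented gene/pathway dataset; it illustrates the style of graph query involved, not Cray's own API or the Graph Engine itself.

```python
# Illustrative only: a toy graph pattern-matching query with rdflib.
# The gene/pathway triples below are made up for demonstration.
import rdflib

g = rdflib.Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:BRCA1 ex:participatesIn ex:DNARepair .
ex:TP53  ex:participatesIn ex:DNARepair .
ex:TP53  ex:regulates      ex:CellCycle .
""", format="turtle")

# Find every gene linked to the (hypothetical) DNA-repair pathway.
query = """
PREFIX ex: <http://example.org/>
SELECT ?gene WHERE { ?gene ex:participatesIn ex:DNARepair . }
"""
for row in g.query(query):
    print(row.gene)
```

At genomics scale the graphs run to billions of edges, which is where the massively multithreaded hardware above is meant to pay off.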

Cray's head of life science and healthcare told The Register's HPC sister publication The Next Platform that a Cambridge sequencing centre which tested the offering hit a “five times speedup on parts of their overall workflow” compared to a standard cluster.

If you consider this as a beta to a limited market, it's hardly surprising that Cray hasn't announced pricing yet – nor that for now, it looks more like a “time-share” model than a fully-cloudy offering, with access to the supercomputer-as-a-service booked through Markley.

One reason for picking Markley's data centre as the first home for the service is its local connectivity to major networks. The company claims more than 90 carriers and network providers have a presence in the facility.

That's important for potential customers, because they will (naturally enough) have to upload their data to the service before the resident supers can get busy crunching it.

So that users can pre-test their data and scripts before pressing “go” on an expensive supercomputer run, there's also a virtualised Urika-GX.

For storage, Markley will offer a suitable array in the colo (if customers have their own storage in the facility, they can use that instead). ®
