Cray dips toe in supercomputing-as-a-service

Gene research a test market for cloudy graph engine

By Richard Chirgwin

With AWS, Google, and IBM's Watson already camped in the high-performance cloud business, it's hardly surprising that Cray would tread carefully as a late entrant into the supercomputer-as-a-service business.

The premium-level HPC vendor has decided to start small both in terms of target market and in geography: it's inked a deal with US data centre operator Markley to run genomic and biotech workloads for customers in just one bit barn located in Cambridge, Massachusetts.

The service is based on Cray's Urika-GX appliance, the latest in a line of big data monsters first introduced in 2012.

Urika's architecture is specific to graph applications: it's massively multithreaded, with a heritage in spookdom from the days before Cray let it out to the public.

The Urika-GX landed last year with Intel Xeon E5-2600 v4 processors (up to 48 nodes and 1,728 cores), 35TB of PCIe SSD storage, the Aries high-speed interconnect, and 22TB of on-board memory. It comes pre-installed with OpenStack and Apache Mesos.
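
For the curious, here's a rough idea of what poking at that pre-installed Mesos layer might look like before queuing up work. This is purely an illustrative sketch: the hostname is invented, and it simply asks the Mesos master's standard HTTP state endpoint how busy the cluster is.

    # Illustrative only: the master hostname below is made up, but
    # /master/state on port 5050 is the stock Apache Mesos HTTP API.
    import requests

    MESOS_MASTER = "http://urika-master.example.com:5050"  # hypothetical address

    state = requests.get(f"{MESOS_MASTER}/master/state", timeout=10).json()

    # Each Mesos agent reports its total and in-use resources.
    total_cpus = sum(agent["resources"]["cpus"] for agent in state["slaves"])
    used_cpus = sum(agent["used_resources"]["cpus"] for agent in state["slaves"])

    print(f"Agents reporting: {len(state['slaves'])}")
    print(f"CPUs in use: {used_cpus:.0f} of {total_cpus:.0f}")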

The hardware spec is nice, but it's the Cray Graph Engine that the company hopes will convince super-shy gene researchers the service is a better bet than spinning up an HPC stack on one of the existing clouds.
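
The Graph Engine is, at heart, an RDF store queried with SPARQL, so a genomics workload looks less like a batch job and more like a graph traversal. The sketch below is illustrative only – the endpoint URL, ontology prefix and predicates are invented – but it gives a flavour of the multi-hop queries the hardware above is built to chew through.

    # Illustrative only: endpoint, prefix and predicates are hypothetical;
    # the query itself is standard SPARQL 1.1 property-path syntax.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://cge.example.com:3756/sparql")  # hypothetical endpoint
    sparql.setReturnFormat(JSON)

    # Find genes linked to a pathway through any number of interaction
    # hops -- the kind of traversal graph engines exist to accelerate.
    sparql.setQuery("""
        PREFIX ex: <http://example.org/genomics/>
        SELECT DISTINCT ?gene WHERE {
          ?gene ex:interactsWith+ ?partner .
          ?partner ex:memberOf ex:PathwayOfInterest .
        }
        LIMIT 100
    """)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["gene"]["value"])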

Cray's head of life science and healthcare told The Register's HPC sister publication The Next Platform a Cambridge sequencing centre that tested the offering hit a “five times speedup on parts of their overall workflow” compared to a standard cluster.

If you consider this a beta for a limited market, it's hardly surprising that Cray hasn't announced pricing yet – nor that, for now, it looks more like a “time-share” model than a fully cloudy offering, with access to the supercomputer-as-a-service booked through Markley.

One reason for picking Markley's data centre as the first home for the service is its local connectivity to major networks: the company claims more than 90 carriers and network providers have a presence in the facility.

That's important for potential customers, because they will (naturally enough) have to upload their data to the service before the resident supers can get busy crunching it.

So users can pre-test their data and scripts before pressing “go” on an expensive supercomputer run, there's a virtualised Urika-GX.

For storage, Markley will offer a suitable array in the colo (if customers have their own storage in the facility, they can use that instead). ®
