Why not build a cluster out of WORKSTATIONS?
Oz chap explains how he powered 27,320 x 3072 pixel super-display
Australia's Monash University has just opened an amazing visualisation facility called Cave 2.
The facility offers an eight-metre-long, 320-degree wall comprising 80 3D monitors with a combined resolution of 27,320 x 3072 pixels.
“We spend millions of dollars building supercomputers and then look at the results they produce on a $200 monitor,” says Dr David Barnes of the university's Life Sciences and Computation Centre. Cave 2 is an attempt to give researchers a much better look at the results all that supercomputing muscle produces.
Streaming 3D data in real time to 80 monitors requires just the kind of hellacious quantity of bandwidth and computing grunt that one would imagine demands a nice fat server cluster.
So why is Cave 2's cluster built of workstations?
Graphics capability is one reason: the machines Barnes picked can each run two of the Quadro K5000 cards he chose to get the job done. Servers aren't set up to do that, and aren't optimised for graphics.
Another reason for the workstation choice is an unusual in-rack arrangement that sees each pair of workstations share what Barnes calls a “quad controller” that takes video input and grooms it for consumption by Cave 2. Those machines and their 2U power supplies take up so much space that density becomes less of an issue.
The CAVE 2 visualisation facility
A key factor in the decision is that the workstations aren't significantly less manageable than servers. The CPUs shipped with Cave 2's chosen Dells offer Intel vPro, which Barnes says isn't markedly less useful than the Intelligent Platform Management Interface (IPMI) offered by many servers.
A concession: Cave 2 is a very niche application. But plenty of other applications are using GPUs for compute, and if the rig is no less manageable than servers, what's not to like?
Over to you, readers: why not build a cluster out of workstations? ®