Original URL: https://www.theregister.com/2012/06/25/t_platforms_t_mini_p_baby_super/

T-Platforms to roll out itsy-bitsy HPC cluster

A cute li'l Windows HPC Server on casters

By Timothy Prickett Morgan

Posted in HPC, 25th June 2012 23:41 GMT

If you are looking for a desktop supercomputer cluster that can use x86 CPUs or a mix of x86 CPUs and GPU coprocessors to run simulations, then Russian supercomputer maker T-Platforms has a machine for you. Or rather, it will come this fall.

It's called the T-Mini P, and it is basically a P8000 blade enclosure mounted on wheels. Not so you can tool around the office on top of it, riding it like a crazy stupid expensive scooter, but so you can roll it into a corner and treat it like a personal cluster.

At a $50,000 price tag, the T-Mini P is meant to fit within the discretionary budget that a lot of departments have. That's right, it's designed to be snuck into the office past the bean counters – El Reg wonders how quiet those wheels are – so you can discreetly run parallel Linux and Windows workloads.

The li'l fellow doesn't have enough disk drives to be useful as a baby Hadoop cluster or to do other kinds of analytics work, so don't buy this T-Mini P for its unintended purpose. This baby is all about number-crunching and simulation.

The T-Mini P plays no favorites between Intel and AMD for CPUs, or between Nvidia and AMD for graphics cards; it does, however, only offer Nvidia Tesla cards as GPU coprocessors, basically because AMD's FireStream GPU coprocessors have fallen off the edge of the earth.

The P8000 chassis can take one head node and either eight skinny nodes or four fat nodes, the latter being large enough to put GPUs in the blade to act as supplementary compute engines. You can use any x86 processor with a 115 watt thermal design point or lower.

All of the nodes in the T-Mini P have 16 main memory slots for DDR3 sticks, and with 16GB sticks you top out at 256GB per node.

Interestingly, Microsoft's Windows HPC Server 2008 R2 variant is the primary operating system intended for the boxes, not Linux in any particular flavor. However, Linux will obviously run on the setup, although T-Platforms is making no commitment about supporting Linux on this machine, as it most certainly does on its full-scale clusters.
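To give you a flavour of what a parallel workload on this box looks like, here is a minimal MPI hello-world in C – our own sketch, not anything shipped by T-Platforms – that would build against MS-MPI under Windows HPC Server 2008 R2 or against Open MPI or MPICH under Linux, with each rank reporting which blade it landed on:

    /* Minimal MPI sketch (not T-Platforms code): each rank reports the
       node it is running on, so you can watch a job spread across the
       T-Mini P's blades. Build with mpicc under Linux, or with the
       MS-MPI SDK under Windows HPC Server 2008 R2. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(node, &namelen);

        printf("Rank %d of %d running on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

Launch it with mpiexec across however many of the compute blades you have filled and you have, in miniature, the sort of job this machine is built for.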

T-Platforms T-Mini P, front view

The machine comes with a head node or workstation node, depending on whether you want to configure it as a baby compute cluster or as a visualization engine.

The P200H is a two-socket blade based on the Intel Xeon E5-2600 processor announced in March, and it uses Intel's C602-A chipset. This P200H node has plenty of storage, with up to 16 SATA or SAS drives, plus four PCI-Express 3.0 slots and four Gigabit Ethernet ports on the mobo.

The P205H is the Opteron version of the head node, based on the Opteron 6200 processor and the SR5670/SP5100 chipset; it sports four PCI-Express 2.0 slots and two Gigabit Ethernet ports.

Either node can have up to two discrete graphics cards (Quadro for Nvidia or FirePro for AMD), and if you want GPU coprocessors, there's room for two Tesla cards. You can add QDR or FDR InfiniBand or 10GE or 40GE Ethernet adapters to the PCI slots if you want killer networking.

The standard compute node is a two-socket machine, just like the head node, but with room for only two SATA or SAS disk drives or SATA solid state drives. The compute nodes have two Gigabit Ethernet ports on their boards and no expansion slots at all; there is an optional mezzanine card into which you can snap a single-port QDR InfiniBand adapter.

T-Platforms T-Mini P, rear view

If you want to add GPUs to the cluster, you need to use the fat nodes, and only four of them fit inside the chassis along with their GPUs. The P200F node is a two-socket Xeon E5-2600 machine that has four drive bays and a single PCI-Express 3.0 slot to hook a Tesla GPU coprocessor into. The P205F node is a two-socket Opteron 6200 fat node with one PCI-Express 2.0 slot for linking to the Tesla, and it is otherwise indistinguishable from the P200F fat node.
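For the curious, here is a minimal sketch in C – again ours, not T-Platforms' code – that uses the CUDA runtime API to enumerate whatever Tesla coprocessors a fat node is carrying, the sort of sanity check you would run before pointing a CUDA-accelerated simulation at the box:

    /* Minimal CUDA device-query sketch (not T-Platforms code): lists
       the GPU coprocessors visible in a fat node. Link against the
       CUDA runtime, e.g. nvcc query.c or gcc query.c -lcudart. */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable GPU coprocessors found\n");
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %d multiprocessors, %.1f GB memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }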

A configuration with a head node and four server nodes can run off two separate 110/120 volt wall sockets, but if you want to add GPUs or fill out the remaining four CPU nodes, you are going to need 220/240 volt power.

In the back of the chassis, you can slide in a 20-port Gigabit Ethernet switch that has two 10GE uplinks or a 20-port QDR InfiniBand switch if you feel the need for speed.

As you can see from the front of the chassis, the T-Mini P also has an integrated chassis management controller and a touch LCD display on the front that shows what is going on inside the baby cluster.

The T-Mini P will be available in September. ®