Quanta crams 512 cores into pizza box server
Tilera chips spotted in wild
Quanta Computer – the Taiwanese manufacturer that builds the majority of the world's laptops and that wants to break into the server racket in a big way – has started shipping its first production machine based on the massively multicored processors designed by chip upstart Tilera.
The Quanta machine, known as the QSSC-X5-2Q, is a follow-on to the prototype SQ2 server that Quanta was showing off when Tilera announced its ambitious roadmap: moving from the 64 cores and mesh interconnect of its TilePro64 processors up to 100 cores with the Tile-Gx100 chips in the second half of 2011, and on to 200 cores with the future "Stratton" processors due in 2013.
In the prototype box, Quanta tried out the 900 MHz TilePro64 part. But in the production-grade QSSC-X5-2Q box that the two companies were showing off at the Structure Big Data 2011 conference in New York last week, the chip speed was dropped to 667 MHz, exactly matching the speed of the 667 MHz DDR2 main memory used by the system. Here's a top view – admittedly not a great photo – of the Quanta box:
And here's a zoom in of the individual server node:
That system board consists of two TilePro64 processors, with each processor being configured as a single, independent system node. Each processor has eight memory slots allocated to it, for a total of up to 32 GB of main memory, plus two Gigabit Ethernet ports, two 10 Gigabit Ethernet ports, and a single console and management port (these are 10/100 Mbit Ethernet ports).
Two of these boards are placed side-by-side in the chassis and stacked two high, for a total of eight server nodes. Eight nodes at 64 cores each gives you 512 total cores in a 2U chassis. The server boards slide out on individual trays and share two 1,100 watt power supplies that are stacked on top of each other and that are put in the center of the chassis. Each node has three SATA II ports and can have three 2.5-inch drives allocated to it; the chassis holds two dozen drives, mounted in the front and hot pluggable.
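Tallying the figures above – a trivial back-of-the-envelope check, with the counts taken straight from the chassis description:

```python
# Sanity check of the QSSC-X5-2Q chassis arithmetic described above.
boards_per_chassis = 4          # two side-by-side, stacked two high
nodes_per_board = 2             # each TilePro64 is an independent node
cores_per_node = 64             # one TilePro64 per node
drives_per_node = 3             # three SATA II ports per node

nodes = boards_per_chassis * nodes_per_board
print(nodes)                        # 8 server nodes
print(nodes * cores_per_node)       # 512 cores in the 2U chassis
print(nodes * drives_per_node)      # 24 hot-pluggable 2.5-inch drives
```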
At the conference, Ihab Bishara, director of cloud computing products at Tilera, said the Quanta box was shipping now. (It was expected to start shipping in the fourth quarter, but that's the server business for you.)
Early customers looking at the box include hyperscale Web companies wanting to run memcached caching, as well as telcos that want something to do transcoding and what Bishara called "lawful interception," or what you and I would probably call spying on your citizens. The box is also being considered for Web serving, and Tilera and Quanta have ported a NoSQL database to the machines, as well as Hadoop, the MapReduce code open sourced by Yahoo! and now controlled by the Apache Software Foundation. (Hadoop support was being demonstrated at the big data event.)
While not providing any actual benchmark test results, Bishara said that, running Web applications and memcached, the Tilera-based boxes would be able to replace inefficient two-socket Xeon-based servers, node for node, on these workloads – and do so at a quarter of the power per node: 50 watts for a single-socket TilePro64 node compared to around 200 watts for a two-socket Xeon node. Like many of you, I am skeptical of such claims, and I look forward to the actual benchmark tests that prove them.
The Tilera chips support the Linux 2.6.36 kernel, and are widely believed to be based on the MIPS architecture (something Tilera has never confirmed). The chips support KVM for managing server virtualization down on the chips, and support the PHP, Ruby, Perl, Python, and Java programming languages. The GNU gcc and g++ compilers and the gdb debugger are also supported.
Pricing was not announced.
I doubt very much that Google, which loves cheapo x64 servers, would ever use a Tilera-based box, but some upstart competitor wanting to push the thermal envelope down - say, a punk like Zuckerberg over at Facebook - might just do such a thing. ®
Fortunate, is it not,
that we who post here live in places like the US or the UK or – as I do – in Sweden, where no laws that permit government spying on citizens are on the books; or if they are, they are only employed to protect us peaceful citizens from terrorists, spies, people who share copyrighted material without permission, and other such dangers to the health of the state...
Spying on your citizens?
I must say I'm shocked that anything like that could come out of China... ;)
Spindle:CPU ratio bad for Hadoop
I'm putting on my Hadoop committer hat and noting some things about Hadoop on this box (independent of any other HPC uses):
1. Ignoring point (3) below, you don't need to "port" Apache Hadoop to the system, provided you can bring up RHEL and Java on it – ideally the 64-bit JVM from Sun, that being the only one the Hadoop team opts to care about.
2. There's not enough storage. 24 HDDs for that many CPUs? The current generation of Hadoop servers puts 12x 3.5" HDDs in a 1U server with 6-12 x86-64 cores, giving a ratio of one core to one or two HDDs. That's massive storage capacity and good IO bandwidth, with good CPU. Why? Storage capacity with some local datamining is the driving need. It's why HDD and not SSD is the storage, and why 3.5" disks are chosen over 2.5" ones: it brings your cost per petabyte down.
3. The use of independent servers gives you better failure modes. If you built a rack out of these systems, you would need to somehow change Hadoop's topology logic to know that a set of server nodes is inter-dependent, so that copies of blocks of the files (usually 128+ MB blocks) are not stored on server instances in the same physical server. There's been discussion of making the placement policy pluggable, so Quanta could write a new Java class to implement placement differently, but as the plugin interface isn't there yet, they can't have done so.
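Hadoop's real placement logic is Java code inside the NameNode, and as noted above no pluggable interface existed at the time – so purely as an illustration of the constraint being described in point (3), here is a hypothetical sketch (all names invented) of a chassis-aware replica check:

```python
# Hypothetical illustration only: Hadoop's placement policy lives in Java in
# the NameNode, and no plugin interface existed when this comment was written.
# The idea: replicas of a block should not land on two logical nodes that
# share a physical chassis (and hence its power supplies).

def placement_ok(replica_nodes, topology):
    """True if no two replicas of a block share a physical chassis."""
    chassis = [topology[node] for node in replica_nodes]
    return len(set(chassis)) == len(chassis)

# Eight logical nodes per 2U chassis, as in the Quanta box described above:
topology = {f"node{i}": "chassis-A" for i in range(8)}
topology.update({f"node{i}": "chassis-B" for i in range(8, 16)})

print(placement_ok(["node0", "node8"], topology))           # True: different chassis
print(placement_ok(["node0", "node8", "node9"], topology))  # False: node8 and node9 share one
```

A real implementation would also have to fold this into Hadoop's existing rack-awareness, since "same chassis" is effectively a failure domain one level below "same rack".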