A server the size of a credit card
The basic unit of computing in the SM10000 server cluster is an Atom machine with four components: the Atom Z530 processor, which runs at 1.6 GHz and supports two execution threads; the "Poulsbo" US15W chipset; the SeaMicro ASIC, which virtualizes I/O and implements the fabric; and a SODIMM memory slot. This server measures about 2.2 inches by 3 inches, with the memory module on one side and the other components on the other. That's reducing a server from the size of a pizza box to the size of a credit card. Here's how they lay out on a single SeaMicro server board:
The SeaMicro SM10000 server board.
As you can see from the picture above, the SeaMicro SM10000 server board has eight Atom servers (one processor and one chipset apiece) on a single printed circuit board. The smaller chip is the processor and the larger, darker chip is the chipset. The four ASIC chips that virtualize the I/O and implement the interconnect run along the bottom, and SeaMicro has designed the mobo so it links back into the chassis using two absolutely standard PCI-Express 2.0 x16 slots, side by side. (Let this be a lesson to you proprietary blade server makers with your non-standard backplanes and interconnect electronics). This board measures 5 inches by 11 inches.
The SM10000 chassis has 128 PCI-Express 2.0 x16 slots, arranged in eight vertical columns, four on the left and four on the right of the chassis. You plug in 32 boards (two columns of 16) on each side to get your 512 Atoms per chassis. Like so:
The SM10000, front and side view.
With each Atom server having its own 2 GB SODIMM, the chassis supports up to 1 TB of main memory across the 512 server nodes. The chassis has room for up to 64 SATA or solid state disk drives in the front (you always pull cold air over disks, so they need to be in the front). The disks and server boards are plug and play, so you don't have to reboot to add capacity. The servers need to talk to the outside world, of course, so the homegrown networking fabric and switch created by SeaMicro for the SM10000 has uplinks, which you can see here:
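The density figures above can be sanity-checked with a bit of arithmetic. A quick sketch, using only the slot, board, and SODIMM counts quoted in the article:

```python
# Sanity check of the SM10000 density figures: 128 slots in the
# chassis, each server board taking two PCIe x16 slots, eight Atom
# servers per board, and one 2 GB SODIMM per server.
SLOTS_PER_CHASSIS = 128
SLOTS_PER_BOARD = 2
ATOMS_PER_BOARD = 8
SODIMM_GB = 2

boards = SLOTS_PER_CHASSIS // SLOTS_PER_BOARD   # 64 boards per chassis
atoms = boards * ATOMS_PER_BOARD                # 512 Atom servers
memory_tb = atoms * SODIMM_GB / 1024            # 1.0 TB of main memory

print(boards, atoms, memory_tb)  # 64 512 1.0
```

The numbers line up: 64 boards of eight servers each gives the 512 nodes, and 512 two-gigabyte SODIMMs gives the 1 TB ceiling.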
The back-end of the SM10000 server chassis.
The chassis has different network modules, which offer 8 to 64 Gigabit Ethernet uplinks or 2 to 16 10 Gigabit Ethernet uplinks per chassis. The FPGAs implementing the load balancer and terminal software as well as the switching software are in the chassis.
The whole box burns under 2 kilowatts of juice running real workloads, about a quarter of the power that a rack of two-socket x64 boxes draws.
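Taking the 2 kilowatt figure at face value, the per-node power budget is striking. A back-of-envelope sketch, assuming (for illustration only) that the chassis draw is spread evenly across the server nodes:

```python
# Rough per-node power for the SM10000, assuming the full 2 kW is
# split evenly across the 512 Atom servers. This is an assumption:
# disks, fans, and the switch fabric all take a share in reality.
chassis_watts = 2000.0
nodes = 512

watts_per_node = chassis_watts / nodes
print(round(watts_per_node, 2))  # 3.91
```

Under 4 watts per server, even before accounting for the shared components, which is a different world from the hundreds of watts a two-socket x64 box draws.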
The SM10000 will be available on July 30, with a base configuration running $139,000.
By the way, there is nothing about the SeaMicro architecture that precludes the company from supporting whatever processor architecture it wants. If someone wanted a bunch of servers based on ARM processors and was willing to pay for it, you can bet that SeaMicro could build it. Ditto for protocols and ports coming off the interconnect fabric. The architecture can support Fibre Channel or converged enhanced Ethernet, which allows for Fibre Channel to be run over 10 Gigabit Ethernet.
For now, Feldman says that SeaMicro is looking ahead to a time when Intel puts an entire Atom as well as its chipset, memory controller, and other goodies on a single piece of silicon. At that time, SeaMicro should be able to get a lot more servers and cores onto a single SM10000 system board. And the company will also eventually be able to link multiple SM10000 chassis together for integrated management, like stackable network switches do today.
The SM10000 took three years and many millions of dollars to develop, and it could be very quick (a lot depends on the software), but it is nonetheless a complete unknown. That is not the kind of thing that endears a new technology to large, conservative customers. But the power and cooling problems at many hyperscale data centers are so bad that enthusiasm for the SM10000 product, which has been rumored since last summer, was quite high ahead of the launch.
"We have big orders," says Feldman, with a laugh. "And we have a good-sized backlog."
This might actually be a machine that Google buys instead of making itself. We'll see. ®
The Intel anti-fanboi speaks. I think you missed the point of the article. It's not that it uses Atom, it's that it ties lots of cheap processors together in a clever, dense package.
When you stop being a whinetard about x86 dominance of a huge part of the general computing space, maybe your opinion will count for something, but I won't hold my breath.
Until then, try hitting yourself with a cluestick - the rest of us would help, but we have better things to do.
@Still not cost effective..
Depends: that's the list price, and at list the Xeon boxes would cost the same.
Remember that whatever you pay for power-in you pay 3-5x as much for cooling to get power-out, more in Texas or New York in summer.
Space IS limited if your server room is full; this is a lot quicker/cheaper than getting planning permission to build another server room next door.
The article mentions Atom Z530 and then x64 - Z530 doesn't support x64 - only x86.