AMD girds its engineering cloud for X86 battle
No Intel Inside – and whitebox Solaris workstations
The battle for the X86 market may end up in the desktops, laptops, and servers of the world, but it begins in the giant compute grids that engineers use to simulate, test, and increasingly to design the future processors we will all crave if they do their work right.
Advanced Micro Devices has undergone many changes in the past several years as it bought graphics chip maker ATI Technologies, created its own server chipsets, came out with new "Bulldozer" and soon "Piledriver" cores for its Fusion and Opteron processors, and brought in a whole new technical and management team to fight for a larger share of the X86 business. Behind the scenes, AMD's IT department has been hard at work, too, building out its internal cloud to support the design and testing work of its 5,000 or so engineers, who comprise a large portion of the chip maker's 12,000 employees worldwide.
When Farid Dana came on board at AMD in 2008 as director of IT, the company's back-end computing infrastructure was a mess, as it typically is at any large company that has many divisions and has made major acquisitions. AMD had 18 data centers, which Dana said is a lot for a company its size, so one of the first things Dana did was start a data center consolidation project. At the same time, AMD has engineers working from design centers all over the world, and it needed to provide a single design cloud that they could use, but with resources sufficiently distributed to avoid the latencies that are inherent in accessing compute capacity over long-distance data links.
"We wanted engineers to sit anywhere on the globe and be able to design chips," Dana tells El Reg.
So in 2009, AMD hatched a plan to consolidate its 18 data centers – some of which it inherited through the ATI acquisition – down to three. It would keep its data center at headquarters in Austin, Texas, shutter data centers in Markham, Ontario, and Sunnyvale, California that were used by ATI engineers for making graphics chips, and open up two new data centers - one in Suwanee, Georgia (outside of Atlanta) and another in Kuala Lumpur, Malaysia that would provide low latency to AMD employees working from the Asia/Pacific region. (Dana wanted to be crystal clear here: The engineers in Markham and Sunnyvale are keeping their jobs and staying right where they are; they are just going to be accessing the chip design grid remotely rather than locally.)
Some of them with gritted teeth, to be sure, and still others white-knuckling their high-end, homegrown, whitebox Solaris workstations, which you will have to pry from their cold, dead hands.
The distributed grid that AMD uses for chip design and verification has a total of 120,000 cores at the moment, and is built from several vintages of Hewlett-Packard ProLiant Opteron machines, with a smattering of Dell PowerEdge boxes and some home-grown whiteboxes.
HP hails from Houston and Dell from Austin, so the Buy Texas mentality gives even odds to HP and Dell. HP was not the first server maker to endorse the Opteron chip - IBM was first with a single machine, followed by an enthusiastic Sun Microsystems - but HP jumped in next, well ahead of Dell, and has been arguably more eager to sell Opteron machines over the past seven years. Dell does a very tidy bespoke Opteron server biz through its Data Center Solutions unit, but it doesn't often admit it because those DCS customers don't like Dell talking about their competitive edges.
The Austin, Markham, Sunnyvale, and Kuala Lumpur data centers together have around 80,000 cores of the total 120,000 cores in the design cloud. The Austin data center is rated at 3.8 megawatts and is not only maxed out in terms of how much power it can use, but also cannot be expanded without the power company routing new power feeds out to the facility - something that would be very expensive to do. But AMD needed to significantly expand its design cloud's capacity to speed up development as well as improve the fidelity of its designs, and so the company decided to consolidate CPU and GPU design on a distributed cloud while at the same time building new data centers in Georgia and Malaysia.
The Georgia data center is a 10 megawatt facility and it is where AMD will be adding the bulk of the incremental design cloud capacity first. The Suwanee facility opened up on April 1, and is using HP ProLiant BL465c blade servers equipped with the latest "Interlagos" Opteron 6200s. For the first time, AMD is also going with HP switching inside the data center, picking A12500 end-of-row and E5280 top-of-rack switches to lash the blades together into a cluster. At the moment, the Suwanee data center has 2 megawatts of power activated and the 40,000 cores are drawing about 1.4 megawatts, down from the 2 megawatts that 40,000 cores using the two prior generations of Opteron processors would draw if AMD was using them in the new facility, according to Dana's math.
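For the curious, Dana's math pencils out to roughly 35 watts per core for the new blades versus 50 watts per core for the older Opterons - a 30 per cent reduction. A back-of-the-envelope check, using only the figures quoted above:

```python
# Back-of-the-envelope check of the power figures quoted above.
cores = 40_000
interlagos_draw_mw = 1.4   # megawatts drawn by the Opteron 6200 blades
prior_gen_draw_mw = 2.0    # estimated draw for the same core count, older Opterons

watts_per_core_new = interlagos_draw_mw * 1_000_000 / cores   # 35.0 W/core
watts_per_core_old = prior_gen_draw_mw * 1_000_000 / cores    # 50.0 W/core
power_savings_pct = (1 - interlagos_draw_mw / prior_gen_draw_mw) * 100

print(watts_per_core_new, watts_per_core_old, round(power_savings_pct))
```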
AMD uses Platform Computing's Load Sharing Facility (LSF) to manage and schedule jobs on the design cloud's EDA software stack; the cloud itself runs on Linux, with chunks running Red Hat Enterprise Linux or SUSE Linux Enterprise Server. LSF knows the design cloud has a mix of different hardware configurations and processor vintages, is aware of the memory, CPU, and I/O requirements of different applications, and can dispatch a job to the appropriate part of the design cloud (in any of the five data centers that are linked to form the 120,000-core cloud).
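The resource-matching idea at the heart of that kind of scheduling can be sketched with a toy dispatcher. To be clear, this is an illustrative sketch, not LSF code or its API, and the host pools and job here are invented:

```python
# Toy illustration of resource-aware job dispatch in the spirit of LSF:
# each job declares its requirements, and the dispatcher picks the first
# host pool that can satisfy them. Pool and job names are invented.

hosts = [
    {"name": "austin-older-opterons", "cores_free": 512,  "mem_gb_per_slot": 4},
    {"name": "suwanee-interlagos",    "cores_free": 4096, "mem_gb_per_slot": 16},
]

def dispatch(job, hosts):
    """Return the name of the first host pool that meets the job's needs,
    reserving its cores, or None if the job must wait in the queue."""
    for host in hosts:
        if (host["cores_free"] >= job["cores"]
                and host["mem_gb_per_slot"] >= job["mem_gb_per_slot"]):
            host["cores_free"] -= job["cores"]  # reserve the slots
            return host["name"]
    return None  # job stays queued until capacity frees up

# A memory-hungry verification job skips the small-memory pool and lands
# on the big-memory Interlagos pool.
job = {"name": "rtl-verify", "cores": 1024, "mem_gb_per_slot": 8}
print(dispatch(job, hosts))  # suwanee-interlagos
```

The point of the sketch is simply that the scheduler, not the engineer, decides which slice of the 120,000-core grid a job runs on.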
The design cloud also has virtualized server slices using VMware's vSphere hypervisor stack and View virtual desktop infrastructure (VDI) software for giving engineers what would otherwise be apps running on local workstations and clusters. Some engineers use Windows workstations to run certain design apps locally.
"We still have hardware engineers who still want a Unix workstation," says Dana. "You can't get them to let go of it."
In these cases, Solaris is the preferred Unix environment, and because Solaris supports both AMD Opterons and graphics cards, the simplest thing for the IT department to do is hammer together some whitebox Solaris workstations to make these engineers happy.
Of the 5,000 engineers working at AMD, about half are using the VDI slices from thin clients and PCs to do their work. Some of them might be doing it from the beach or the bar without their boss even knowing it. Which doesn't matter, anyway. What does matter is making chips that have competitive advantages over Intel Core and Xeon designs. ®