AMD's SeaMicro: 'We're the mystery vendor behind Verizon's cloud'

Big win for big box means proprietary hardware ISN'T DEAD, it's only resting

Verizon's ambitious new cloud services sit on an undisclosed number of 512-core SeaMicro appliances from AMD, casting doubt on the conventional belief that clouds can only be made of pizza-box servers provided by low-cost suppliers.

AMD announced the SM15000 collaboration on Monday. The partnership represents a new strategy for the chipmaker, which spent more than two years working with Verizon to co-develop special features built atop the tech.

Just as AMD does chip customization for its game console customers Microsoft and Sony, and Intel does Xeon tweaks for demanding data center clients such as Facebook and eBay, AMD has shared expertise with Verizon to let it get the most out of the platform.

"We collaborated with them to invent a set of tech that allowed their software to take unusual advantage [of SeaMicro]," AMD's server chief Andrew Feldman told The Register.

The company has been working with Verizon "for about two years" on the project, and at any one point in time there have been 10 to 15 AMD people "collaborating with or implementing" the results of the Verizon collaboration, Feldman said.

The tweaks have let Verizon introduce fine-grained server configuration options, allowing more flexibility in instance-sizing than rivals such as Amazon and Google: administrators can select a processor speed anywhere between 500MHz and 2GHz and scale DRAM up and down in 512MB increments.
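
To picture what that flexibility looks like in practice, here is a minimal sketch of the sizing constraints; the function and field names are illustrative assumptions, not Verizon's actual provisioning API – only the 500MHz–2GHz and 512MB figures come from the announcement:

```python
# Hypothetical sketch of the fine-grained instance sizing described above.
# The function name and validation rules are assumptions for illustration,
# not Verizon's actual provisioning API; only the 500MHz-2GHz and 512MB
# figures come from the article.

def validate_instance(cpu_mhz: int, ram_mb: int) -> dict:
    """Check a requested instance size against the reported constraints."""
    if not 500 <= cpu_mhz <= 2000:
        raise ValueError("CPU speed must be between 500MHz and 2GHz")
    if ram_mb <= 0 or ram_mb % 512 != 0:
        raise ValueError("DRAM must be requested in 512MB increments")
    return {"cpu_mhz": cpu_mhz, "ram_mb": ram_mb}

# Example: a 1.5GHz instance with 2.5GB of RAM
print(validate_instance(1500, 2560))
```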

It also lets Verizon share disks across multiple server instances rather than mandating a dedicated drive per machine, enforce stricter network security policies, and cut the time it takes to provision servers.

The SeaMicro appliances afford the company greater network flexibility than before, thanks to their programmable network processing units (NPUs).

"There is a fabric in each SeaMicro chassis," Verizon's chief cloud technology officer John Considine tells El Reg. "The hosts are directly connected to this fabric. There are also NPU's connected to this fabric which are connected to our core switches. The NPUs run at 100Gb/s and rewrite the packets as they flow through the system."

The underlying technology is SeaMicro's "Dynamic Computation-Allocation Technology" (DCAT), which pairs CPU management with stateful load balancing.

DCAT creates virtual IP addresses that can be "assigned to pools of computations on as few as one core and as many as 768 cores," according to a SeaMicro document.

"Traffic can be directed to a pool of CPUs to ensure that they are operating in the maximally efficient range, while allowing other CPUs to enter deep-sleep mode or even to be turned off," SeaMicro says. "Similarly, a utilization threshold for a pool of computations can be set, and if met, CPUs can be dynamically provisioned and added to or removed from the pool."

Each SM15000 has a 1.28 terabit-per-second aggregate networking fabric with 16-by-10GbE uplinks to the network, along with 10Gbps of duplex bandwidth to each CPU socket.
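
As a back-of-the-envelope check on those figures – assuming a fully populated chassis of 64 CPU sockets, which is our assumption rather than a number from the announcement – the quoted per-socket and uplink bandwidths line up with the aggregate fabric figure:

```python
# Back-of-the-envelope check on the SM15000 numbers quoted above.
# Assumption: a fully populated chassis of 64 CPU sockets; only the
# 1.28Tb/s fabric, 16x10GbE uplink and 10Gb/s-per-socket figures are
# from the article.

sockets = 64                    # assumed chassis population
per_socket_gbps = 10            # duplex bandwidth to each CPU socket
fabric_tbps = sockets * per_socket_gbps * 2 / 1000   # count both directions
uplink_gbps = 16 * 10           # 16 x 10GbE uplinks out of the chassis

print(f"fabric: {fabric_tbps} Tb/s aggregate, uplink: {uplink_gbps} Gb/s")
# -> fabric: 1.28 Tb/s aggregate, uplink: 160 Gb/s
```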

Verizon's decision to opt for SeaMicro highlights a quiet shift among the key suppliers to massive cloud data centers: after an initial period of flirting with 1U or 2U pizza-box commodity servers, hardware makers are adopting an appliance-style model – but with a twist. These systems are sometimes called microservers, and although they are a small market today, they're projected to grow to at least 571,000 shipped units by 2014, according to IHS.

Intel, for example, is designing its Rack Scale Architecture to create a photonic networking fabric that runs through each rack, lowering latency and increasing manageability.

Similarly, hardware supplier HP has designed its own integrated appliance, the Moonshot system, to take advantage of the economics of high density and shed unnecessary components. It's the rebirth of blade servers in a design that is much easier to get on board with – and cheaper.

"One of the things stacks of 1U machines can't give you is stacks of dis-aggregation," Feldman says, noting how in typical servers you have "compute that's hard-chained to storage that's hard-chained to I/O".

Facebook is doing the same sort of dis-aggregation openly through a variety of schemes announced at its Open Compute jamboree this year that will see it cluster together more processors and storage than ever before while dispersing networking I/O gear and control through its stack.

It all points to a coming shift in the way data centers are built that, Feldman believes, will see the essential unit of IT resource move up from the CPU and its attached memory to a pool of compute, storage, and networking bounded within a latency envelope set by a local fabric.

"I think we are in the beginning of a long race," Feldman told us – and he's not averse to trotting along with the competition: the SM15000s bought by Verizon use a mix of AMD Opterons and Intel Sandy Bridge-era Intel Xeons. ®
