Original URL: https://www.theregister.com/2012/02/24/3d_chips/

3D processor-memory mashups take center stage

'I have seen the future, and it is stacked'

By Rik Myslewski

Posted in HPC, 24th February 2012 01:51 GMT

ISSCC A trio of devices that stack layers of compute units and memory in a single chip to boost interconnect bandwidth was presented at this week's International Solid-State Circuits Conference in San Francisco.

Sharing the stage at the ISSCC's High Performance Digital session were three technologies: one prototype developed by IBM that places cache memory layers on top of a "processor proxy" layer, and two working chips – one developed at the University of Michigan, and another by the Georgia Institute of Technology working with KAIST and Amkor Technology, both in South Korea.

Note that these parts aren't merely RAM-stacked-on-top-of-a-processor packages such as, for example, Apple's A5. These are single parts with processor and memory closely coupled, married together in a single slab.

Each of the ISSCC presentations was titled in impressive boffin-speak – so impressive that we'll quote the title of each paper before we dig into a few of its details.

IBM's "3D system prototype of an eDRAM cache stacked over processor-like logic using through-silicon vias": Like the other two chips, the IBM prototype routes data, clock, and power signals through its layers – what IBM calls "strata" – by means of through-silicon vias (TSVs).

TSVs are essentially just what they sound like: signal paths that are etched through a silicon layer and filled with a conductor. In IBM's prototype, the TSVs are copper-filled, and are about 20 micrometers (0.0008 inches) in diameter.

A 3D System Prototype of an eDRAM Cache Stacked Over Processor-Like Logic Using Through-Silicon Vias

IBM's TSVs are connected layer-by-layer with tiny conductive balls. (click to enlarge)

The prototype that IBM presented at ISSCC was a two-strata affair, but the design is intended to be extendable to more cache-memory strata. The design of those cache strata borrows heavily from the Power7's integrated L3 cache, including its embedded DRAM (eDRAM), IP library, logic macros, and design and test flow.

IBM didn't use a true processor for the base of its stack, but instead a proxy for test purposes only, which included circuits to exercise the memory and emulated the noise and power of a true processor at up to 350 watts per square centimeter.

Slide from IBM's ISSCC paper, 'A 3D System Prototype of an eDRAM Cache Stacked Over Processor-Like Logic Using Through-Silicon Vias'

Unlike the two other larger-process 3D chips presented, IBM's is built at a snug 45nm

That high power level would be needed in the target four-strata design. IBM estimates the design's clock skew at less than 13 picoseconds across four strata, which would allow a "worst-case" L3 clock frequency of 2GHz, resulting in a data bandwidth of 450 gigabits per second.

How low can you go?

The University of Michigan's "Centip3De: A 3930DMIPS/W configurable near-threshold 3D stacked system with 64 ARM Cortex-M3 cores": The UoM's oh-so-cutely named Centip3De takes 3D chippery in a different direction – that of near-threshold computing (NTC).

NTC's focus is not to crank up processors with a boatload of juice in order to get their transistors switching at high frequencies, but just the opposite: to use just enough power to carry them over their operating-voltage threshold.

The advantage of NTC is clear: less power consumption – especially important if you're stacking compute and memory layers in the same chip, and don't want to watch the entire assemblage melt before your very eyes.

The disadvantage is equally clear: at low voltages, transistors switch slowly. However, if you have a large number of transistors in a large number of compute cores working on a highly parallelized workload, the voltage-supply math can work in your favor.
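That voltage-supply math can be sketched with a deliberately simplified first-order model – our own illustration, not UoM's data, with the threshold voltage and scaling constants chosen purely for round numbers:

```python
# Rough illustration of the NTC tradeoff (a simplified model, not UoM's figures):
# dynamic energy per operation scales roughly with V^2, while achievable clock
# frequency falls off as the supply voltage V approaches the threshold V_th.

def energy_per_op(v, c=1.0):
    """Dynamic switching energy per operation: E ~ C * V^2."""
    return c * v * v

def max_freq(v, v_th=0.25, k=1.0):
    """First-order approximation: f ~ (V - V_th). V_th here is illustrative."""
    return k * max(v - v_th, 0.0)

# Halving the supply from 1.0V to 0.5V cuts energy per operation by 4x...
print(energy_per_op(1.0) / energy_per_op(0.5))  # -> 4.0
# ...while frequency drops only 3x in this model, so three parallel cores
# could recover the throughput at a quarter of the energy per operation.
print(max_freq(1.0) / max_freq(0.5))            # -> 3.0
```

The exact ratios depend on process and threshold voltage, but the shape of the argument – quadratic energy savings against a roughly linear frequency penalty – is what makes stacking many slow cores attractive.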

"By running at a lower voltage, we can have a higher energy efficiency and we can regain some of that performance loss by having many layers of silicon," UoM PhD student David Fick told his ISSCC audience.

One problem with NTC is that the ideal – most efficient – operating voltage for a compute core is lower than that required for its associated cache memory. The Centip3De solves this problem by running the cache memory at four times the clock of the compute cores, cleverly clustering four cores per cache unit and managing the cache distribution among them.

Slide from ISSCC paper, 'Centip3De: A 3930DMIPS/W Configurable Near-Threshold 3D Stacked System with 64 ARM Cortex-M3 Cores'

The current Centip3De is a two-layer prototype, but the team plans a seven-layer future (click to enlarge)

For example, if each core is running at 10MHz, as Fick showed in one example, the cache could run at 40MHz. The cores each see a single L1 cache, and the clustering allows them to share it at their own core operating frequency with single-cycle latency.

What's more, the Centip3De's cache design allows one core to take over more cache space, should it need it, as long as another core's cache space can be reduced. There could conceivably be core data conflicts within the cluster, but Fick says that the team's architectural simulations had shown that "this was not a dominant effect."

In addition, cores can be shut down entirely – dynamically, of course – and their power passed to another core in the cluster, increasing that core's frequency. You could, for example, have four cores in a cluster running at 10MHz each, or one at 40MHz, depending upon the needs of the workload. Entire clusters can be shut down, as well, and their power shunted to adjacent clusters.
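The cluster arithmetic above is simple enough to sketch – a hypothetical illustration based on the figures from the talk, with names and structure of our own devising rather than anything from UoM's design:

```python
# Hypothetical sketch of Centip3De-style cluster clocking, based on the
# 10MHz-core / 40MHz-cache example from the ISSCC talk.
CACHE_CLOCK_MHZ = 40      # shared L1 cache runs at 4x the core clock
CORES_PER_CLUSTER = 4

def core_clock_mhz(active_cores):
    """Split the cluster's fixed cycle budget among the powered-on cores."""
    if not 1 <= active_cores <= CORES_PER_CLUSTER:
        raise ValueError("a cluster has 1 to 4 active cores")
    return CACHE_CLOCK_MHZ / active_cores

# Four cores share the cache, each running at 10MHz with single-cycle
# cache latency (the cache serves one core per 40MHz cache cycle)...
print(core_clock_mhz(4))  # -> 10.0
# ...or three cores are shut down and the survivor runs at 40MHz.
print(core_clock_mhz(1))  # -> 40.0
```

The 4x cache-to-core clock ratio is what makes the single-cycle shared L1 work: with at most four active cores, every core can be served once per core cycle.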

Slide from ISSCC Paper, 'Centip3De: A 3930DMIPS/W Configurable Near-Threshold 3D Stacked System with 64 ARM Cortex-M3 Cores'

Today's two-layer Centip3De processor and DRAM layers are of different process sizes (click to enlarge)

The current Centip3De chip was built using a 130nm process. The paper presented at ISSCC says that if the cores running at 10MHz in the prototype chip were baked using a 45nm SOI CMOS process, that'd translate to 45MHz per core. Fick told his audience that if the process were scaled to 32nm, those 10MHz cores could operate at 110MHz.

Those higher clock speeds would, of course, be throttled down if the compute cores were operated at near-threshold voltages. *

Real apps, real benchmarks

Georgia Institute of Technology's "3D-MAPS: 3D massively parallel processor with stacked memory": The team behind the 3D-MAPS processor went one step beyond the IBM and UoM's chips by creating a processor that performed real, benchmarkable work.

After rattling off a list of processor/memory-mashup research papers, 3D-MAPS team member Sung Kyu Lim of the Georgia Institute of Technology told his ISSCC audience that he had been unable to find another team that had been able to build a prototype that could be programmed to handle actual workloads.

"I am very happy to say that we accomplished that goal," Lim said.

3D-MAPS's silicon was fabricated by GlobalFoundries at 130nm, and layer-to-layer bonding and TSV technology was provided by Tezzaron Semiconductor, which has its design engineering and sales center in Naperville, Illinois, and process design and manufacturing in Singapore. The chip runs at 1.5 volts and consumes up to 4W, resulting in a power density of 162 watts per square centimeter.

Like the Centip3De, 3D-MAPS has 64 cores. Unlike the Centip3De's ARM Cortex-M3 cores, however, 3D-MAPS has 64 VLIW (very long instruction word) cores of the team's own devising, each running at 277MHz.

Slide from ISSCC paper, '3D-MAPS: 3D Massively Parallel Processor with Stacked Memory'

3D-MAPS's cores and layout may be simple, but version two is already on the way (click to enlarge)

Lim described the design of the cores as "not full-fledged." There's no floating-point unit, for example, just one five-stage arithmetic pipeline and one data-memory pipeline that can sustain one 4-byte data memory operation per clock cycle. Each core is hooked up to 4KB of "scratchpad" SRAM – the memory isn't globally shared; each core has access only to its own scratchpad.

Lim said that in one of the benchmarks that 3D-MAPS ran – a median filter test – memory bandwidth came in at a hair below 64 gigabytes per second. The team claims that at the 277MHz clock rate, the memory bandwidth will theoretically max out at 70.9GB/sec.
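That theoretical maximum follows directly from the core count and per-core memory pipeline described above – a back-of-the-envelope check, assuming every core sustains its one 4-byte memory operation per cycle:

```python
# Back-of-the-envelope check of 3D-MAPS's claimed peak memory bandwidth:
# 64 cores, each sustaining one 4-byte data-memory operation per cycle
# of its 277MHz clock.
cores = 64
bytes_per_cycle = 4       # one 4-byte memory op per core per cycle
clock_hz = 277e6          # 277MHz

peak_bw = cores * bytes_per_cycle * clock_hz  # bytes per second
print(peak_bw / 1e9)      # -> 70.912, matching the team's ~70.9GB/sec figure
```

The median-filter benchmark's "hair below 64GB/sec" therefore works out to roughly 90 per cent utilization of the scratchpad memory pipelines.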

Slide from ISSCC paper, '3D-MAPS: 3D Massively Parallel Processor with Stacked Memory'

As might be expected, different tasks performed on 3D-MAPS have different performance profiles

The cores are connected "neighbor to neighbor," Lim said. "We wanted to have a network on the chip, but we didn't have time to finish it."

Part of that time pressure, no doubt, was due to the fact that according to Lim there are no commercial 3D-chip CAD tools on the market – he and his team had to write their own scripts and plug-ins for existing 2D tools such as Encounter from Cadence, PrimeTime from Synopsys, and others.

That said, version two of 3D-MAPS – dubbed, appropriately, V2 – has already been taped out, and should be ready by next year's ISSCC or sooner. V2 will have 128 cores, 256MB of DRAM and 512MB of SRAM, and other improvements, such as using TSVs not just for IO but also for communicating with DRAM.

When 3D-MAPS V2 appears, there'll likely be more to hear about IBM's 3D efforts, the seven-layer Centip3De should have appeared, and undoubtedly other efforts will have come to fruition, as well.

ISSCC 2013 may very well be 3D processors' debutante ball. ®

* Update

The University of Michigan's David Fick emailed us to correct our comment on throttling smaller-process chips down to near-threshold voltages. "Those numbers (45MHz for 45nm, and 110MHz for 32nm) were for the voltage scaled cores," he writes. "We have an ARM Cortex-M3 operating at 750MHz in another 45nm chip, which is how the 45MHz number was estimated. 32nm was based on simulation."

He also added a bit of info on future steps planned for the Centip3De project. "I'd like to also mention that Centip3De will be able to run the same sorts of benchmarks that 3D-MAPS can run," he writes, "but we need to have the DRAM to do it (hopefully coming later this year). We use a commercial core, which allows us to use a C++ compiler, etc, but we are just a bit short on memory space."