Original URL: https://www.theregister.com/2014/05/06/arm_corelink_ccn_5xx_on_chip_interconnect_microarchitecture/

ARM targets enterprise with 32-core, 1.6TB/sec bandwidth beastie

Think the UK boys are just mobe-chip makers? Been out of town lately?

By Rik Myslewski

Posted in Personal Tech, 6th May 2014 12:01 GMT

Updated ARM has released more details about the innards of its cache-coherent on-chip networking scheme for use cases ranging from storage to servers to networking – specifically, its CCN-5xx microarchitecture family and its newest member, the muscular CoreLink CCN-508.

It's been almost exactly 29 years since Acorn Computers' ARM1 chip ran its first code on April 26, 1985. Since then, the company that was soon to become ARM branched out beyond CPUs to IP design for GPUs, microcontrollers, coherent and non-coherent interconnects, memory controllers, system memory management units, interrupt controllers, direct memory access controllers, and more, popping up in products from toys to smart cards to mobile devices to printers to servers.

"You name it, we generally build it," lead architect and ARM Fellow Mike Filippo told attendees at his company's 2014 Tech Day last week in Austin, Texas.

ARM's most recent expansion has been in the enterprise space, with work on high-end servers and networking IP. "In 2010," Filippo said, "we really began to take a hard look at the networking market, and we're getting a lot of interest from networking OEMs."

ARM's broad IP portfolio comes in handy when it's designing complex enterprise-level networking, storage, and server parts; the Cambridge, UK company can wrangle a passel of its own IP blocks into a larger assemblage, and license that überblock to chipmakers hungry to service the enterprise market.

And that's what ARM is doing with its CCN-5xx family of high-performance cache-coherent on-chip networking IP. The first product in the family, the CoreLink CCN-504, was announced last year; the newest member is the CoreLink CCN-508. ARM also plans not only to create a scaled-up, higher-performance family member, but also to move in the opposite direction with a more modest CCN-5xx for low-end servers and networking.

Each member of the family has a similar scalable topology and microarchitecture, and is based on version five of the Advanced Microcontroller Bus Architecture Coherent Hub Interface (AMBA 5 CHI). AMBA is an open-standard on-chip interconnect specification first incarnated by ARM back in the mid-1990s, and is used to tie together different functional blocks such as those that comprise a system-on-chip (SoC) part.

AMBA 5 CHI: architectural scalability slide

At version five, li'l AMBA's now all grown up

The AMBA 5 CHI architecture is topology-agnostic, able to support anything from point-to-point to mesh; for the CCN-5xx product family, ARM chose a ring bus to handle its high-speed element-to-element communication chores.

"Generally speaking," said Filippo, "the architecture was built and designed around scalability, scalability in all aspects: scalability of performance, scalability of design complexity, scalability of power, etcetera." And, apparently, scalability in the use of the word "scalability."

By way of example, the CoreLink CCN-508 scales up from its CCN-504 predecessor, allowing its licensees – ARM prefers the term "partners" – to hang a veritable Noah's Ark of functional units off it: CPUs, DDR3 or DDR4 memory controllers, network interconnects, memory management units, and up to 24 AMBA interconnects for such peripherals as PCIe, SATA, and 10-40 gigabit Ethernet.

Actually, Noah's Ark isn't all that good an analogy, seeing as how the CCN-508 can bring onboard more than that old guy's limit of two of a kind. It will, for example, support up to eight 64-bit ARMv8 CPU clusters of four cores apiece – each cluster could, for example, contain two beefy ARM Cortex-A57 and two lightweight ARM Cortex-A53 cores in a big.LITTLE configuration*. Easy math: 32 cores, a doubling of the CCN-504's 16-core max.
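
For those keeping score at home, the cluster arithmetic works out like this. A back-of-the-envelope sketch in Python: the cluster limits are ARM's figures, everything else here is merely illustrative.

```python
# Back-of-the-envelope core counts for the CCN-504 and CCN-508,
# using the maximum cluster counts ARM quotes (illustrative only).
CORES_PER_CLUSTER = 4          # max cores per ARMv8 compute cluster

max_clusters = {"CCN-504": 4, "CCN-508": 8}

for part, clusters in max_clusters.items():
    print(f"{part}: {clusters} clusters x {CORES_PER_CLUSTER} cores = "
          f"{clusters * CORES_PER_CLUSTER} cores max")
# CCN-504: 4 clusters x 4 cores = 16 cores max
# CCN-508: 8 clusters x 4 cores = 32 cores max
```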

Block diagram of ARM's CoreLink CCN-508

Here's one ARM implementation that you're not likely to find in your handset

In any case, what's of most interest in ARM's new multi-IP mashup is the way in which all its blocks communicate at high speed thanks to the CoreLink CCN-5xx family's modular, highly distributed microarchitecture. First off, the smarts that manage its transport layer are fully distributed, with no point of centralization; multiple modular "crosspoints" know everything they need to know to take data coming in from whatever source and route it to where it needs to go.
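
ARM hasn't published its routing logic, of course, but the gist of a fully distributed crosspoint is easy enough to sketch: each node on the ring only needs to work out, locally, which direction gets a packet to its destination in fewer hops. A toy illustration in Python (the node count, names, and logic are ours, not ARM's):

```python
# Toy model of distributed routing on a ring interconnect: each
# crosspoint decides locally which direction (clockwise or
# anticlockwise) is the shorter path to the destination node.
# Purely illustrative; not ARM's actual routing logic.
RING_SIZE = 8  # number of crosspoints on the ring (hypothetical)

def next_hop(current: int, destination: int, ring_size: int = RING_SIZE) -> int:
    """Return the neighbouring crosspoint a packet should be forwarded to."""
    if current == destination:
        return current                         # already there
    cw_hops = (destination - current) % ring_size
    ccw_hops = (current - destination) % ring_size
    step = 1 if cw_hops <= ccw_hops else -1    # pick the shorter direction
    return (current + step) % ring_size

# A packet entering at crosspoint 1 bound for crosspoint 6 hops anticlockwise:
hop, dest = 1, 6
while hop != dest:
    hop = next_hop(hop, dest)
    print("forwarded to crosspoint", hop)      # 0, then 7, then 6
```

No crosspoint consults a central arbiter; each one makes the same local decision from the same simple rule, which is what lets the scheme scale as the ring grows.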

The whole system can run at CPU clock speed, so no core should be kept waiting long for the bits and bytes awaiting duty in a shared, distributed, global L3 cache. But this is not your father's CPU-centric L3 cache – this L3 has a few tricks up its sleeve.

* Update

After this article was published, ARM got in touch with us to offer a clarification: the CCN-5xx architecture doesn't support a big.LITTLE configuration in its compute clusters; each max-of-four-cores cluster contains either Cortex-A57 or Cortex-A53 cores, just not both in the same cluster.

Level-three cache sneakitudinousness

Like most elements in the CoreLink CCN-5xx microarchitecture, the L3 caches are distributed, decentralized, and scalable – in the CoreLink CCN-508, for example, there are eight L3 partitions scalable from 128KB to 4MB per partition, depending upon the partner's needs. But these cache partitions, taken together, have more on their plate than normal L3s.
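
Taken together, those per-partition figures imply a fairly wide spread of total L3 capacity. The quick arithmetic (the partition count and sizes are ARM's, the sums are ours):

```python
# Total shared L3 capacity in the CCN-508, given eight partitions of
# 128KB to 4MB each (figures from ARM; the arithmetic is ours).
PARTITIONS = 8
min_partition_kb, max_partition_kb = 128, 4 * 1024

print(f"minimum L3: {PARTITIONS * min_partition_kb} KB (1 MB)")
print(f"maximum L3: {PARTITIONS * max_partition_kb // 1024} MB")   # 32 MB
```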

"We call them L3s," Filippo said, "but that's actually a misnomer. The reality is that these are a very powerful piece of the microarchitecture." Yes, the L3 does function as an L3 cache for the compute cores – which have their own L1 and L2 caches, of course – but they also function as a high-bandwidth, flexible I/O cache for the entire system, enabling data movement not only between the CPUs and the I/O devices or accelerators hanging on the bus, but also among those other devices themselves.

ARM CCN-508 scalable system architecture bus diagram

Multiple distributed modules allow for serious scalability

Speaking of those CPU caches, the CCN-5xx system architecture has another nifty feature: what Filippo described as an adaptive exclusive/inclusive policy. As he explained, "Basically what that means is that this L3, generally speaking, is not inclusive of the caches that exist within the compute cluster." In other words, there's no redundancy between the compute-core L2 caches and the system's L3 cache partitions, which leaves more room for cached data and improves efficiency and performance.
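
ARM hasn't spelled out exactly how the adaptive policy makes its choice, but the difference between the two modes is simple enough to show: an inclusive L3 duplicates whatever the L2s already hold, while an exclusive one keeps only lines the L2s don't. A toy sketch of our own, heavily simplified and not ARM's implementation:

```python
# Toy illustration of why an exclusive L3 stretches capacity further.
# Each L3 here holds three lines; simplified, not ARM's implementation.
l2_lines = {"A", "B"}                  # lines held by a compute cluster's L2

# Inclusive policy: the L3 must duplicate the L2's lines,
# so only one slot is left over for anything new.
inclusive_l3 = {"A", "B", "C"}

# Exclusive policy: the L3 holds only lines the L2s have given up,
# so the whole budget goes to additional data.
exclusive_l3 = {"C", "D", "E"}

print("inclusive, unique lines cached:", len(l2_lines | inclusive_l3))  # 3
print("exclusive, unique lines cached:", len(l2_lines | exclusive_l3))  # 5
```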

The L3 cache complex also does more than act as a way station for data; it functions as the coherency manager for the whole system. "It is the point of coherency and the point of serialization," Filippo explained. "Anybody familiar with system architecture knows that that's the fundamental piece to a coherent system," meaning that it's the one specific place in which all requests to a specific cache line – a specific address – are managed.
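
A point of serialization simply means that competing requests for the same cache line are forced into a single, ordered stream before anyone gets to act on them. Roughly like so (our own schematic, not ARM's coherency logic):

```python
# Schematic of a point of serialization: all requests that touch the
# same cache-line address are funnelled through one ordered queue,
# so only one agent at a time gets to act on that line.
# Our own simplification; not ARM's coherency logic.
from collections import deque

pending: dict[int, deque] = {}       # cache-line address -> ordered requests

def submit(address: int, requester: str, op: str) -> None:
    pending.setdefault(address, deque()).append((requester, op))

def service(address: int) -> None:
    queue = pending.get(address, deque())
    while queue:
        requester, op = queue.popleft()        # strictly in arrival order
        print(f"line {address:#x}: {requester} granted {op}")

submit(0x80, "cluster0", "read")
submit(0x80, "io-bridge2", "write")   # must wait its turn behind cluster0
service(0x80)
```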

Finally, the L3 also contains its own snoop filter, but your humble Reg reporter hastens to admit that his graduate degree in dramatic art did not prepare him well for a cogent analysis of the CoreLink CCN-5xx microarchitecture's address-line monitoring of memory locations to ensure cache coherency. Sorry.
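
For the curious, though, the gist is straightforward: a snoop filter keeps track of which compute clusters might hold a copy of each cache line, so coherency traffic only gets sent where it's actually needed rather than broadcast everywhere. A rough, heavily simplified sketch of our own (not the CCN-5xx's actual filter):

```python
# Rough sketch of a snoop filter: per cache line, remember which
# compute clusters may hold a copy, so a write only needs to snoop
# (invalidate) those clusters rather than broadcasting to all of them.
# Heavily simplified; not the CCN-5xx's actual filter.
sharers: dict[int, set[str]] = {}    # cache-line address -> clusters holding it

def record_read(address: int, cluster: str) -> None:
    sharers.setdefault(address, set()).add(cluster)

def snoop_on_write(address: int, writer: str) -> set[str]:
    """Return only the clusters that actually need an invalidation."""
    targets = sharers.get(address, set()) - {writer}
    sharers[address] = {writer}       # writer now holds the only valid copy
    return targets

record_read(0x100, "cluster0")
record_read(0x100, "cluster3")
print(snoop_on_write(0x100, "cluster5"))   # {'cluster0', 'cluster3'}
```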

As mentioned above, the L3 system functions as an I/O cache, so we should also touch on the coherent I/O system itself. Bottom line: it's fast – each modular I/O bridge can provide usable bandwidth of over 40GB/sec, Filippo said.

Speaking of speeds and feeds, ARM claims that when running at 2GHz, the CoreLink CCN-508 can deliver up to 1.6TB/sec of usable system bandwidth – and that's "T" as in "tera". When equipped with DDR4 memory, its four-channel memory system can nudge up to around 75GB/sec.
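
Those headline numbers translate into per-cycle figures readily enough (ARM's numbers, our arithmetic):

```python
# What 1.6TB/sec at 2GHz works out to per clock cycle, plus the DDR4
# memory figure for comparison (ARM's headline numbers, our arithmetic).
system_bw = 1.6e12          # bytes/sec of usable system bandwidth
clock_hz = 2e9              # 2GHz interconnect clock
memory_bw = 75e9            # ~75GB/sec from the four-channel DDR4 system

print(f"{system_bw / clock_hz:.0f} bytes moved per clock across the system")  # 800
print(f"memory system supplies ~{memory_bw / clock_hz:.1f} bytes per clock")  # 37.5
print(f"system bandwidth is ~{system_bw / memory_bw:.0f}x the DRAM bandwidth")
```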

That 1.6TB/sec sounds like a boatload of bandwidth – and it is – but remember that the CCN-5xx system architecture is based on one hell of a lot of activity going on at any one time, coursing round the ring bus from modular element to modular element – check out all the lines and arrows in the presentation slide above to get an idea of how busy the bus can get.

This modularity is key to the microarchitecture's scalability. "As we build new designs," Filippo said, "staging larger and larger systems ... we basically just change some words in our description-config files, and the system builds itself."

When you get right down to it, that's the core of ARM's business philosophy: provide a broad range of IP to the world, and let chip designers mix and match them in a squillion different ways for a gazillion different use cases – such as the enterprise storage, servers, and networking markets towards which the CCN-5xx family is targeted.

Basically just change some words in the licensing-contract files, and the bottom line builds itself. ®