Original URL: https://www.theregister.com/2014/08/22/red_hat_arm_server_standards/

Red Hat: ARM servers will come when people crank out chips like AMD's 64-bit Seattle

Standards to lift data center boxes out of device doldrums

By Neil McAllister in San Francisco

Posted in Systems, 22nd August 2014 20:45 GMT

LinuxCon 2014 It's practically a given that the ARM processor architecture – so beloved by makers of small devices everywhere – will graduate to servers soon. But before ARM servers can ship in any significant volume, a standardized hardware platform that specifically targets the data center is a must.

So sayeth Jon Masters, chief ARM architect for enterprise Linux giant Red Hat, who addressed the topic during a session at the LinuxCon 2014 conference in Chicago on Thursday.

Red Hat and others – most notably the Linaro consortium, of which Red Hat is also a member – have been working on getting Linux ready for ARM servers, and vice versa, for several years. But according to Masters, one challenge has been convincing hardware vendors that what has worked for ARM on mobile devices won't work for the data center.

"A lot of early servers – not just in the ARM case but with other architectures – were built using what I call an embedded mindset," Masters said. "So they continue what I affectionately call the 'embedded zoo,' which is really applying the design philosophy that you take with a mobile phone and applying that to a server."


Red Hat ARM architect Jon Masters says you can't have 64-bit ARM servers without hardware standards

It's not that Masters sees anything wrong with how phone vendors have been building their devices. He admits that the embedded design philosophy has served Apple and the various Android mobe-makers extremely well.

But these efforts have been successful in large part because smartphone vendors build their kit so that the software is "welded" to the hardware as a fully integrated system. Whether they use an off-the-shelf ARM system-on-chip (SoC) component or they create their own – as Apple and Samsung have both done – each device they produce typically contains numerous software adaptations for its own, specific hardware.

Reinventing SoCs for the data center

The concept of highly integrated, power-conserving SoCs can also be a huge boon to the data center, Masters said. But having each chipmaker design its SoCs to totally different specifications, the way they do for the embedded market, is just no good for servers.

"General purpose computing platforms differ from embedded systems," he explained. "Software does not ship with the hardware. They're not welded together. People buy hardware from their vendor of choice, and then they go get their operating system from their vendor of choice, and they need that to work."


We're not just talking about choosing between Linux and some other OS here, either. When today's IT admins buy a server, they also expect to be able to wipe whatever Linux distribution it came with and install another one, if they want. Yet with ARM SoCs designed for the embedded market, there's no such guarantee.

"There's no standard that tells you, for example, 'here's exactly how the system is going to boot, here's how you're going to find the kernel'," Masters said. "Not 'on this board go here and on that board go there,' but 'here's one way to do it.' There isn't that in some of these embedded technologies."

Masters doesn't believe that the software solutions developed for the embedded market – like the Device Tree and the U-Boot universal bootloader – are the right way to go for servers, either. They simply don't provide enough abstraction above the hardware to allow admins to treat ARM servers interchangeably, the way they do their existing x86 boxes.

"What we need instead are standardized hardware devices. In order to boot the system that we're using, we have to have a certain level of standards, going in … If I've got 20 different possibilities for wiring up a serial port on a server, there's a problem," Masters said.
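Masters' serial-port example is easy to picture in Device Tree terms. The fragments below are hypothetical (the vendor names, addresses, and property values are invented for illustration), but they show how two boards can describe what is logically the same UART in incompatible ways, each demanding its own driver knowledge in the operating system:

```dts
/* Hypothetical Device Tree fragments: two boards describing a UART
   with different compatible strings, register addresses, and
   properties -- the OS must know about each variant in advance. */

/* Board A */
uart0: serial@70006000 {
    compatible = "vendor-a,uart-v2";
    reg = <0x70006000 0x40>;
    interrupts = <36>;
    clock-frequency = <408000000>;
};

/* Board B */
serial@1c28000 {
    compatible = "vendor-b,async-serial", "ns16550a";
    reg = <0x01c28000 0x400>;
    interrupts = <0 1 4>;
    reg-shift = <2>;
};
```

Multiply that variability across every device on an SoC and the scale of the "embedded zoo" problem for a general-purpose OS becomes clear.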

ARM steps up

Fortunately, no one understands these problems better than ARM itself – and no company has a more vested interest in seeing that ARM-compatible processors find their way into enterprise data centers.

The Cambridge, UK–based semiconductor design firm's first serious advance into the server market was the introduction of the 64-bit ARMv8-A architecture in 2011. While some hardware makers saw the data center opportunity early and tried to develop 32-bit ARM servers – notably the now-defunct Calxeda – those designs never gained much traction, and Masters made it clear at LinuxCon that Red Hat, at least, "doesn't have a story in the 32-bit ARM space and doesn't see a need to make one at this point."

More recently, to help address the needs of system builders in adopting ARMv8-A, ARM has developed two new platform standards, with input from Linaro, major Linux vendors, and hardware partners.

The first, the Server Base System Architecture (SBSA), describes the minimal hardware devices that an ARM system should have available in order to boot. The initial SBSA spec was released at the Open Compute Project summit in January and quickly won support from across the industry.

The second and more recent standard – first published on Tuesday of this week, just ahead of LinuxCon – is the Server Base Boot Requirements (SBBR), which describes how an ARM server system should boot.

SBBR achieves its goals by requiring hardware to comply with the latest versions of two earlier standards: the Unified Extensible Firmware Interface (UEFI) 2.4 and its related spec, the Advanced Configuration and Power Interface (ACPI) 5.1.

"There are certain expectations that software running on top of a UEFI platform can have," Masters explained. "For example, a standardized way to install an operating system kernel, and a standardized way to get certain runtime services, like the time of day. I don't have to have a special driver for the realtime clock on my platform, because I have one UEFI RTC driver and that just works."
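One concrete example of what UEFI standardizes is where firmware looks for a boot loader on removable media: the spec defines a single well-known path per CPU architecture, rather than a per-board convention. A minimal Python sketch of that mapping (the file names follow the UEFI specification's defined defaults; the helper function itself is our own illustration):

```python
# Sketch: UEFI's removable-media convention gives firmware one
# well-known boot-loader path per CPU architecture -- the kind of
# "one way to do it" standardization Masters describes.

UEFI_DEFAULT_BOOT_FILE = {
    "ia32":    "BOOTIA32.EFI",
    "x64":     "BOOTX64.EFI",
    "arm":     "BOOTARM.EFI",
    "aarch64": "BOOTAA64.EFI",   # the 64-bit ARM servers discussed here
}

def default_boot_path(arch: str) -> str:
    """Return the removable-media boot path UEFI firmware probes."""
    return "\\EFI\\BOOT\\" + UEFI_DEFAULT_BOOT_FILE[arch.lower()]

print(default_boot_path("aarch64"))  # -> \EFI\BOOT\BOOTAA64.EFI
```

An installer or admin can rely on that path on any compliant ARM server, with no board-specific knowledge of where the firmware expects to find a kernel or boot loader.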


Similarly, mandating ACPI support limits the kinds of SoCs that chipmakers can design to those that are suitable for general-purpose computing.

"Again, ACPI disallows extremely complex embedded platforms. It's not something I'd recommend on every embedded device. It explicitly tells you that you can't adopt certain design philosophies, going in. It is written with servers in mind," Masters said.

What ACPI offers in exchange, the Red Hat man said, is a strong form of platform abstraction. Operating system kernels don't need to be told how to initialize each feature on a given hardware platform, or which memory addresses to use to access them. Instead, the kernel can rely on the appropriate ACPI function to turn on the serial port, for example.
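Part of that abstraction comes from ACPI's self-describing tables: every table begins with the same 36-byte header, so an OS can enumerate and validate platform features generically instead of carrying board-specific knowledge. A toy Python sketch (the table contents here are fabricated for illustration; only the header layout and checksum rule follow the ACPI spec):

```python
import struct

# Sketch: every ACPI table starts with a common 36-byte header
# (signature, length, revision, checksum, OEM fields), letting an OS
# discover platform features generically. We build a toy table and
# then parse and verify it.

HEADER_FMT = "<4sIBB6s8sI4sI"            # ACPI table header layout
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 36 bytes

def make_table(signature: bytes, payload: bytes) -> bytes:
    """Build a fake ACPI table with a valid checksum (demo values)."""
    length = HEADER_LEN + len(payload)
    header = struct.pack(HEADER_FMT, signature, length, 5, 0,
                         b"DEMO  ", b"DEMOTBL ", 1, b"DEMO", 1)
    raw = header + payload
    checksum = (-sum(raw)) & 0xFF         # total sum mod 256 must be 0
    return raw[:9] + bytes([checksum]) + raw[10:]

def parse_header(raw: bytes):
    """Parse the common header and verify the table checksum."""
    sig, length, rev, _csum, *_ = struct.unpack_from(HEADER_FMT, raw)
    assert sum(raw[:length]) & 0xFF == 0, "bad checksum"
    return sig.decode(), length, rev

# SPCR is the real signature of ACPI's serial-port console table;
# the payload here is just zero filler for the demo.
table = make_table(b"SPCR", b"\x00" * 16)
print(parse_header(table))  # -> ('SPCR', 52, 5)
```

The point of the sketch is the uniformity: a kernel parses the SPCR table for the serial console the same way it parses any other ACPI table, rather than needing a per-board description of the hardware.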

So, when?

Even with these standards available, it remains for server makers to implement them in a form that OS vendors can use. To help its hardware partners deliver workable designs, in July Red Hat launched its ARM Partner Early Access Program, which provides vendors with sneak peeks at early versions of what the company thinks an enterprise Linux solution for ARM will look like, even as it works to refine its own code.

"We're working with a lot of these vendors to review drivers and discuss things ahead of time to make sure that they're doing it right," Masters said.

So who among today's hardware vendors does Red Hat's chief ARM architect think is doing the job right? Not surprisingly, perhaps, at LinuxCon he gave special mention to AMD's "Seattle" SoC, which the chipmaker officially launched at the Hot Chips conference earlier this month and which Masters says is "done right in every single way."

"It's a standardized, server-grade SoC. It follows every single server design philosophy that AMD knows from working with x86, and it also follows all of the kinds of advice and guidelines that the industry has been working on over the last three years. It's a very, very nice design," Masters said.

But it's only now that the industry-wide ARM server efforts have begun to mature that AMD has been able to produce a product of such quality. Calxeda, for example – the Austin, Texas–based firm that developed some of the earliest ARM data center products but went out of business in December – didn't have the "second-mover advantage" that AMD enjoys now.

"The Calxeda guys were really wonderful people doing a really good job. I think they were bitten by being early to a new market," Masters said. "You can come early to a party, you can come on time, and you can come late. If you come early and nobody's there, then there's a problem. I think it was really just not quite the right time for them."

By most current accounts, the right time will begin later this year, with ARM servers becoming mainstream by late 2015. ®