Original URL: https://www.theregister.com/2013/04/17/state_of_linux_2013/

Linux in 2013: 'Freakishly awesome' – and who needs a fork?

Features, performance, security, stability: pick, er, four

By Neil McAllister in San Francisco

Posted in OSes, 17th April 2013 14:33 GMT

LCS2013 If there was a theme for Day One of the Linux Foundation's seventh annual Linux Collaboration Summit, taking place this week in San Francisco, it was that the Linux community has moved way, way past wondering whether the open source OS will be successful and competitive.

"Today I wanted to talk about the state of Linux," Jim Zemlin, executive director of the Linux Foundation, began his opening keynote on Monday. "I'm just going to save everybody 30 minutes. The state of Linux is freakishly awesome."

Zemlin said that each day some 10,519 lines of code are added to the Linux kernel, while another 6,782 lines are subtracted from it. All told, the kernel averages around 7.38 changes per hour – a phenomenal rate for any code base.
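
A quick back-of-the-envelope calculation using only those averages (the figures are rounded, so the results are rough) suggests the kernel grows by a net of roughly 3,700 lines and absorbs around 180 accepted changes every day:

    # Rough arithmetic on the churn figures quoted above (averages, so only approximate)
    lines_added_per_day = 10_519
    lines_removed_per_day = 6_782
    changes_per_hour = 7.38

    net_growth_per_day = lines_added_per_day - lines_removed_per_day  # ~3,737 lines/day
    changes_per_day = changes_per_hour * 24                           # ~177 changes/day

    print(f"Net kernel growth: ~{net_growth_per_day:,} lines/day")
    print(f"Accepted changes:  ~{changes_per_day:.0f} per day")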

Zemlin went on to liken Linux to a multi-million-dollar R&D project on which more than 400 companies collaborate – some of them, at the same time, fierce competitors.

"This incredible platform is now more than just an operating system. Linux is really now becoming a fundamental part of society – one of the greatest shared technology resources known to man," Zemlin said.

He added, "I mean, it runs all of our stock markets, most of our air-traffic control systems, internet, phones, you name it ... most of the world's telecommunications systems ... this is really now beyond a movement and an operating system, this is now this real, shared, societal, important piece of work."

Bigger and better, and delivered faster

In a separate keynote session later on Monday, kernel contributor and LWN.net editor Jonathan Corbet noted that, successful as Linux has been to date, the pace of kernel development is still accelerating.

Kernel 3.3 shipped on 18 March 2012, and in the 13 months since, 3,172 developers have contributed some 68,000 change sets to the mainline kernel. The kernel is now 1.53 million lines bigger than it was a year ago.

Kernel 3.8 was the project's most active development cycle ever, with around 12,400 change sets merged for that one release alone. And the releases keep coming faster and faster. Just a few years ago, a new kernel shipped about every 80 days, while today a release cycle longer than 70 days is unusual. Kernel 3.9, which could ship as soon as this weekend, will have been in development for only around 63 days.
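
Those figures hang together reasonably well. Treating the 13 months since kernel 3.3 as six release cycles (3.4 through 3.9) – a simplification, since all the numbers above are rounded – gives a cycle length and a per-release change-set count in line with what Corbet quoted, and a net growth close to the 1.53 million lines mentioned above:

    # Consistency check on the development-pace figures (all inputs are the
    # article's rounded numbers, so the outputs are only ballpark values)
    days_elapsed = 13 * 30                 # ~13 months between kernels 3.3 and 3.9
    releases = 6                           # 3.4, 3.5, 3.6, 3.7, 3.8, 3.9
    changesets_total = 68_000
    net_lines_per_day = 10_519 - 6_782     # from the daily churn figures above

    print(f"~{days_elapsed / releases:.0f} days per release cycle")        # ~65
    print(f"~{changesets_total / releases:,.0f} change sets per release")  # ~11,333
    print(f"~{net_lines_per_day * days_elapsed / 1e6:.2f}M net new lines") # ~1.46M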

Photo of Jim Zemlin at Linux Collaboration Summit 2013

Jim Zemlin: "Linux is going swimmingly."

"Even though we're getting busier and more active, we've gotten the process so smoothly functioning at this point that we're able to get the releases out more frequently while we're at it," Corbet observed.

Contributions from individual volunteers continue to decline, however, with the volume of changes submitted by individuals now at 11.8 per cent, down from nearly 20 per cent some years ago.

But Corbet chose to spin even this figure in a positive light. The decline in unaffiliated submitters, he said, was probably due to the nature of the job market today – namely, that any developers who have the drive and determination to get code accepted into the mainline kernel will probably find jobs in short order.

Furthermore, he said, while it's true that corporate contributions make up the majority of kernel changes today and have done so for some time, no single company has contributed more than 10 per cent of the code for any given kernel.

Although overall Linux kernel development is going well – "swimmingly," as Zemlin put it – it's not without its challenges. New divisions and new debates have emerged, owing to the changing nature of the Linux user base and the wide variety of audiences the kernel now serves. According to Corbet, some of these battles reach to the very deepest levels of the kernel code.

"We used to have a lot of fights about CPU scheduling some years ago, when we were trying to figure out how you pick which process to run next," Corbet explained. "We pretty well solved that problem. You can always do better, but we don't argue about that anymore ... Instead of which process you run, it's more a question of where do you run it."

Enterprise or mobile? There's no easy answer

How you answer the question of which processes to run where depends largely on what problem you're trying to solve.

For example, for some kinds of workloads, the NUMA (non-uniform memory access) problem is all-important. On a NUMA machine, as long as a CPU is working on data held in its own node's local memory, processing is fast; as soon as it has to reach memory attached to another node, across a slower interconnect, performance can degrade rapidly. The same pattern shows up at larger scale in distributed application clusters, where data has to be pulled across something as slow as an Ethernet link. Some Linux kernel developers would like to improve the CPU scheduler to handle such situations better.
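
To make that trade-off concrete, here is a minimal, purely illustrative sketch – not kernel code, with invented node names, costs, and thresholds – of the sort of heuristic being discussed: run a task on the NUMA node that holds its memory, and only move it elsewhere when that node is badly overloaded:

    # Toy illustration (not kernel code) of a NUMA-aware placement heuristic:
    # prefer the node holding a task's memory, and accept a remote node only if
    # the local one is heavily loaded. All numbers are invented for illustration.
    LOCAL_ACCESS_COST = 1.0    # relative cost of touching memory on the same node
    REMOTE_ACCESS_COST = 4.0   # relative cost of pulling it across the interconnect

    def pick_node(task_memory_node, node_load, overload_threshold=0.9):
        """Return the node this toy scheduler would place the task on."""
        # Run next to the task's memory unless that node is nearly saturated
        if node_load[task_memory_node] < overload_threshold:
            return task_memory_node
        # Otherwise fall back to the least-loaded node and accept remote access
        return min(node_load, key=node_load.get)

    node_load = {"node0": 0.95, "node1": 0.30}
    chosen = pick_node("node0", node_load)
    cost = LOCAL_ACCESS_COST if chosen == "node0" else REMOTE_ACCESS_COST
    print(f"Place task on {chosen}; relative memory-access cost ~{cost}x")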

Still other developers see power consumption as the primary concern. Here the scheduler might be able to help by switching off CPU cores that aren't strictly needed, for example. Instead of running four cores at half utilisation, the scheduler could power down two cores and run the remaining two flat-out, saving considerable energy – given a few code changes, that is.
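
A crude toy model – invented wattage numbers, nothing from the kernel itself – shows why packing work onto fewer cores can pay off: every powered-on core draws a fixed base amount of power even when it is only half busy, so two cores running flat-out can use less energy than four cores at half utilisation:

    # Toy energy model (invented numbers, not kernel code) illustrating why packing
    # work onto fewer cores can save power: each powered-on core pays a fixed base
    # cost even when it is only partly utilised.
    BASE_WATTS = 2.0       # cost of simply keeping a core powered and clocked
    LOAD_WATTS = 3.0       # additional cost at full utilisation, scaled by load

    def power(core_loads):
        """Total power for a set of powered-on cores at the given utilisations."""
        return sum(BASE_WATTS + LOAD_WATTS * load for load in core_loads)

    spread = power([0.5, 0.5, 0.5, 0.5])   # four cores at half utilisation
    packed = power([1.0, 1.0])             # two cores flat-out, two powered down

    print(f"Spread across 4 cores: {spread:.1f} W")   # 14.0 W in this toy model
    print(f"Packed onto 2 cores:   {packed:.1f} W")   # 10.0 W in this toy model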

But actually getting these and other proposed changes to the CPU scheduler code accepted by the kernel maintainers is extremely difficult, mainly because these portions of the kernel are so central to so many customers' needs.

"This code is the embodiment of heuristics that have taken years and years to develop – a bunch of ways of managing the system that we know work well across a very wide range of workloads," Corbet said. "As soon as you start to perturb those heuristics, you run a very real risk of creating performance regressions on workloads that you can't even possibly know about."

It can be especially problematic when code that causes regressions actually makes it into the kernel, Corbet explained, because it might be years before enterprise Linux distributors start shipping that version of the code to their customers. By that time, kernel development will have already moved on, and any performance regressions will be very difficult to fix.

This situation creates an interesting conflict within the Linux community, with opinions drawn along almost factional lines. Many of the proposed changes to the scheduler, for example, have been submitted by developers from the likes of Linaro, Samsung, or Texas Instruments – companies working on the mobile and embedded side of Linux.

By comparison, the companies in control of the core components of the mainline kernel have names like Google, IBM, Oracle, and Red Hat – companies that are either enterprise vendors or themselves manage large enterprise data centres. The two groups tend to view the world from very different perspectives.

Blazing a trail forward, together

So is it time to fork Linux? Do we really need two different kernels – one for mobile devices and one for the data centre? Corbet says no, particularly in light of how platform fragmentation hurt and ultimately undermined the commercial Unix market.

For one thing, he observes, mobile developers have in the past griped about such enterprise-oriented kernel "bloat" as support for symmetric multiprocessing, support for large amounts of memory, file systems that can handle large volumes, and a sophisticated networking stack. But while all of these features would have been completely unnecessary on a mobile phone just a few years ago, for today's smartphones they're essential.

Photo of Jonathan Corbet at Linux Collaboration Summit 2013

Jonathan Corbet: "We're now completely in the dark."

Similarly, while much of the work on power conservation in the Linux kernel has been driven by the mobile and embedded world, today's enterprises are waking up to the size of their data centre power bills – and to the fact that reduced power consumption can benefit them, too.

"In fact, I would make the point that the insistence that we can make one kernel that works for everybody – for everything from your telephone through to the biggest supercomputers on the planet, and everything in between – is one of the biggest strengths of the entire Linux operating system," Corbet said.

But what's really interesting, he said, is how the nature of the squabbles that emerge within the Linux community has changed. Long gone are the days when the entire community was focused purely on catching up with what the commercial Unix systems had already done.

"There was a time when people would say that open source or free software wasn't capable of innovating. All we could do was follow taillights," Corbet said.

These days, he said, the Linux community has already surpassed the commercial Unix vendors, in terms of both features and user base. We're now at a point when a lot of the innovative work happens on Linux first and trickles out to other operating systems later – and that has interesting implications for Corbet's metaphor.

"If we were once following those taillights through the dark, well, those taillights have kind of faded away, and we're now navigating completely in the dark," Corbet said. "So we have to figure out where it is that we're going to go."

"And that's fine," he added. "We can do this." ®