Red Hat pumps up Enterprise Linux to 6.3

Get ready to become a digital numad

Ahead of its Red Hat Summit in Boston next week, and a day after reporting its financial results for fiscal Q1, commercial Linux distributor Red Hat is pushing out the next iteration of its Enterprise Linux operating system for servers and workstations.

RHEL 6.0 launched in November 2010, and it was a major update, with more than 2,058 programs (twice as many as in RHEL 5) and a move to the Linux 2.6.32 kernel. With the RHEL 6.3 release, available today, the big focus is the usual round of updates to the kernel and the driver stack to take advantage of new hardware that has come to market in the past six months.

"Hardware enablement is a big piece of every release," Tim Burke, vice president of Linux engineering, tells El Reg. In this case, there are a number of optimizations that have been made explicitly for Intel's new Xeon E5-2400, E5-2600, and E5-4600 processors, which came out in March and April, as well as Advanced Micro Devices' Opteron 6200s, which launched last November.

There's a "full spectrum" of device driver updates, plus tweaks to the KVM hypervisor (which Red Hat is championing against VMware's ESXi, Citrix Systems' XenServer, and Microsoft's Hyper-V) that improve memory handling and I/O breakpoint handling for virtualized guests. The update also includes tweaks to the Linux kernel to support forthcoming iron – new Power7+ and "zNext" processors are expected from IBM later this year, and Intel and AMD are also working on new CPUs – but Burke would not confirm or deny that these chips are already supported in the RHEL 6.3 release. "Some of the best stuff in our release, we can't even tout at the time because of NDAs," says Burke with a laugh.

But we can safely assume that support for these future chips is in the kernel, since they have to be tested before coming to market.

Life is somewhat easier for Red Hat now that it has dropped support for the Xen hypervisor in the RHEL 6.X family, as well as killing off support for Itanium processors from Intel. While Red Hat supports running RHEL inside IBM's PowerVM hypervisor on Power-based systems, as well as on z/VM and LPARs on IBM mainframes, those hypervisors are under the control of Big Blue and are not Shadowman's problem. The situation could get more interesting once ARM chips get a proper KVM hypervisor, but as Burke points out, the ARM architecture does not have the on-chip support for virtualization that x86, Power, and mainframe processors have (as do Itanium and Sparc T series chips, which cannot run RHEL 6). Until ARM chips do get this, KVM has to run in paravirtualized mode, and the "performance would not be that great."

With RHEL 6.3, the KVM hypervisor embedded in the operating system runs atop those Xeon E5 and Opteron 6200 processors, inheriting their support from the underlying RHEL 6.3 kernel; guest operating systems are also able to pass through and make use of features on these new chips. Red Hat has also boosted the number of virtual CPUs and the amount of virtual memory that a VM can span atop the hypervisor. Virtual CPUs were boosted from 64 to 160, the latter being the top-end number of threads in an eight-socket box using Intel's ten-core "Westmere-EX" Xeon E7 processors with HyperThreading turned on. VM guest addressable memory was boosted from 512GB to 2TB, and the number of virtual disks that a VM can use was increased from 60 to thousands.
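Those new ceilings correspond directly to a guest's definition. A minimal libvirt domain fragment sized at the limits might look like this (element names follow libvirt's domain XML format; the guest name is hypothetical, and whether RHEL 6.3's libvirt accepts the `unit` attribute is an assumption):

```xml
<!-- Hypothetical KVM guest sized at the new RHEL 6.3 maximums -->
<domain type='kvm'>
  <name>bigvm</name>
  <vcpu>160</vcpu>                  <!-- up from 64 -->
  <memory unit='GiB'>2048</memory>  <!-- 2TB, up from 512GB -->
</domain>
```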

Burke pointed out that these maximums were considerably larger than the guest sizes on VMware's ESXi 5.0 hypervisor, launched last July, with its "monster VM" spanning 32 virtual cores and as much as 1TB of virtual memory atop the hypervisor, which can span 128 physical cores (or threads if you have them turned on) and 2TB of physical memory.

"The capacity of a VM on KVM pretty much closely matches bare metal," says Burke, "and this is important because virtualization is being heavily used for the flexibility it provides, through live migration and other features, not just to drive up utilization."

While Red Hat is ahead of VMware in allowing a virtual machine to more or less span the entire hypervisor if necessary, IBM's logical partitions from 1998 for AS/400s and the follow-on PowerVM hypervisor for supporting IBM i, AIX, and Linux were designed for this capability from the get-go. IBM was, however, a laggard on live partition mobility. So all hypervisor makers have their issues.

With the 6.3 update, for which you can read the release notes here, Red Hat is also tossing in a new tool called Virt-P2V: a program, delivered as an ISO image, that you boot to grab the code running on bare-metal Windows or RHEL servers and send it to a conversion host system, which then wraps it up to run in a KVM virtual machine.

The KVM hypervisor also supports live resizing of disk volumes underpinning the virtual machines, for both Windows and RHEL guests and for any file system and volume manager, not just the LVM/DM volume manager preferred by Red Hat for its distro. KVM also has a new scrub command to wipe clean any data related to a virtual machine: it deletes a VM, overwrites the sectors on its disk with zeros, and then verifies that the zeroing worked, all in one command. LVM now also supports the creation of RAID 4, 5, and 6 arrays directly, rather than having to do so through other tools.
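As a sketch of that one-step RAID creation (the volume group `vg0` and the logical volume names are hypothetical, and exact flag behaviour depends on your lvm2 version), building the arrays directly with LVM takes one command apiece:

```shell
# RAID 5: three data stripes plus rotating parity, so four PVs
lvcreate --type raid5 -i 3 -L 100G -n data vg0

# RAID 6: two parity stripes on top of three data stripes, so five PVs
lvcreate --type raid6 -i 3 -L 100G -n archive vg0
```

These commands need root and real physical volumes, so treat them as illustrative rather than copy-paste material.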

Burke says that LVM, the logical volume manager, is basically front-ending the software RAID tool, MD, to provide this one step for striping up your disks. LVM also now sports thin provisioning, which is basically a way of telling an operating system (either inside a guest VM or running on bare metal) that it has the exorbitant amount of storage allocated to it that it craves, while only giving it the amount of capacity it actually uses when it runs. (Like so many things in the computer biz, it lies.)
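LVM thin pools need real volume groups to demonstrate, but the same allocate-on-first-write lie can be seen with an ordinary sparse file. A minimal Python sketch of the concept (an analogy, not Red Hat's implementation) on Linux:

```python
import os
import tempfile

# Thin provisioning in miniature: a sparse file reports a large
# apparent size, but the filesystem only allocates blocks that
# have actually been written to.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(1024 ** 3 - 1)  # claim 1 GiB of address space...
    f.write(b"\0")         # ...but write only a single byte
    path = f.name

st = os.stat(path)
apparent = st.st_size        # what the consumer thinks it has: 1 GiB
actual = st.st_blocks * 512  # what was really allocated: a few KB
os.unlink(path)
print(apparent, actual)
```

The gap between `apparent` and `actual` is the unbacked promise; LVM's thin pools make the same promise to guest VMs at the block-device level.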

The 6.3 update sports a new feature called numad – short for NUMA daemon – an autotuning facility for multiprocessor systems that use non-uniform memory access (NUMA) clustering. With a NUMA architecture, server sockets have control of a chunk of main memory local to them, but also have access, through a point-to-point interconnect like HyperTransport or QuickPath Interconnect, to the main memory on adjacent (or not so adjacent, if you are doing eight processors) sockets.

Job placement on a system is usually done based on the availability of a CPU to do that work, but on NUMA systems, this can actually hurt performance if you put a job on one CPU but it needs data that is stored in memory on another CPU. To get the best performance possible from NUMA machines, propellerheads have learned to pin specific parts of their code and the data it uses to specific sockets – so as much as possible stays local. Because most Linux shops don't have this kind of skill, the performance team at Red Hat cooked up the NUMA daemon to do the job and data placement automagically, and Burke says that it can reach about 90 per cent of the performance level of hand-tuned code done by an expert over the course of several days.
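The pinning half of what numad automates can be done by hand with CPU affinity calls (binding the memory side as well needs numactl or libnuma on top). A minimal Linux-only Python sketch, with CPU 0 standing in for "the CPUs of one NUMA node" since the node-to-CPU mapping is system-specific:

```python
import os

# Pin the current process to one CPU so its work stays on one socket;
# numad does this kind of placement automatically, for jobs and memory.
original = os.sched_getaffinity(0)  # CPUs we may currently run on
os.sched_setaffinity(0, {0})        # restrict this process to CPU 0
pinned = os.sched_getaffinity(0)
os.sched_setaffinity(0, original)   # put the original mask back
print(pinned)
```

On a real NUMA box you would pass the full CPU set of the target node rather than a single CPU, and bind the process's memory to the matching node as well.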

RHEL 6.3 also includes the usual GCC compiler updates to improve performance, and has the OpenJDK 7 Java stack. ®
