SUSE Linux tunes up for latest iron with SP 3
Updates to virty machines and their hypervisors, too
SUSE Linux is juicing its Enterprise Server 11 variant of Linux with Service Pack 3. Among many nips and tucks, the SP3 update brings support for new and emerging hardware to the operating system.
The company, the open source operating system arm of the Attachmate conglomerate owned by the private equity trio of Francisco Partners, Golden Gate Capital, and Thoma Bravo, already moved to the Linux 3.0 kernel with SLES 11 SP2 in February 2012.
Matthias Eckermann, senior product manager for SUSE Linux Enterprise, tells El Reg that the company has rolled up all the recent patches and updates to the Linux kernel and back-ported them to the 3.0 kernel already certified in SLES 11 SP2. That kernel had support for up to 4,096 virtual processors, and now with the SP3 update, the chameleon-colored operating system can address up to 16TB of physical memory on a single system.
"As far as I know, no one has ever tested a machine that large," says Eckermann. "It will get interesting when you go beyond 64TB for a data warehouse because you cannot always partition a database. So it is good to have a very large memory space."
The Itanium and PowerPC/Power kernels can scale to 1PB of main memory, theoretically, while IBM's System z mainframes top out at 4TB and 64-bit x86 processors peak at 64TB. The tested and certified limit on an Itanium machine is 8TB, on a mainframe is 256GB, and on x86 machines is now 16TB. With the SP3 update, the PowerPC/Power kernel has been tested up to 16TB as well, as you can see in the release notes.
The SP3 update supports IBM's latest eight-core Power7+ processors, which rolled out last fall in some machines and this spring in others, and according to Eckermann has been updated so the kernel can boot on IBM's future Power8 chips, which are expected sometime next year.
Future UV shared memory systems from Silicon Graphics are also enabled with SP3, and although the release notes are not specific, that has to mean a variant of the UV 2 system using Intel's "Ivy Bridge" Xeon E5 chips expected in the third quarter, probably around September if the rumors are right.
The kernel supports other Ivy Bridge machines, of course, including the future "Ivy Bridge-EX" Xeon E7 processors. Intel's current "Haswell" Core chips for PCs and Xeon E3 chips for servers and workstations are also enabled with SP3, as are the Opteron 4300 and 6300 processors from AMD.
On the server virtualization front, the latest KVM 1.4 and Xen 4.2 hypervisors have been pulled into SLES 11 SP3, and the virtual CPU and virtual memory limits of their virtual machines have been consequently expanded from the SP2 release.
KVM now supports up to 160 virtual CPUs and 2TB of virtual memory per guest partition, with up to eight virtual network interfaces per guest. There is no limit on the number of guests, but the total number of virtual CPUs across guest partitions cannot exceed eight times the number of cores in the physical machine. (On a two-socket Xeon E5 server with 32 threads, that is 256 virtual CPUs max.)
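That overcommit ceiling is simple arithmetic, and a quick sketch makes the worked example concrete. This is a hypothetical helper (not a SUSE tool), assuming the eight-vCPUs-per-core rule described above, where the article counts hardware threads as cores:

```python
def max_total_vcpus(physical_cores: int, ratio: int = 8) -> int:
    """Ceiling on total vCPUs across all KVM guests on one host.

    SLES 11 SP3 caps the sum of virtual CPUs across guest partitions
    at ratio * physical cores (8x, per the release notes).
    """
    return ratio * physical_cores

# Two-socket Xeon E5 server with 32 threads (counted as cores here):
print(max_total_vcpus(32))  # 256, matching the article's example
```

Note that each individual guest is still capped at 160 vCPUs regardless of this host-wide total.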
The Xen hypervisor, still in use by plenty of SUSE Linux shops, can span 255 processors or threads (whichever is the larger number in the system) and address 2TB of physical memory. A virtual machine running atop Xen can access 32 virtual CPUs and 512GB of virtual memory.
Both the KVM and Xen hypervisors have been tweaked to support a bunch of new instructions in the Haswell family of chips, such as fused multiply-add, 256-bit integer vector operations, and the MOVBE byte-swapping instruction, which will apparently help the performance of both hypervisors.
Microsoft and SUSE Linux have worked together to bring memory ballooning for virtualized guests to SLES 11 SP3 when it runs atop Microsoft's Hyper-V hypervisor. Hyper-V also now supports host-initiated guest backups using the Windows VSS framework, and the Hyper-V Vmbus protocol – the virtual link between the hypervisor and guest – has been brought up to the Windows Server 2012/Windows 8 level. This is a more efficient implementation of that interconnect protocol, SUSE Linux says.
Linux containers, the virtual private server alternative to hypervisor virtualization, have been updated with the latest patches as well. LXC, like Solaris containers on Sparc and x86, Parallels on x86 iron, and workload partitions on AIX, puts a shared kernel and file system underneath a bunch of operating system runtime sandboxes that are logically separated from each other.
With SP3, SUSE Linux is adding the Oracle Cluster File System 2 (OCFS2) to the stack of supported file systems, which includes ext3, ReiserFS 3.6, XFS, and Btrfs. Eckermann says that customers are using XFS and ReiserFS in production to address more than 8TB in a single file system. XFS, which has been in SLES for nearly eleven years now, is the preferred file system for heavy loads and parallel read and write operations (such as serving up Samba or NFS). The ext4 file system is fully supported in SP3, but only as a read-only file system, and the company would much prefer that you use Btrfs.
Speaking of which, Btrfs – which first appeared in production-grade form with SLES 11 SP2 a little more than a year ago – has been patched with the latest updates. "Many people are looking at it, but not many people are putting it into production. But this is typical," says Eckermann.
There are no major technology previews in SLES 11 SP3, but there are a few smaller ones. The KVM hypervisor is being shown in the early stages running on IBM mainframes. Nested virtualization support, which was already available for AMD's Opteron processors, is now in preview for Intel Xeon chips that sport the VT virtualization extensions.
SUSE Linux is also rolling out the libguestfs tool, which is used to access and modify virtual machine disk images. Hot-add memory for Xeon, Itanium, and Power machines is still in tech preview, too.
All in all, there are 70 driver updates with SP3 in addition to the tweaking of the kernel to support the new processors and related chipsets, says Eckermann. These include a slew of 8Gb/sec and 16Gb/sec Fibre Channel storage adapters, 10Gb/sec and 40Gb/sec Ethernet adapters, and the Open Fabrics Enterprise Distribution (OFED) 1.5.4 open source driver set for InfiniBand and Ethernet adapters, which brings Remote Direct Memory Access (RDMA) support to those adapters. RDMA reduces latency by allowing network cards to access the main memory of other servers on the network without going through the network stack in the operating system.
You can see a full hardware compatibility list here. ®