SUSE joins Canonical and Red Hat in using Ceph to puff OpenStack cloud
Gets out Crowbar to pop it in
Now that OpenStack is a technically viable infrastructure cloud controller, all of the main Linux distributors are lining up to have a go.
Some have embedded OpenStack in their Linuxes but others have chosen to create a separate OpenStack distribution that rides on their Linuxes, makes use of the KVM hypervisor, and will carry a price tag for commercial-grade tech support. But there are plenty of add-ons that need to be bolted onto OpenStack to make it a viable infrastructure cloud, and one of them is storage. And like its peers Canonical and Red Hat, SUSE Linux seems to be favoring the Ceph distributed object store for its SUSE Cloud OpenStack distro.
That's not a surprise, considering that portions of Ceph have been picked up and adopted by the OpenStack community as its own. Ceph is everything but a floor wax and a dessert topping. It is an open-source distributed object store that was designed to be distributed from the get-go, El Reg was told by James Duncan, CTO at Inktank, the developer and commercial support provider for Ceph.
Other object stores have had distributed architectures shimmed into them after the fact, but Ceph was distributed from the start. Its client has also already been accepted into the Linux kernel.
Ceph was started by Sage Weil for his PhD thesis at the University of California, Santa Cruz, and its development was initially funded by the US Department of Energy. It has since been championed by hosting provider DreamHost, where Weil is a co-founder. DreamHost is currently running an object store based on Ceph that spans more than 3PB.
The interesting thing about the Ceph object store is that you can layer other things on top of it, as you would layer an application on top of an operating system. So, for instance, you can load up the Ceph distributed file system, which is akin to IBM's GPFS, Red Hat's Gluster, or Oracle's Lustre. (Oracle may have bought control of Lustre, but has left continuing development to others.) The most important Ceph features as far as OpenStack is concerned are the Rados block device and gateway. The Rados block device layer allows Ceph to present itself as block storage, and the Rados gateway is a layer that emulates the Swift object storage controller that is paired with the Nova compute controller inside of OpenStack. The Rados gateway can also emulate the S3 object storage employed by Amazon Web Services in its public cloud.
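To make that layering concrete, here is a minimal sketch of how an admin might poke at both layers from the command line, assuming a running Ceph cluster with the stock `rbd` tool installed and a Rados gateway answering S3 requests at `rgw.example.com` (the volume, bucket, and host names are illustrative, not from the article):

```shell
# Carve a 10GB volume out of the default 'rbd' pool, then map it so it
# shows up on the host as an ordinary block device (e.g. /dev/rbd0):
rbd create myvolume --size 10240
rbd map myvolume

# Talk to the Rados gateway exactly as you would talk to Amazon S3, here
# with the s3cmd client pointed at the gateway instead of AWS:
s3cmd --host=rgw.example.com mb s3://mybucket
s3cmd --host=rgw.example.com put backup.tar.gz s3://mybucket
```

The point of the gateway is that nothing in the second pair of commands is Ceph-specific: any S3 or Swift client should work unmodified once it is aimed at the gateway's endpoint.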
SUSE Cloud, the OpenStack distribution created by SUSE Linux, the commercial Linux distro owned by Attachmate, launched back in August as a preview based on the "Essex" release of OpenStack, including some Ceph features. Doug Jarvis, program manager for SUSE Cloud, tells El Reg that full-on SUSE Cloud based on the "Folsom" OpenStack release is expected early next year.
The partnership with Inktank, which Weil formed this year to provide commercial support for Ceph, aims to ensure that companies which build infrastructure clouds based on SUSE Cloud have support for all the features in the cloud control freak.
With Ceph being such an important part of the cloud stack, SUSE Linux must have figured it was better to form a support partnership with Inktank than try to go it alone. As is the case in most of these support partnership agreements, SUSE Linux is providing Level 1 and 2 support for the Ceph bits used in SUSE Cloud, with Inktank providing Level 3 support. Financial terms were not disclosed.
At the moment, according to Jarvis, SUSE Cloud will include the Rados block device and S3/Swift gateway. But SUSE Linux has no plans to adopt or support the Ceph distributed file system. Should HPC clouds take off, SUSE Linux might want to reconsider that.
SUSE Linux and Inktank are also going to be working together to make sure that the Crowbar system deployment tool (created by Dell and used by a number of open-source projects at this point) can deploy SUSE Cloud and Ceph together in a smooth fashion. Crowbar and Chef are key components of the SUSE Cloud management server. ®