Original URL: https://www.theregister.com/2010/10/21/openstack_austin_release/

OpenStack unfurls first full cloud fluffer

Compute and storage launch control, NASA style

By Timothy Prickett Morgan

Posted in Channel, 21st October 2010 14:54 GMT

The OpenStack project, which has NASA and Rackspace Hosting and now 35 other organizations co-developing computing and storage controllers for cloudy infrastructure, has launched its first release, codenamed Austin, right on schedule.

While OpenStack warns that the product is not yet ready for primetime - think of it as a preview or developer release - both NASA and Rackspace have the Austin code running in production already.

When the OpenStack project debuted back in July, both NASA and Rackspace were individually wrestling with the limited scalability of the cloud controllers used to manage farms of virtualized servers. NASA was concerned that its cloud fabric of choice, Eucalyptus, was not scalable enough and was not fully open source, and so it started coding its own homegrown cloud fabric controller, called Nova, which it open sourced.

Rackspace coincidentally approached NASA about working together, and the two decided to form OpenStack, which takes NASA's Nova and some bits from Rackspace's own Ozone controller and mixes them with Rackspace's Cloud Files cloudy storage controller, now called OpenStack Object Store and nicknamed Swift, to create a unified compute-storage cloud.

Different hypervisors plug into the cloud controller to do the actual server virtualization, and both organizations, as well as the growing list of OpenStack partners, want OpenStack to be a kind of Switzerland for x64-based clouds, with a full set of open APIs that allow different hypervisors and tools to plug into the OpenStack controllers.

The Austin release is the first code from OpenStack to include the merged Nova and Ozone cloud controller code, says Jim Curry, vice president of corporate development at Rackspace and general manager of the OpenStack project. The Swift code, written in Python, was released when OpenStack launched and is production-grade code already.

The Nova cloud fabric controller in the Austin release relies heavily on the original Nova code from NASA, but there have been a bunch of changes in its Python code. The original Nova controller only supported the KVM hypervisor championed by commercial Linux distributor Red Hat, but the updated controller, which still bears the Nova name, can now support the open source Xen hypervisor through interfaces to the libvirt tool (also shepherded by Red Hat) as well as the full-on XenServer hypervisor (thanks to work from Citrix).
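To get a feel for how one controller can drive several hypervisors through libvirt, here's a minimal sketch using the libvirt Python bindings. The connection URIs are stock libvirt ones; the loop is illustrative, not Nova's actual plumbing.

    import libvirt  # Python bindings for the libvirt virtualization API

    # The same libvirt calls work against KVM, Xen, or UML; only the
    # connection URI changes. That is what lets a controller stay
    # hypervisor-agnostic.
    URIS = {
        "kvm": "qemu:///system",
        "xen": "xen:///",
        "uml": "uml:///system",
    }

    def list_running_guests(flavor):
        """Connect to the chosen hypervisor and list its running domains."""
        conn = libvirt.openReadOnly(URIS[flavor])
        try:
            for dom_id in conn.listDomainsID():
                dom = conn.lookupByID(dom_id)
                print(dom.name(), dom.info())  # name, then state/memory/vcpus
        finally:
            conn.close()

    list_running_guests("kvm")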

Interestingly, as El Reg previously reported, Oracle's VirtualBox, a type 2 or hosted hypervisor, can also be controlled by the Nova controller, although Jonathan Bryce, the tech strategist at the OpenStack project and founder of Rackspace's own cloudy infrastructure biz, says that VirtualBox is not a first-class citizen in the ranking of server hypervisors.

By the way, User Mode Linux, an alternative means of hosting multiple Linux instances on a single Linux operating system, is a first-class citizen as far as the Nova cloud fabric controller is concerned, now that support for UML has been added with the Austin release.

Less is better

The Austin release includes a server image management system called Glance, a registry and delivery service for server images that helps administrators as they wrestle with thousands of physical servers, countless virtual servers, and the storage allocated to them. Glance allows admins to take snapshots of running VMs and store them out on the storage clouds controlled by the Swift storage controller.
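For a concrete feel of the Swift end of that arrangement, here's a minimal sketch of parking a saved image into a Cloud Files-style object store over its REST interface. The endpoint, token, and names are placeholders, not anything Glance itself emits.

    import urllib.request

    # Swift exposes a simple REST interface: objects are PUT into
    # containers under an account. The storage URL and token below are
    # placeholders for whatever a real deployment hands out at auth time.
    STORAGE_URL = "http://swift.example.com/v1/AUTH_demo"
    AUTH_TOKEN = "placeholder-token"

    def store_snapshot(container, name, path):
        """Upload a saved VM image file as an object in the container."""
        with open(path, "rb") as f:
            req = urllib.request.Request(
                url=f"{STORAGE_URL}/{container}/{name}",
                data=f.read(),
                method="PUT",
                headers={
                    "X-Auth-Token": AUTH_TOKEN,
                    "Content-Type": "application/octet-stream",
                },
            )
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.reason)  # expect 201 Created

    store_snapshot("snapshots", "vm-0001.img", "/tmp/vm-0001.img")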

Nova was originally coded to make use of local storage inside of physical servers, not a storage cloud, but did have APIs that allowed it to mimic Amazon's EC2 compute and S3 storage clouds, so in theory data could be stored out there on the Amazon cloud and Nova could control Amazon server images. The Glance tool can similarly interface with S3 and other external storage clouds, if companies want to point their VMs to external storage or just park backups of server images out there.
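Those Amazon-compatible APIs mean stock EC2 tooling such as the boto Python library can, in principle, be pointed at a Nova endpoint instead of at Amazon proper. A minimal sketch, with the host, port, and credentials as placeholders for whatever a given deployment uses:

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Because Nova speaks the EC2 wire protocol, the usual boto
    # connection just needs to be aimed at the private endpoint.
    # Keys, host, port, and path here are placeholders.
    region = RegionInfo(name="nova", endpoint="nova.example.com")
    conn = boto.connect_ec2(
        aws_access_key_id="YOUR-ACCESS-KEY",
        aws_secret_access_key="YOUR-SECRET-KEY",
        is_secure=False,
        region=region,
        port=8773,
        path="/services/Cloud",
    )

    # The familiar EC2 verbs then work as they would against Amazon.
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print(instance.id, instance.state)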

You might be thinking that a cloud fabric controller and an image management service implemented in Python would have lots of code, but according to Bryce, the two programs combined have under 10,000 lines of code. "Less is better," Bryce quips.

The Austin release sees a bunch of other changes, aside from porting portions of Nova that were written in C and C++ to Python, adding more hypervisors, and bolting on the Glance interface between the OpenStack compute and storage clouds. The original Nova code made use of the Redis distributed key-value store to hold metadata relating to compute and storage instances, but Bryce says this has been replaced with SQLAlchemy, a database abstraction layer from the Python toolkit that allows the metadata to be stored in MySQL, PostgreSQL, or even SQL Server if you want.
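The payoff of an abstraction layer like SQLAlchemy is that swapping the backing database is a one-line change to the connection URL. A minimal sketch, with a toy table standing in for Nova's real schema and example URLs in the comments:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    # A toy stand-in for the kind of instance metadata Nova keeps.
    class Instance(Base):
        __tablename__ = "instances"
        id = Column(Integer, primary_key=True)
        hostname = Column(String(255))
        state = Column(String(32))

    # Only the URL changes when you swap databases:
    #   mysql://user:pw@dbhost/nova       (MySQL)
    #   postgresql://user:pw@dbhost/nova  (PostgreSQL)
    engine = create_engine("sqlite:///nova_demo.db")  # local file for testing

    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add(Instance(hostname="vm-0001", state="running"))
        session.commit()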

The Austin release retains the EC2 and S3 APIs in the original Nova cloud fabric controller, but it also has a whole new set of APIs that tie directly into OpenStack features that are not part of EC2 or S3. While people have been making some noise about OpenStack abandoning the Amazon-compatible APIs, Bryce says that in fact some Rackers have been making enhancements to the EC2 and S3 features for the Austin release.
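The native API is plain authenticated REST in the style of Rackspace's Cloud Servers API. A hypothetical listing request, with the endpoint and token as placeholders:

    import json
    import urllib.request

    # A hypothetical server-listing call; the endpoint and token are
    # placeholders for what a real deployment's auth step returns.
    API_ENDPOINT = "http://nova.example.com/v1.0"
    AUTH_TOKEN = "placeholder-token"

    req = urllib.request.Request(
        url=f"{API_ENDPOINT}/servers",
        headers={"X-Auth-Token": AUTH_TOKEN, "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))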

While both NASA and Rackspace are using the Austin release of the Nova and Swift code in production, Bryce does not advise service providers or enterprises eager to build clouds to jump the gun. Because the two organizations built the code and are intimate with it, NASA and Rackspace can run it across thousands of servers that they already have in production.

The code is perfectly fine for proofs of concept and for testing on modest-sized clouds. Rackspace does not plan to move fully over to the OpenStack code until the second quarter of 2011, a couple of releases from now, when it will have the scale that the hoster requires for its cloudy infrastructure.

The design goal for OpenStack, as El Reg has divulged, is for it to control one million servers and 60 million virtual machines.

The OpenStack project is hosting its next design conference in San Antonio in November, where it will hammer out the feature set for the next release, nicknamed "Bexar" after the county where that Texas city resides, due in January 2011. The hot topics, no doubt, will be adding support for VMware's ESX Server and Microsoft's Hyper-V hypervisors, which are both necessary for OpenStack to be as ubiquitous as its proponents want it to be. ®