Ex-NASA OpenStackers launch Nebula cloud control freak appliance

Forget OpenStack software disties, says OpenStack co-founder Kemp

Chris Kemp, the former NASA CTO who helped build the honking Nebula infrastructure cloud for the US space agency, and who as the techie on the NASA side spearheaded the development of OpenStack along with Rackspace Hosting, knows about as much about control-freaking clouds with OpenStack as anyone else on the planet. That is why he founded a company called Nebula that seeks to make private clouds easier to build and operate.

After 18 months of development and just ahead of the rollout of the "Grizzly" release of OpenStack, Kemp's Nebula is ready to bring its OpenStack control-freak appliance to market. The machine, which Kemp calls "the cloud computer" and says is a "completely new kind of computing system," has one simple purpose: to make a private cloud something that you plug in and turn on rather than build.

The Nebula One Cloud Controller, technically known as the CTR-1500, is a bit more dense and more capable than the prototype that Kemp was showing off when he jumped ship from NASA to start Nebula in July 2011. The production machine does not push the scalability limits inherent in the Nebula One design, but rather starts out with a fairly modest private-cloud setup and gives Nebula a chance to ramp up its sales and support organization to meet demand for ever-larger private clouds based on OpenStack.

The Nebula One appliance marries a 10 Gigabit Ethernet switch with an x86 server running a hardened and complete OpenStack controller software stack, all sealed up and pretuned to work with specific servers from HP, Dell, and IBM. Just as Cisco Systems has mashed up network switching and systems management inside a switch for its Unified Computing System blade servers, Nebula is mashing up switching and OpenStack into a single controller that can provision server, storage, and networking slices for virtual machines and launch them into production.
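To give a flavour of what that provisioning looks like from the outside, here is a minimal sketch using the stock python-novaclient bindings of the Folsom era, which any OpenStack controller fronts through the standard Nova API. The credentials, endpoint URL, image name, and flavor below are placeholders for illustration, not anything Nebula has published.

```python
# Sketch only: booting a VM against a generic OpenStack Nova endpoint of the
# sort an appliance like the Nebula One exposes. All names are placeholders.
from novaclient.v1_1 import client

# Authenticate against the cloud's Keystone endpoint (illustrative values).
nova = client.Client("demo-user", "demo-password", "demo-tenant",
                     "http://cloud-controller.example.com:5000/v2.0/")

# Pick an image and a flavor the controller already knows about.
image = nova.images.find(name="ubuntu-12.04")
flavor = nova.flavors.find(name="m1.small")

# Ask the controller to carve out compute, storage, and network for the VM.
server = nova.servers.create(name="web01", image=image, flavor=flavor)
print("Booted %s with status %s" % (server.id, server.status))
```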

The plan a year and a half ago was to try to encourage the use of Nebula One appliances with open source servers based on the Open Compute Project designs, but that desire was a bit ahead of market reality, and more importantly, companies like Facebook do not virtualize their servers to begin with and hence have no use for an OpenStack control freak.

Hyperscale cloud operators may start virtualizing at some point if it helps them with system management or other operational aspects of their iron, but their workloads operate at a very different scale from the rest of the IT community, and they have other ways of dealing with moving workloads around their systems.

The prototype 4U Nebula OpenStack appliance from July 2011

The Nebula One controller fits in a 2U chassis (twice as dense as planned) and has its 48 10GE ports pointing out of the rear of the chassis instead of the front as in the original design. Kemp tells El Reg that the company has chosen one of Intel's Fulcrum ASICs for the switch, although he was cagey about which one.

The appliance also has two Opteron G34 sockets – see, people still use x86 chips from Advanced Micro Devices – and while Nebula isn't being precise about which processor, it does say the box has two 1.6GHz chips with 16MB of L3 cache apiece and 85 watt thermals. Run that against all the Opteron 6200 and 6300 possibilities and it means the company has chosen the Opteron 6262 HE low-voltage part.

The appliance has 64GB of main memory plus a 32GB SuperCache MLC mSATA flash drive to cache frequently used OpenStack files. The appliance also has a 256GB 2.5-inch MLC solid state drive, an old-fashioned 1TB 7200rpm disk drive to store log files and other infrequently accessed data, and two 650-watt power supplies.

The Nebula One cloud controller appliance

The iron is interesting, of course, because it shows what smart OpenStack people think is sufficient iron to run an OpenStack controller. But the software that Nebula has cooked up is the really important bit, says Kemp.

Nebula starts out with a base Linux operating system and puts OpenStack on it plus the KVM hypervisor to create what it calls the Cosmos cloud operating system. This is not just any old OpenStack, but one which Nebula programmers – many of whom worked on the "Nova" compute controller at NASA and then on the OpenStack project proper – have ginned up with a homegrown set of user interfaces called Resource Manager.
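As for where KVM sits in that stack, the sketch below uses the standard libvirt Python bindings that Nova's KVM driver is built on. It is illustrative only, assuming a generic KVM host rather than anything specific to Cosmos or Resource Manager, which Nebula has not published.

```python
# Illustrative only: how an OpenStack compute node typically talks to KVM
# through libvirt. This is the open layer underneath Nova's KVM driver,
# not Nebula's proprietary Cosmos code.
import libvirt

# Connect to the local hypervisor; qemu:///system is the usual KVM/QEMU URI.
conn = libvirt.open("qemu:///system")

print("Hypervisor type: %s" % conn.getType())      # 'QEMU' on KVM/QEMU hosts
print("Running guests: %d" % conn.numOfDomains())  # VMs the controller launched

# List the active guests, the same objects Nova tracks as instances.
for dom_id in conn.listDomainsID():
    dom = conn.lookupByID(dom_id)
    print("%s (state %d)" % (dom.name(), dom.info()[0]))

conn.close()
```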

Cosmos is based on the current "Folsom" release of OpenStack, but features from the Grizzly release have already been tested and backported into the Nebula release. Nebula can do this because it has preconfigured clouds based on certified hardware from HP, Dell, and IBM built and running in its labs, and because its coders are so well acquainted with OpenStack that they know when to pull a new feature into testing and then roll it into production.

"The Nebula releases will be completely independent of the OpenStack releases," explains Kemp, because the company wants to keep control of the pace of innovation it rolls out, getting features out as soon as they are ready, not once every six months.

This is precisely the way Red Hat made Enterprise Linux commercial-grade in its early years, backporting features from future Linux kernels into the current one. Sometimes, you can't wait for the community.
