Morphlabs to cloud builders: Osmium is kryptonite to Amazon, Rackspace
Weaken them, and get 60 per cent gross margins
Cloudy infrastructure provider Morphlabs is revving up its private cloud iron while at the same time wooing service providers with add-on software that makes it easier to run the OpenStack cloud control freak and build public clouds.
No one who can do the math and configure and run systems well would argue that compute and storage capacity on public clouds is necessarily cheaper over the long term than in-house infrastructure. Despite this, not every company has the skills to build and run a cloud, or wants to invest in those skills – and that's why Amazon Web Services, Rackspace Cloud, and other services are taking off.
As a result, every service provider that used to have a tidy hosting business is now freaking out, trying to find a way to build their own cloud without having to create their own stacks as Amazon, Rackspace, and others have done.
If it were as easy as loading up OpenStack or CloudStack cloud control freaks on a cluster of naked servers, everyone would have done it already. Getting a cloud up and running is not all that simple, so Morphlabs has cooked up a special bundle of its own add-on software called mCloud Osmium to help service providers build clouds on top of Dell rack servers. The stack also helps them do that with pricing that allows them to make a considerable profit margin and take on Rackspace and Amazon in terms of cloudy server slices.
A double Helix
Morphlabs is a bit of a shape-shifting company – you have to be when chasing the cloud market. The company was founded in 2007 and sold a cloud management tool called Appspace that control-freaked Amazon's then-nascent EC2 compute cloud.
In April of last year, Morphlabs worked with server partner Dell, switching partner Arista Networks, and storage partner Nexenta to forge a "cloud in a box" running the OpenStack cloud control freak called mCloud Rack Enterprise Edition. Only a few months later, Morphlabs launched a cloudy system called Helix that was designed to put servers and storage in the same hyperscale chassis, and to also run the OpenStack cloud control freak.
Specifically, the Helix 1.0 system had four processor nodes, each with two Xeon E5-2600 processors. Three nodes were configured with 64GB of main memory and four 256GB Samsung 830 series SATA III MLC flash drives. The flash drives were protected by RAID 10 striping across mirrors.
One node was configured to be used as an OpenStack Nova compute controller and two were designated as compute nodes hosting KVM hypervisors. The fourth node had only 32GB of memory and four 1TB disks running the NexentaStor storage software.
This storage node ran the ZFS file system with de-duplication software, and hooked into the Nova-volume service that emulates Amazon's Elastic Block Storage (EBS) service. The storage node also had one 256GB flash drive used as an L2ARC cache for the ZFS file system and another 256GB flash drive used as a ZFS system pool.
That was 18 out of 24 slots in the Dell chassis all filled up, and left room to add six more drives as necessary to the compute or storage nodes.
The Helix 2.0 system from Morphlabs
According to Yoram Heller, vice president of corporate development at Morphlabs, that original Helix 1.0 system could handle around 80 virtual CPUs (vCPUs) of compute with 2GB of virtual memory (vRAM) per vCPU. It provided 3TB of very fast SSD storage and could scale up to 500 vCPUs if you clustered a bunch of them together.
Morphlabs sold a single Helix 1.0 system at a suggested retail price of $75,000 plus $10,000 per year for support of the entire stack, for a total of $115,000 over four years. That works out to $359 per vCPU per year.
With the Helix 2.0 system that has just come out, Morphlabs is switching to slightly beefier two-socket Xeon E5 nodes for the OpenStack controller and compute nodes. These nodes have faster processors, and 128GB of main memory each. The storage node configuration stays the same.
The good news is that the new Helix 2.0 setup can support up to 200 vCPUs of compute – a factor of 2.5 improvement over the Helix 1.0 – and has twice the memory on each compute node, all in the same 2U footprint. The Helix 2.0 also costs a little less, with a suggested street price of $107,000 and an annual premium support contract of $9,000 per year. Over four years, that works out to $179 per vCPU per year. Admittedly, without seeing the precise CPU configurations, it is hard to say if that 200 vCPU number is reasonable, but that's the marketing pitch.
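The per-vCPU arithmetic above is easy to check. Here is a quick sketch using the list prices, support fees, and vCPU counts quoted in the article, with a four-year term assumed:

```python
def cost_per_vcpu_year(list_price, support_per_year, vcpus, years=4):
    """Total cost of ownership spread across vCPUs and years."""
    total = list_price + support_per_year * years
    return total / (vcpus * years)

# Helix 1.0: $75,000 plus $10,000/year support, 80 vCPUs
helix1 = cost_per_vcpu_year(75_000, 10_000, 80)    # ~$359 per vCPU per year
# Helix 2.0: $107,000 plus $9,000/year support, 200 vCPUs
helix2 = cost_per_vcpu_year(107_000, 9_000, 200)   # ~$179 per vCPU per year
print(round(helix1), round(helix2))
```

Both results match the figures quoted by Morphlabs.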
To expand the Helix 2.0 system into a private cloud, Morphlabs suggests you buy two of these Helix 2.0 boxes and cluster them in a warm-standby mode, and then buy all-compute expander nodes, which yield 400 vCPUs across four nodes, and all-storage expander nodes, which have 15TB of capacity.
Both machines come configured with the Ubuntu Server variant of Linux from Canonical and the "Folsom" release of OpenStack; the support contract covers this software.
Something easy for service providers
The Helix 1.0 and 2.0 machines are good for internal private clouds, but they do not have the compute density or the billing and management features that service providers need. So Morphlabs has gone back to the drawing board (and back to Dell for its server iron) and cooked up a rack-based Osmium setup that is distinct from the Helix systems. For now, it is a better fit for service providers who like rack systems and who are looking for a billing and management add-on for the OpenStack cloud control freak, as well.
Service providers need to compete with Amazon and Rackspace on price, after all, and server density is not as important as the bottom line. More importantly, they already have their own relationships with server makers and their distributors to negotiate volume pricing on iron.
And to that end, Morphlabs has come up with a reference architecture of sorts based on a rack server design that service providers can buy at Dell and then run the Osmium stack on to build their public clouds.
The Osmium setup has a two-socket 1U rack server as a master controller, and a similarly configured 1U compute node as the basic unit. This compute node has 128GB of main memory and can handle about 100 vCPUs. The precise configuration of the machine was not available at press time, but Heller did tell El Reg that it uses Intel's new S3700 flash drives.
That basic building block – one rack master controller and one rack compute node – will cost around $17,000 if you buy it online from Dell. If you want to build a proper public cloud, you get a redundant master controller and then add in the storage expander nodes based on the Helix 2.0 setup and compute nodes based on the rack servers. The Osmium configuration can scale OpenStack to a total of 4,000 vCPUs in a single management domain.
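Working backwards from those figures, the 4,000-vCPU ceiling implies a fairly modest node count. A sketch, assuming the roughly 100 vCPUs per 1U compute node quoted above:

```python
import math

VCPUS_PER_COMPUTE_NODE = 100  # per the Osmium 1U node described above

def compute_nodes_needed(target_vcpus, per_node=VCPUS_PER_COMPUTE_NODE):
    """Number of 1U compute nodes needed to reach a target vCPU count."""
    return math.ceil(target_vcpus / per_node)

# 40 compute nodes fills out one Osmium management domain
print(compute_nodes_needed(4_000))
```

That is 40 compute nodes – plus the redundant pair of master controllers and whatever storage expanders you hang off them – in a single management domain.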
Morphlabs is not selling the hardware here, but rather the software. The starter setup of the Osmium software costs $1,000 per month for service providers, and includes the OpenStack controller with the Osmium self-service portal, billing, and management modules. It also includes hooks into the Stripe online payment gateway.
Here's how it works: you buy a base license to the Osmium stack, and then you pay an additional fee for every Osmium compute or Helix 2.0 storage expansion node you add. It costs $10 per vCPU per month for the compute, and $100 per TB per month for the storage to have it managed by the Osmium tools, and that includes support for the underlying OpenStack and Ubuntu Server as well in the cloudy cluster.
Hardware might cost under $10 per vCPU (with a chunk of storage) per month on top of that, so if you charge $40 a month, the rest is your gross margin. The lower you get your hardware cost, the better your margins – and presumably there's some give in that Morphlabs Osmium license cost, too.
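Those margin claims are simple to model. A back-of-the-envelope sketch, where the $40 monthly price, $10 Osmium fee, and $10 hardware cost per vCPU come from the article, and the lower cost figures are illustrative assumptions:

```python
def gross_margin(price, osmium_fee, hardware_cost):
    """Fraction of monthly per-vCPU revenue left after per-vCPU costs."""
    return (price - osmium_fee - hardware_cost) / price

# At the quoted numbers: $40/month revenue, $10 Osmium fee, $10 hardware
print(gross_margin(40, 10, 10))  # 0.5, i.e. a 50 per cent gross margin
# Cheaper iron and a discounted licence (assumed) push toward 60 per cent
print(gross_margin(40, 8, 8))
```

Hitting the 60 per cent figure Morphlabs touts therefore depends on squeezing both the hardware price and the Osmium licence below the headline rates.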
The Osmium rack system tailored for service providers to run OpenStack
Heller says that the goal of the Osmium stack and the underlying hardware is to empower the several thousand small and midrange service providers in the world – which generate somewhere between $8bn and $10bn a year in revenues – who cannot directly compete with the likes of Amazon Web Services with their own engineering departments.
"You can't build another Amazon to take down Amazon," says Heller. "But you can arm a hundred companies to all start taking shots at Amazon."
With the combination of Dell hardware, Morphlabs software and support, and OpenStack, Heller says that service providers can provide better performance than Amazon Web Services and get a 60 per cent or higher margin at the same price that Amazon is charging for capacity.
How Morphlabs stacks up its Osmium OpenStack system against Rackspace and Amazon
The above comparisons are not exactly apples-to-apples in terms of vCPUs, vRAM, and local storage, but they're reasonably close if you look at performance – which is actually what matters.
Morphlabs ran the ancient but updated UnixBench system test on the three virty slices shown to normalize them. You don't need a PhD in math to see that the flashy mCloud small instance had the highest performance, the highest I/O operations per second out of its storage, and the lowest price per month.
Not everyone wants Dell gear, of course, and that's why Morphlabs is working on other partnerships to get the Osmium stack certified on other iron. The Cisco Systems UCS machines seem like an obvious choice, as do HP's hyperscale SL6500 tray servers and ProLiant DL rack servers. AMD SeaMicro and Open Compute machines would make the most sense of all, but Heller isn't saying who Morphlabs will pick next for its hardware partners. ®
skills to build a cloud
From the article -
"But despite this, not every company has the skills to build and run a cloud, or wants to invest in those skills – and that's why Amazon Web Services, Rackspace Cloud, and other services are taking off."
It's a widespread belief that it is simpler to operate in something like the Amazon cloud vs doing things yourself. It's also a spreading belief that organizations actually need "cloud". Most do not – the vast majority do not.
But the real point of my comment is that, again, I want to call out the fact that it requires MORE expertise to properly operate in the Amazon cloud vs building your own stuff (e.g. not trying to build your own openstack shit but going with a more traditional configuration). Because you have real load balancers, you don't have to use a "built to fail" model. You have network configurations that can last for years (e.g. a VM or a host fails, it gets repaired or moved to another host and off you go again – no need to rebuild and re-configure things due to failure).
Having spent the past decade or so working with infrastructure at internet-facing companies, and having spent the better part of 2 years (up until the middle of last year) working with the horse shit that Amazon has in their infrastructure, I can say the level of stress alone is worth going it alone. As for cost savings, it's comical how much more any public cloud costs vs doing it yourself.
It is true that finding people with the skills to build such infrastructure from the ground up is difficult these days, and most CxOs and other management don't realize the number of gotchas, especially when dealing with the Amazon cloud, let alone the other cloud players.
There's a reason why every time Amazon has an outage in their US East region a million different websites go down.
Also, whoever thinks an m1.small on Amazon can get 14,000 IOPS needs to go back to school. That's roughly 56 x 15k RPM disks... yeah, I don't think so.
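That disk count is easy to sanity-check. A sketch, assuming roughly 250 random IOPS per 15k RPM spindle – a common rule of thumb, though the exact figure varies by drive and workload:

```python
import math

IOPS_PER_15K_DISK = 250  # rough rule of thumb for one 15k RPM spindle

def spindles_for(iops, per_disk=IOPS_PER_15K_DISK):
    """How many 15k RPM disks it would take to deliver a given IOPS figure."""
    return math.ceil(iops / per_disk)

# 56 spindles -- an implausible amount of iron behind a single m1.small
print(spindles_for(14_000))
```

Which is the commenter's point: a 14,000 IOPS claim for a small spinning-disk-backed instance doesn't pass the smell test, and is only plausible with flash underneath.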