Rackspace: Why we're designing our own cloud servers

Just what will it take to compete with Amazon and Google

Exclusive Any cloud computing provider that wants to operate at scale and compete against its peers is under pressure to build some kind of custom hardware. It may, in fact, be necessary to compete at all.

That is what Rackspace, which is making the transition from website hosting to cloud systems, believes. And that's why the San Antonio, Texas-based company started up OpenStack - the open-source cloud controller software project - with NASA nearly three years ago, and accepted an invitation from Facebook to join the Open Compute Project, an effort by the social network to design open-source servers and storage and the data centres in which they run.

Rackspace, which was founded in 1998, grew up just as Linux and rack-mounted, off-the-shelf servers were starting to make their way into data centres in big numbers – but that gear had not yet been fully commercialised, and the company's early machines reflected it.

"What most companies did was colocation," said chief technology officer John Engates, referring to the practise of renting data-centre space, and paying for power and internet connectivity, in order to get a server onto the web. Engates was a founder and manager of Internet Direct, one of the original internet service providers in Texas back when the 'net was being commercialised in the mid-1990s.

"We took the model of putting servers up on racks very quickly and turning them on in 24 hours and we called it managed hosting. At the time, all of our founders at Rackspace were Linux geeks and they were all do-it-yourselfers, and they were literally building white-box servers. They were buying motherboards, processors, and everything piecemeal, and we assembled these tower-chassis form-factors on metal bread racks and it was really not very sexy."

Rackspace CTO John Engates

The description sounds precisely like early Beowulf clusters built from cheap PCs or tower servers, the halls of machines that powered the first dot-com boom, or indeed the early generations of hardware at search engine giant Google. After a few years, Rackspace decided to chase enterprise customers for its managed hosting business, and that meant shifting to higher-end gear.

"We mimicked what the enterprise would do in their data centre to go win business from those enterprises," said Engates. "Enterprises didn't want to think they were being put on a white-box, homemade server. They wanted a real server with redundant power supplies and all that fancy stuff."

Rack servers evolved and matured, giving much better density than a bunch of tower machines stacked on bread shelves, and Rackspace started buying Dell PowerEdge 2650s for the first generation of enterprise-grade kit and then 2850s for the second generation. Today, in its managed hosting business, the split is about 60 per cent Dell iron and about 40 per cent Hewlett-Packard iron, and all of it is, of course, x86 machinery.

Now fast-forward to a couple of years ago, when cloud computing got under way. Instead of dedicating a server to a customer, each machine runs a hypervisor that slices up its processing capacity and memory, and clients are sold access to a pool of these CPU and RAM chunks to run their Windows or Linux workloads on demand.
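To make that slicing concrete, here is a minimal sketch – illustrative only; the host sizes, flavour name, and first-fit policy are our assumptions, not Rackspace's actual scheduler – of how a pool of hypervisors can be carved into fixed CPU/RAM chunks and handed out on demand:

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Flavor:
    # A sellable chunk of a hypervisor: so many vCPUs, so much RAM
    name: str
    vcpus: int
    ram_mb: int

@dataclass
class Host:
    name: str
    vcpus: int
    ram_mb: int
    guests: list = field(default_factory=list)

    def has_room_for(self, flavor):
        used_cpu = sum(g.vcpus for g in self.guests)
        used_ram = sum(g.ram_mb for g in self.guests)
        return (used_cpu + flavor.vcpus <= self.vcpus
                and used_ram + flavor.ram_mb <= self.ram_mb)

def place(pool, flavor):
    # First-fit placement: hand the workload to the first host with room
    for host in pool:
        if host.has_room_for(flavor):
            host.guests.append(flavor)
            return host.name
    raise RuntimeError("pool is out of capacity")

pool = [Host("hv-01", vcpus=32, ram_mb=131072),
        Host("hv-02", vcpus=32, ram_mb=131072)]
two_gig = Flavor("2GB-standard", vcpus=1, ram_mb=2048)
print(place(pool, two_gig))  # hv-01
print(place(pool, two_gig))  # hv-01 again: first-fit packs hosts tightly

Real cloud controllers such as OpenStack do the same job with far more sophistication – scheduler filters, weights, and overcommit ratios – but the economics are the same: the finer you can slice a box, the more of it you can sell.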

"Now," said Engates, "we are basically back to our own designs because it really doesn't make a lot of sense to put cloud customers on enterprise gear. Clouds are different animals – they are architected and built differently, customers have different expectations, and the competition is doing different things."

At first, when it was building its public cloud computing service, Rackspace focussed on getting custom gear from Dell and HP that better fit its needs. The web biz had the two vendors configure and cable up all of the gear in racks, making it easier to buy server and storage capacity, roll it straight into the data centre, hook up power and network, and put it to useful work right away.

And then Frank Frankovsky, vice-president of hardware design and supply chain at Facebook, invited Rackspace to join the Open Compute Project (OCP) and its open-source computer design efforts a little more than two years ago – by sending Engates a message through Facebook, of course. Since then, Rackspace has been moving more and more towards self-sufficiency in server and rack design.

Monitor ports, DVD drives, pretty LCD panels, all in the bin

What is good for Facebook is not perfect for Rackspace, as the latter explained at the Open Compute Summit back in January, but the basic rack and server designs can be tweaked to fit the needs of a managed hosting and public cloud provider.

The first OCP machines for servers and storage roll out in the Rackspace data centres in April: Wiwynn and Quanta are building the servers, and Quanta will also build a just-a-bunch-of-disks (JBOD) array that suits Rackspace's needs better than the giant winged beast Facebook invented for itself and opened up.

"Everything that is in our multi-tenant business is some non-standard server or storage architecture," said Engates, and that can mean something cooked up by a specialist hardware manufacturer or the custom server business units of Hewlett-Packard or Dell. Most of the dedicated hosting is done on plain vanilla, enterprise-class servers, still.

"But that may change over time because we count private cloud in that category and we do have plans over time to offer Open Compute-powered private clouds. So even in the dedicated business, it is likely to be non-branded gear over time."

The vanity-free design appeals to Rackspace for the same reasons it appealed to Facebook – and, indeed, it is why Google started making its own servers many years ago. If you are never going to plug a monitor into a machine, why bother with a console port? You don't need CD-ROM or DVD drives either, and forget that front LCD panel. All of these things block airflow, add cost, and are potential points of failure (hardware or software) in the server, so they should be eliminated.
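A back-of-the-envelope way to see the point: every extra part multiplies the odds that a given box will need a technician. The annual failure rates below are invented for illustration – neither Rackspace nor Facebook has published such figures here – but the arithmetic is the argument:

def p_any_failure(annual_rates):
    # Chance that at least one independent component fails in a year
    p_all_survive = 1.0
    for rate in annual_rates:
        p_all_survive *= (1.0 - rate)
    return 1.0 - p_all_survive

core   = [0.02, 0.03, 0.02, 0.01]  # board, disks, PSU, NIC (hypothetical)
vanity = [0.01, 0.005]             # optical drive, LCD panel (hypothetical)

print(f"stripped-down server: {p_any_failure(core):.1%}")          # ~7.8%
print(f"with the vanity bits: {p_any_failure(core + vanity):.1%}") # ~9.2%

With these made-up numbers, the vanity parts add roughly 1.4 points of annual failure probability – on the order of 140 extra service events a year per 10,000 machines, before counting the cost and airflow penalties of the parts themselves.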

"The goal is to use OCP designs in more locations and to have a lower number of SKUs and fewer parts to stock, and therefore as we increase the number of servers that we buy we can lower the cost," said Engates. "We also improve our ability to maintain them by having fewer machines to train people on; as people understand the machines and get familiar with them, it is easier.

"You homogenise the data centre as much as you can because homogeneity in the data centre is a good thing, you want fewer moving parts in your data centre design and operations, and this is one of the means of getting there. And one of the beautiful things about Open Compute is that we remove things from the servers that we don't need."
