Appliances are the new data centre onesie

It's all coming together

It has been a fun and very profitable couple of decades for upstart IT server and systems software makers.

They have thrown new server technologies at venerable mainframe and minicomputer systems and blasted the data centre into a thousand shiny metal bits. Then they lashed it all together with networks running distributed workloads and scaling horizontally as much as vertically to make a less expensive and more flexible IT infrastructure.

Well, that was the idea. But like many ideas, distributed computing has caused as many problems as it has solved. And now server makers and their software partners are looking to recreate a mainframe or a minicomputer out of piece parts, with all the advantages of a single-sourced, pre-integrated product and without any of the lock-in and lack of choice issues.

Data centres want to have it both ways and vendors are willing to give it to them both ways. They can have best-of-breed components to cobble together or finished appliances – converged systems, engineered systems, unified systems, whatever you want to call them – that are pre-integrated, pre-tested and ready to roll into the data centre by the rack.

Plug and play

You plug in the power cords, you plug in the network cables and you start loading up applications. It is supposed to be no more complicated than plugging in a toaster.

Vendors love appliances not only because they potentially get them more share of wallet, but also because a pre-integrated system with a limited set of components is easier and cheaper to support over a set of customers.

Teradata cobbled together data warehousing appliances many years ago, although they were not called appliances then. And Oracle bought Sun Microsystems three years ago precisely because it saw what it could do in terms of tuning its application software for hardware and vice versa.

Precise configurations with very little choice are the central premise of Oracle's Exadata database and Exalogic middleware clusters, which are based on Intel Xeon processors running Oracle's variant of Linux, and of the Sparc SuperClusters, which are based on Sparc processors and Solaris and can run both database and middleware.

Ditto for IBM's PureFlex infrastructure stacks, which come in x86 and Power variants for running basic infrastructure workloads on virtualised or physical hardware on either Linux or AIX.

Pick a partner

Not everyone has a complete stack of operating systems, middleware, databases and underlying hardware that they control from the processor out to end-user and back to the disk drive again, the way IBM and Oracle have. So others partner.

Hewlett-Packard, Dell, Cisco Systems and Fujitsu do not control the processors or the systems software in their x86 servers, but like IBM and Oracle they are happy to partner with Microsoft, Red Hat, VMware, SAP and a handful of core systems suppliers to create specific systems (and they are starting to call them systems again) that do specific jobs in the data centre.

Appliances are being made for any number of jobs, ranging from running databases and middleware to various kinds of data warehousing and analytics (using Hadoop and other NoSQL data stores) to delivering raw virtual machines using a cloud controller (OpenStack married to KVM, Windows Server married to System Center and Hyper-V, and vSphere and vCloud married to ESXi seem to be the three popular choices on x86 iron).

When public clouds took off at Amazon and Microsoft, it seemed the logical thing to do was to offer a carbon copy of the infrastructure on those public clouds so companies could install the same thing in their own data centres as a private cloud. If the cloud is essentially a big virtual appliance, such a strategy converts it back to a hardware appliance for inside the corporate firewall.

Amazon is firmly against private clouds (despite its partnership with Eucalyptus Systems to do EC2-alike private clouds), believing that cloud means a public cloud run for you by someone else.

Rackspace Hosting will build you an OpenStack-powered cloud and help you install it. In this case the entire private cloud becomes, in essence, an appliance that Rackspace can operate for you if you pay for the managed services contract. (Somewhat ironically, though, Rackspace is pairing OpenStack with the XenServer hypervisor on its public cloud but is using the OpenStack-KVM combination on the private cloud appliance version. Go figure.)

Letting go

Microsoft has yet to let people buy the exact same iron to run an Azure compute or platform cloud in their data centres, but that was the plan at one point. It still seems a good idea, particularly if it is something that Amazon is unwilling or unable to do because of open-source software licensing restrictions.

The way GPL licences work, you can take open source code and modify it without contributing the modifications back to the community if you only peddle a service; but once you distribute a product based on that modified code, you have to let go of that code and share it with the world. Microsoft could give customers an Azure appliance and not care, since it controls its own code.

Back in the summer of 2010, Microsoft announced with server partners Dell, HP and Fujitsu that it would offer private versions of its Azure cloud in appliance form for customers to install in their data centres, as well as hosted versions run by those three vendors.

Thus far, none of the Azure appliances have seen the light of day as a privately installed product. Fujitsu did roll out its Azure-based Global Cloud Platform in the summer of 2011. This cloud is just like the real Azure, complete with raw compute and storage services, SQL Server database services and AppFabric connectivity services, only it runs on Fujitsu servers and storage in a Fujitsu data centre.

What Microsoft has concentrated on instead has been decision support and data warehousing appliances with server partners Dell and HP. There are five such machines, two from Dell and three from HP. Most of them are still based on SQL Server 2008 R2, but one has been updated for SQL Server 2012 and other updates will almost certainly follow.

Dell's Quickstart Data Warehouse Appliance 1000 was announced last July. It put Microsoft's Windows Server 2012 operating system and SQL Server 2012 relational database onto a PowerEdge R720 rack-based server with 64GB of main memory and eighteen 300GB disk drives to create a "database in a box".

The system has two mirrored 600GB disks in the rear of the server where the systems software is installed and the drives in the front of the machine are used to hold data.
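As a rough sanity check of that storage layout, the sums are simple enough to sketch; note that the article does not say how the 18 data drives are configured, so the RAID 10 figure below is purely an assumption for illustration:

```python
# Capacity sketch for the Quickstart Data Warehouse Appliance 1000.
# The RAID layout of the data drives is not stated in the article,
# so the mirrored (RAID 10) usable figure is an assumption.

system_drives_gb = 2 * 600          # mirrored rear pair for systems software
data_drives = 18
drive_size_gb = 300

raw_data_gb = data_drives * drive_size_gb   # raw front-of-box data capacity
usable_raid10_gb = raw_data_gb // 2         # halved if the drives are mirrored

print(f"System drives (mirrored pair): {system_drives_gb} GB raw")
print(f"Raw data capacity: {raw_data_gb} GB")
print(f"Usable data capacity (assumed RAID 10): {usable_raid10_gb} GB")
```

In other words, about 5.4TB of raw data capacity before any redundancy overhead, which is modest by warehouse standards and squarely aimed at the "database in a box" end of the market.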

Fast Tracked

The machine uses the Data Warehouse Appliance Edition of SQL Server, which has all the goodies of SQL Server 2012 Enterprise Edition, including the xVelocity ColumnStore indexing capability and Remote Blob Store (RBS) support for binary large objects.

This special edition of the database software has also been put through Microsoft's Fast Track reference architecture integration testing, which means everything on the box is guaranteed to work together. No fuss, no muss.

Dell is also tossing in a bunch of services, including setup, hands-on training, post-installation checkup, quarterly health checks and access to Dell's Boomi data integration services. This Quickstart Data Warehouse Appliance 1000 has a list price of $69,990, all-in.

Dell's Parallel Data Warehouse Appliance and HP's Enterprise Data Warehouse Appliance and Business Data Warehouse Appliance (for smaller customers or data marts) are all based on Windows Server 2008 R2 and SQL Server 2008 R2.

The appliances are clusters of machines that scale from tens to hundreds of terabytes of database capacity and have the same Fast Track testing to make sure everything works like an appliance. You just plug in the power, plug in the network and start adding data.

The other appliance comes from HP and is called the Business Decision Appliance. It includes SharePoint Server 2010 as well as SQL Server and hooks into the PowerPivot extensions for the Excel spreadsheet.

Calculated risks

Given the benefits of an IT appliance in terms of pre-integration and simplification, the question is why IT shops are not clamouring to get them. One reason is that any shift in architecture in the glass house takes many years because of the long-term refresh cycles.

Another problem is lock-in. Even though IT appliances are for the most part made of commodity parts, they have the feeling of a proprietary system. It may not be so easy to convert the elements of an appliance into general-purpose servers, storage and switches.

The answer to that is to choose appliances that most resemble the general-purpose components you might otherwise buy.

So, if you favour x86 iron and Ethernet networks with a Windows or Linux operating system, don't pick an appliance that is based on an Itanium, Sparc or Power processor that is running a Unix operating system underneath.

This will not necessarily keep you from vendor lock-in, particularly if IBM, Cisco or Oracle owns the whole stack. What companies need to do is some maths. If the appliance costs less to buy and support and has expandable capacity, then the risk of lock-in is probably worth the rewards of simplification, integration, reduced support costs and speed to market.
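That "do some maths" step amounts to a crude total-cost comparison over a refresh cycle. Here is what the back-of-the-envelope version looks like; every figure below is a hypothetical placeholder, not vendor pricing:

```python
def total_cost(purchase, annual_support, integration, years):
    """Crude total cost of ownership over one refresh cycle."""
    return purchase + integration + annual_support * years

# Entirely hypothetical numbers, for illustration only: the appliance
# costs more up front but saves on integration and yearly support.
appliance = total_cost(purchase=70_000, annual_support=8_000,
                       integration=2_000, years=4)
diy_stack = total_cost(purchase=55_000, annual_support=9_000,
                       integration=20_000, years=4)

print(f"Appliance: ${appliance:,}  DIY stack: ${diy_stack:,}")
```

If the pre-integrated box comes out ahead on a sum like this, the lock-in risk starts to look like a price worth paying; if it doesn't, the best-of-breed route keeps its appeal.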

That, at least, is what IT vendors are rubbing their hands over as they weld together their Engineered Systems, PureSystems, Unified Computing Systems, Converged Systems and Active Systems appliances, as well as other myriad variants for specific workloads.

The entire server racket won't be converted overnight to these appliances, so don't get the idea that we are going back to the mainframe and dumb terminals.

Profit motive

But in a flat market that will see a scant three-tenths of a point compound annual growth rate to $57.5bn in server sales by 2016, converged systems, as IDC calls these IT server appliances, are expected to have a compound annual growth rate of 54 per cent between 2010 and 2016, reaching $6.8bn in sales. That is a quadrupling of the market in seven years and represents about one-eighth of the overall server market.
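Those growth figures can be sanity-checked with the standard compound-growth formula; this is a generic helper, not IDC's model:

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# A market that quadruples over seven years compounds at roughly 22 per cent a year
print(f"{cagr(1.0, 4.0, 7):.1%}")   # ~21.9%

# And $6.8bn out of a $57.5bn server market is indeed about one-eighth
print(f"{6.8 / 57.5:.3f}")          # ~0.118
```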

Most of that action will be for infrastructure appliances, according to Jed Scaramella, research manager for enterprise servers at IDC. About 70 per cent of those revenues will come from basic infrastructure appliances and the other 30 per cent from database and application appliances.

Those numbers include only the value of the servers in the appliances – not the networking, software, services and other parts of the stack. So the revenue stream is indeed large. Hence all the excitement among IT vendors.

It will take some serious engineering – mechanical, electrical, software and financial – to get customers used to the appliance approach. But smaller IT shops will probably be keen to simplify their IT infrastructure and get back to the real job of creating the applications that run the business. ®
