Intel: 'All your clouds are us inside'
Xeon and Atom in the eye of the storm
Intel is getting used to being the big chip on the data center campus, and it is not about to let upstart vendors peddling other chips (that means you, Advanced Micro Devices) or architectures (that means you, ARM Holdings and friends) move in on its server turf. Not without a serious fight, at least, and certainly not in the cloudy infrastructure portion of the server racket that is exploding.
Cloud computing may not be a completely new way of moving bits and bytes around to do work, but just the same, incumbents can get pushed down to the far end of the data center feed trough, as has been the case with prior data processing and information technology transitions. That goes as much for those who supply systems, storage arrays, and operating systems as it does for those who make their sub-components, like processors and chipsets.
Intel has been vague about exactly what its cloud strategy is, and understandably so. You have to make a pretty big leap from Xeon and Atom processors and their chipsets to a cloud. There are layers of hardware and software and many partners who turn chips into clouds, and they are the name brands on the cloud fronts where the IT weather is happening and changing.
Still, Intel can't afford to just let server makers and cloudy software tool providers do their thing and hope for the best. It has to nudge, encourage, cajole, listen, and react to a wide variety of partners and competitors and do all that it can to make sure that no matter what happens, its chips are at the heart of the clouds. Or perhaps the calm in the eye of the storm. Pick your weather metaphor and amuse yourself.
That job falls largely to Jason Waxman, general manager of high density computing in Intel's Data Center Group. The cloudy parts of Intel are spread out across the company's several Beaverton, Oregon, campuses, which is also where Intel does a substantial amount of research into server designs, custom server manufacturing, chip research, and wafer baking on both research and production scales.
Last week, Intel hosted an event called "A Day in the Clouds" for members of the press, and El Reg sat in on briefings with Intel's top brass in the cloud organization and did a tour of the labs behind its Cloud Builders program, which puffs up reference architectures of hardware and software for a multitude of cloud computing scenarios. This was the first time that Intel put some numbers on the cloud phenomenon and articulated its role in helping IT customers transform their brittle machinery and software into something a little more manageable and a lot more like the dream of virtualized, utility computing. That dream has been in development since the shortcomings of cheap, distributed computing became apparent in the wake of the dot-com bust.
With x64-based machines accounting for 97.4 per cent of the 2.38 million server shipments in the most recent quarter (according to Gartner), and with Intel having around a 93 per cent share of those shipments, the company has somewhere north of 90 per cent of overall server shipments, so you might think Intel would rest on its laurels. But the company's core philosophy, as espoused by founder and former chairman Andy Grove, is that only the paranoid survive. Despite what the company says publicly about how no one is all that interested in super-low-power servers, or takes the possibility of ARM-based servers all that seriously, you can bet that there are plenty of people inside Intel who are paranoid about these and other possibilities, and that they are working feverishly to make sure the future that Intel wants for the cloud is the one we all move toward.
Intel started up its Cloud Builders program in the fall of 2009, and with the launch of the Open Data Center Alliance last October, the company ramped up the effort and put some resources behind it to help various cloudy tool providers build and test clouds using a mix of their wares. The result is those boring old reference architectures that don't make for hot news, but which can save IT shops a lot of grief when they go to build their own private clouds and start integrating them with public clouds.
Other Intel execs talked about the Cloud Builders program in detail, and El Reg will get into that separately. Waxman gave a higher-level view, first talking about the growth opportunity that Intel was very keen on not missing. This is now known as Intel's Cloud 2015 Vision, something it touched on a tiny bit back when the ODCA was launched last October.
Waxman said that Intel has been working with cloud software and service providers for the past four years to come up with this vision thing, and threw around some statistics as general managers are apt to do. Waxman said that the Intertubes would be adding 1 billion more "netizens" by 2015, and that four years from now, there would be more than 15 billion connected devices linked to the Internet - four times what we have today. Intel has also extrapolated some data from networking giant Cisco Systems and believes that in 2015, there will be over 1 zettabyte of data moving over the Internet.
That's 1 million petabytes or 1 billion terabytes, depending on how you want to think of it. The data growth is being driven by ever richer and more human data formats. In 2010, said Waxman, more data moved over the Intertubes than had moved over the network cumulatively from its creation through 2009.
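For the record, the unit arithmetic behind that claim works out like this (a quick sketch in Python, using decimal SI prefixes rather than binary ones):

```python
# Decimal (SI) storage units, each a factor of 1,000 bigger than the last
terabyte = 10**12   # bytes
petabyte = 10**15   # bytes
zettabyte = 10**21  # bytes

# One zettabyte expressed in the smaller units Waxman cited
print(zettabyte // petabyte)  # 1000000 petabytes
print(zettabyte // terabyte)  # 1000000000 terabytes
```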
"This is a tremendous amount of growth," Waxman said, and added that cloud service revenues are expected to grow at more than a 20 per cent compound annual growth rate between 2009 and 2014. (Those are Gartner figures.) No one said anything about anyone making money on all this traffic - look at how hard it is for Rackspace Hosting and Terremark to make a buck - but all that network traffic will probably make Intel some dough.
All your clouds are Intel Inside
It looks like system administrators will also be keeping their jobs unless cloudy infrastructure management tools make a quantum leap. Waxman pulled out a statistic from Bain & Company, the consulting firm that is not related to the private equity firm of similar name (Bain Capital), that estimates that between now and 2015, IT organizations worldwide will spend $2 trillion on server, storage, and network deployment unless virtualization of these components improves and tools to manage them scale and get easier to use. Just some modest improvements, says Waxman, can result in about $25 billion in reduced annual IT spending by 2015.
Intel's own prognosticators have sat down and looked at how the cloudy infrastructure market (as distinct from general purpose computing) will play out. Waxman said that Intel estimates that, all things remaining the same, storage attached to cloudy infrastructure will grow by a factor of 16 between now and 2015, and networking capacity will have to grow by a factor of 8 to keep up with fatter data and more users banging away on clouds to get at that data. Under present course and speed, Intel estimates that raw computing capacity will see the largest growth, a factor of 20, according to the Intel marketing wizards.
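Those multipliers imply steep annual growth rates. If you assume the 16X, 8X, and 20X figures cover the five years from 2010 through 2015 (the window is our assumption, not something Intel spelled out), the implied compound annual growth rates come out roughly like this:

```python
# Implied compound annual growth rate from an overall growth factor:
# factor = (1 + cagr) ** years  =>  cagr = factor ** (1 / years) - 1
def implied_cagr(factor, years):
    return factor ** (1.0 / years) - 1

years = 5  # assumed window: 2010 through 2015
for name, factor in [("storage", 16), ("networking", 8), ("compute", 20)]:
    print(f"{name}: {implied_cagr(factor, years):.0%} per year")
# storage works out to around 74 per cent a year, networking around
# 52 per cent, and compute around 82 per cent
```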
It's enough to make an Intel shareholder giddy, and maybe even make an AMD shareholder hold out for some hope.
Cash cow and the piranhas
Those kinds of numbers are also the ones that attract competitors like piranhas to a cash cow that has wandered into the warm waters of the Amazon (puns intended) to be stripped of its flesh, down to the bone. If there isn't a quad-core or eight-core ARM-based chip code-named "Piranha," there should be, just to make Intel twitch.
The Cloud 2015 Vision is not just about chasing the exploding cloudy infrastructure business, or more accurately, the transformation of static server images on physical boxes to mobile, virtualized images that can move around a company or jump the firewalls to frolic on public clouds if they are given permission to. Intel is working with partners to create federated clouds that can share data securely across public and private clouds, to automate the clouds so administrators can go from managing dozens of physical machines to hundreds or thousands of servers, and to create middleware that can make clouds aware of the client devices they are interacting with, optimizing the delivery of applications based on the processing, video, and network capacity available to the client, as well as its battery life, should it be a device not plugged into a wall.
All of this might seem pretty remote from a cloudy data center, but Waxman says that Intel is not losing focus. Because servers account for 50 per cent of the total cost of ownership of a typical Internet data center over three years, and power consumption is another 23 per cent, there is a lot that Intel can do to make it less expensive to do server computing and therefore leave more money for companies to acquire more iron. Labor accounts for 13 per cent of TCO, says Waxman, with networking representing another 6 per cent, facilities 5 per cent, and other items 3 per cent. You can bet Paul Otellini's last dollar that Intel is going to do all that it can to reduce those power, labor, facilities, and networking costs so companies have plenty of dough left over to splurge on servers.
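Waxman's breakdown conveniently covers the whole pie, which makes it easy to see where the dollars land for any given budget. A quick sketch (the $10m three-year budget here is an illustrative figure of ours, not one of Intel's):

```python
# Waxman's three-year TCO breakdown for a typical Internet data center
tco_share = {
    "servers": 0.50,
    "power": 0.23,
    "labor": 0.13,
    "networking": 0.06,
    "facilities": 0.05,
    "other": 0.03,
}
assert abs(sum(tco_share.values()) - 1.0) < 1e-9  # shares cover the whole pie

budget = 10_000_000  # hypothetical three-year budget, in dollars
for item, share in tco_share.items():
    print(f"{item:>10}: ${share * budget:,.0f}")
```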
The assumption is that the demand for servers is nearly perfectly elastic, provided there is enough power and space to keep them fed and housed. I have my doubts about this, but I seem to be in the minority. And I will concede that it may take much more server capacity to chew through data in many different ways than it does to generate and house it.
In any event, Waxman explained that Intel would continue to develop key technologies like ever-more efficient Xeon and Atom processors, solid state disks, and 10 Gigabit Ethernet adapters that were optimized for cloudy workloads as well as integrated virtualization-supporting circuits on those chips and adapters. Other key technologies that Intel is integrating with cloudy tools include its Node Manager and Data Center Manager tools for controlling servers and racks of servers. The Trusted Execution Technology (TXT) is going to play a role in the securing of clouds, says Waxman, and so are the AES encryption circuits.
Both made their debut in the "Westmere-EP" Xeon 5600 processors last year, and will eventually be cascaded across the Xeon line and presumably be added to Atom chips if hyperscale data centers start demanding low-powered boxes like the SeaMicro SM10000-64, which packs 256 of the dual-core, 64-bit Atom N570 processors into a 10U chassis, complete with networking and load balancing and with 4 GB of memory per processor.
TXT and AES are vital, says Waxman, to creating trusted computing pools and to allow for secure, encrypted migration of virtual servers from pool to pool. Now Intel has to get partners in gear and using these technologies higher up in the cloud stack. ®