Intel wants to reconstruct whole data centers with its chips and pipes

The rack is the new server, and the data center is the new rack

Analysis If Intel really wanted to, it could build just about your entire data center infrastructure, supplying every component necessary except the disk drives, main memory, and operating systems for the servers, storage, and switches. And with a stretch, its Wind River Linux could probably cover that last bit.

But that is not the Chipzilla plan for the glass house. Rather, Intel wants to sell an increasingly diverse set of components to those who assemble gear and thereby double up its revenues and profits from the data center.

That, in a nutshell, is Intel's plan to re-architect the data center, divulged last week in San Francisco by the top brass in its Data Center and Connected Systems Group. The plan set out by the group's previous general manager, Kirk Skaugen, was to double the revenues for the part of Intel that sells chips, chipsets, motherboards, and other components aimed at servers, storage arrays, and networking gear to $20bn by 2015.

It was clear that networking and storage would play heavily in those schemes. And, as it turns out, so does a fundamental redesign of how computing, storage, and networking elements are organized.

At one level, Intel is moving to integrate more and more components onto its server processors to create so-called systems-on-chip (SoCs), with a level of component integration more typical of laptops, tablets, and smartphones. Such integration can reduce power consumption.

Getting a signal off a chip and into another one and back is very costly in terms of energy use, but passing a signal across a single chip or within a package with multiple chips takes a lot less juice. The integration that comes with SoCs also gives Intel (and other suppliers of such components, and there are many, particularly those peddling units based on ARM processor cores) an easier target to land software on because the component matrix is greatly reduced compared to putting individual components on a printed circuit board and linking them.

Ironically, Intel is exploding the server, breaking it into modularized components for compute/memory, networking and other types of I/O, and storage, essentially converting a group of servers in a rack into a pool of compute, storage, and networking.

The rack, in this new vision, becomes the new server, and the data center is the new rack. The new data center will be . . . multiregional and invisibly so, we presume. (Or possibly SkyNet or the Cylons. It's hard to say.)

Intel data center chief Diane Bryant

"We are in the middle of a major transformation in the way that IT is fundamentally used," Diane Bryant, general manager of Intel's Data Center and Connected Systems Group, said during her keynote address, which outlined the broader technology trends that Intel sees coming and how it will react to them with different technologies.

In the early years of data centers – when they were called glass houses because of the reverent manner in which mainframes and minicomputers were treated – the idea was all about automating company back-end processes that had been handled by people up to that point, using these new-fangled systems and their software.

That transformation did not begin in the 1990s, as Bryant suggested in her presentation, but rather back in the 1960s with various mainframes at large enterprises and accelerated to a broader set of smaller users through the minicomputer and Unix revolutions in the 1980s and 1990s.

As the 1990s were coming to a close, integrated enterprise resource planning software, which handled not only back-office operations such as accounting and payroll, but also supply chain, warehouse, wholesale distribution, and customer management, became the norm at many companies, which up until then had done a lot of homegrown application coding. This shift to ERP apps is the main reason why the enterprise software market is roughly twice as large as the enterprise hardware market.

By the 2000s, explained Bryant, with the commercialization of the Internet, the big shift was to reduce costs in the enterprise by getting everyone connected. That cheap connectivity and the ubiquity of the Internet protocols allowed for public networks and private networks to link seamlessly, so businesses could link to their partners and their customers, doing things like online sales and marketing and driving out the cost of doing business.

Here in the 2010s, Bryant says we are at the beginning of a "human-centric" era of computing, which is focused on rapid service delivery through cloud-style computing and myriad devices. This is not screen-scraping mainframe and Unix apps to run on a Windows desktop.

"IT is no longer supporting the business. IT is actually the business. IT is being used to deliver the business results," Bryant proclaimed.

"This is the new virtuous cycle of computing, and to be honest, this is what gives us tremendous confidence in our data center business. As more and more devices come online – there will be 14 billion devices by 2016, with 5 billion being consumer devices and 9 billion being machine-to-machine devices – they require a connection back to the data center."

Intel reckons that for every 600 phones that are turned on, a whole server's worth of capacity has to be fired up (and spread out across the Internet) to keep them fed. Every 120 tablets requires another server's worth of capacity, and so does every 20 digital signs and every 12 surveillance cameras. Storage and network capacity will have to be added to those servers, of course.
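
Put another way, those ratios lend themselves to a bit of back-of-the-envelope arithmetic. The minimal sketch below uses Intel's stated device-to-server ratios; the function and the example device counts are purely illustrative, not Intel's.

```python
# Back-of-the-envelope sketch using Intel's stated device-to-server ratios.
# The ratios come from the keynote; the function and the example deployment
# are purely illustrative.

DEVICES_PER_SERVER = {
    "smartphones": 600,
    "tablets": 120,
    "digital_signs": 20,
    "surveillance_cameras": 12,
}

def servers_needed(device_counts):
    """Estimate how many servers' worth of capacity a mix of devices implies."""
    return sum(
        count / DEVICES_PER_SERVER[kind]
        for kind, count in device_counts.items()
    )

# A hypothetical deployment: one million phones, 100,000 tablets,
# 10,000 digital signs, and 5,000 surveillance cameras.
print(round(servers_needed({
    "smartphones": 1_000_000,
    "tablets": 100_000,
    "digital_signs": 10_000,
    "surveillance_cameras": 5_000,
})))  # roughly 3,417 servers' worth of capacity
```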

The other thing that has Intel excited about its data center business is that the penetration of key new technologies is very low, just like the use of mainframes and minicomputers was exotic in the 1960s and 1970s.

Bryant said that based on its own surveys of enterprise customers during the first quarter, only 6 per cent of companies were using big data analytics tools to make decisions, but 85 per cent of them knew they had to do it.

On the cloud front, an IDC survey from last year shows only 9 per cent of enterprise workloads are on the public cloud, and research from Intersect360 from 2011 shows that only 12 per cent of manufacturing companies in the United States use server clusters and distributed design and simulation software to create, test, and improve their products. And that's with 82 per cent of those manufacturers saying they know they could do better if they installed clusters and software.

The trick for the data center is to allow systems to scale and to do so at low cost, and that is what many of the technologies that Bryant and her team previewed were all about.

As an example of how Intel has helped spur such transformations before, Bryant said that when Intel entered the supercomputer racket in 1997, a year after formally entering the server business, the field was dominated by various expensive vector and scalar systems with proprietary memory or system interconnects.

And since that time, as the x86 processor has come to dominate the Top 500 supercomputing systems in the world, supercomputers have seen a 1,500X increase in performance with only a 4X increase in power consumption; there has been a 100X reduction in the cost of processing a floating point operation over that time.
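
Taken together, those two figures work out to roughly a 375X improvement in performance per watt, as a quick bit of arithmetic shows; the 1,500X and 4X numbers are Intel's, the derived ratio is ours.

```python
# Derive the performance-per-watt improvement from Intel's stated figures.
performance_gain = 1_500   # 1,500X performance increase since 1997 (Intel's figure)
power_growth = 4           # 4X power consumption increase over the same span
print(performance_gain / power_growth)  # 375.0, i.e. roughly 375X more flops per watt
```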

A decade ago, processors largely stopped getting faster, although the math units are getting wider all the time and the number of cores in a socket and systems in a cluster is rising steadily.

Intel is addressing more workloads in the data center and at the network edge

The increasing diversity of workloads that Intel is chasing is one lever for growth in its data center business, and it is separate from the virtuous cycle between device proliferation, online application delivery, and machine-to-machine communication that will drive capacity demands on data centers.

But there are issues that have to be contended with, and the main one is that IT infrastructure is still too brittle and still needs too much human interaction. Neither of these is good for an operation that is running at hyperscale.

"We can solve these problems and we can do it by re-architecting the data center, and that is precisely our plan," said Bryant. "We are going to move the infrastructure from static to dynamic, we are going to do it with software, and we are going to do it for servers, networking, and storage."

On the server front, what Intel wants to do is break the compute, memory, storage, and I/O elements in a system free from each other and then reaggregate them on the fly into pools of capacity that applications can demand slices of as needed.
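
Intel showed no plumbing for how that carving-up would actually work, but conceptually it amounts to something like the hypothetical sketch below, in which a rack's worth of capacity is treated as one pool and workloads take slices from it. The class, the method names, and the numbers are ours for illustration, not an Intel API.

```python
# Purely hypothetical sketch of rack-scale resource pooling; the names and
# numbers are illustrative only, not an Intel interface.

class RackPool:
    """A rack's worth of disaggregated compute, memory, and storage."""

    def __init__(self, cores, memory_gb, storage_tb):
        self.free = {"cores": cores, "memory_gb": memory_gb, "storage_tb": storage_tb}

    def allocate(self, **request):
        """Carve a slice out of the pool, or refuse if any resource is short."""
        if any(self.free[kind] < amount for kind, amount in request.items()):
            raise RuntimeError("insufficient capacity in this rack")
        for kind, amount in request.items():
            self.free[kind] -= amount
        return dict(request)

# Treat one rack as a single pool rather than, say, 40 fixed-configuration servers.
rack = RackPool(cores=2_560, memory_gb=20_480, storage_tb=800)
big_slice = rack.allocate(cores=64, memory_gb=1_024, storage_tb=10)
small_slice = rack.allocate(cores=8, memory_gb=64, storage_tb=2)
print(rack.free)  # whatever is left can be handed to the next workload
```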

In some cases, applications will run on bare metal and take whole chunks of CPU and memory, and in other cases they will run atop server virtualization hypervisors. (And, if Intel ever gets around to buying ScaleMP, applications could span multiple physical server nodes and look like a single system image to an operating system.)

The pooling of memory and compute is not as separated as this chart implies - yet

At the moment, an application is constrained to what is inside the box, explained Bryant, but to be fair plenty of applications have long since been designed to scale horizontally, and often in the Web, application, and database tiers independently from each other.

What can be said about servers, then, is that a physical server's configuration is locked in stone, and often if you want more main memory or I/O, you have to upgrade to a new processor even if you don't need extra processing capacity.

This might be fine for a company that buys a server every three, four, or five years to tuck under a desk or hide in a closet, but it is not going to work for a hyperscale data center operator with tens to hundreds of thousands of servers in a data center.

"This is a move towards a true software-defined server," said Bryant.

(And we are now officially getting sick of everyone talking about a "software-defined" anything, even if the description is true.)

The same kind of administration and capacity planning issues that have plagued servers for decades – Bryant cites IDC data that says average utilization is still under 50 per cent after many years of virtualization projects – are just as bad on storage.

For one thing, there are many kinds of storage in the data center – block, file, and object – with some applications wanting their storage locally on the servers where they do the computing (think Hadoop) and others not caring so much if the data is on a storage area network or network file system (think database applications).

The nature of the computing job should drive the choice of storage system type, and so should how hot, warm, or cold that data is. And the issues are more complex than just setting up a SAN and plugging everything into it.

Forget the SAN, storage is going to be a mix of local and central

The big problem, explained Bryant, is that application developers never want storage to be a bottleneck for their application. And so what do they do? They overprovision like crazy.

"There is a huge exaggeration between what is requested and what is needed," Bryant said, referring to her own years as Intel's own CIO. "We need to get to a place where the application requests the storage it needs, whether it is object, file, or block."

Intel already trotted out its conceptual physical and virtual switches at the Open Compute Summit in April, and Bryant talked about the key technologies that go into these boxes.

The news last week at the Re-architecting the Data Center event was that Intel was itself putting software-defined networking (SDN) switches and management tools into pilot in its own data centers. It is not clear if Intel is using commercial switches based on its own "Seacliff Trail" physical switch and "Sunrise Trail" virtual switch or if it has partnered with one of its customers that is using Intel's chippery to make the switches.

Intel wants to dice and slice and automate switches as the industry has done with servers

Two years ago, Intel set its sights on data center networking, and acquired the Fulcrum Microsystems Ethernet chip business and the QLogic InfiniBand chip, switch, and adapter business to be a player in those two spaces.

To a lesser degree, Chipzilla bought the "Aries" Dragonfly interconnect from Cray to have something to peddle up in the supercomputing stratosphere where machines scale to hundreds of racks. But Aries is not immediately applicable to the data center, even if it does present some interesting possibilities.

Intel also has aspirations in networking outside of the data center, said Bryant. Specifically, Intel wants to get its Atom and Xeon processors into base stations on the edge of telco networks so those base stations can run applications and cache data, among other things.

"For years, we have talked about telco as being the dumb pipe, and this fundamentally eliminates that concern at the wireless edge," said Bryant. "This just changes the game."
