Original URL: https://www.theregister.com/2009/09/14/bechtolsheim_hpc_wall_street/

Bechtolsheim: The server is not the network

Whatever Cisco says

By Timothy Prickett Morgan

Posted in Channel, 14th September 2009 06:44 GMT

HPC on Wall Street Andy Bechtolsheim knows a thing or two about servers, storage, and networking. He co-founded workstation and server maker Sun Microsystems as well as two networking companies: one that he sold to Cisco, where it became the basis of its Gigabit Ethernet biz, and another that he recently started and runs while still working one day a week at Sun.

And when he gives one of the keynote addresses at today's HPC on Wall Street event in New York, one of the themes will be that there is lots of room for innovation among stand-alone network equipment providers.

Prior to the event, Bechtolsheim took some time to yack on the phone with El Reg about his company, Arista Networks, which is carving out a niche for itself as a supplier of 10 Gigabit Ethernet products that have the low latency and low cost that supercomputer and other high-performance computing shops demand - particularly the financial services firms running trading and market data systems where every microsecond counts.

Bechtolsheim has a longer and deeper history in networking than he does in servers at this point, and he's well aware of how the two play off each other in various computing environments. He was working as a PhD student on a project to integrate networking interfaces with microprocessors while at Stanford University when he was tapped by Sun's other co-founders, Scott McNealy and Vinod Khosla, to be the upstart's first chief technology officer.

He stuck around Sun as CTO until 1995, when he left to start a company called Granite Systems, which made Gigabit Ethernet switches and which was flipped a little more than a year later when Bechtolsheim sold it to Cisco Systems. The Granite Systems products eventually evolved into the Catalyst 4000 series of switches at Cisco, and Bechtolsheim was the general manager of this product line at Cisco.

In 2001, Bechtolsheim caught the entrepreneurial bug again, and he saw the InfiniBand switched fabric as the next place to make some money and have a technological impact. And so he founded another startup called Kealia, which created the "Galaxy" line of Opteron-based blade servers, the "Magnum" monster InfiniBand switch, and the "Thumper" X4500 storage servers that came to market individually after Sun bought Kealia in early 2004 and made Bechtolsheim CTO of its server biz. The original vision that Kealia had, of course, was for an integrated blade and storage platform with an InfiniBand backbone, something Sun is selling as the Constellation System to HPC and media streaming customers.

While Sun was busily peddling the Constellation System and open sourcing its entire software stack, including Solaris and Java, Bechtolsheim caught the entrepreneurial bug yet again, and this time was smitten by 10 Gigabit Ethernet. As El Reg reported last October when Arista came out of startup mode, while Bechtolsheim was still at Sun - and with its permission - he started up a 10GE switching company originally called Arastra. Last fall, the company changed its name to Arista Networks just as he hired Jayshree Ullal, formerly senior vice president of the datacenter, switching, and services group at Cisco, to be president and chief executive officer with Bechtolsheim reprising his role as CTO and founder.

10GE goes mainstream

What has Bechtolsheim fired up about 10GE is that it is starting to go mainstream. Even with the generic networking business expected to see an annual revenue decline on the order of 20 per cent in 2009, according to Bechtolsheim, 10GE ports attached to servers are poised for growth: about 400,000 shipped in 2008 (against new server shipments of around 8 million globally), and Bechtolsheim predicts that number will more than double each year over the next three years, so that over 4 million 10GE server ports will ship in 2011 alone - and, if you do the math, about 7.5 million 10GE server ports will be installed by 2011. About 2 per cent of all servers in the installed base had 10GE ports in 2008, which will rise to about 5 per cent this year, hit maybe 10 per cent in 2010, and reach about 25 per cent or more in 2011.
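The arithmetic behind that forecast can be sketched in a few lines. The 2.2x annual growth factor here is an assumption chosen to match the article's round numbers ("more than double each year"), not a figure Bechtolsheim quoted:

```python
# Sketch of the port-growth arithmetic in the article. The 2.2x annual
# growth factor is an assumption picked to fit the quoted round numbers.
growth = 2.2                      # "more than double each year"
shipped = {2008: 400_000}         # ~400,000 10GE server ports in 2008
for year in range(2009, 2012):
    shipped[year] = int(shipped[year - 1] * growth)

installed_by_2011 = sum(shipped.values())
print(shipped[2011])       # over 4 million ports shipped in 2011
print(installed_by_2011)   # roughly 7.5 million ports installed by 2011
```

With that growth rate, 2011 shipments land just above 4 million and the cumulative install base just under 7.5 million, consistent with the figures Bechtolsheim cites.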

"It is easier to forecast 10GE ports than revenues," says Bechtolsheim. "This is one of the few areas in IT with predictable growth."

Part of the reason is that, starting this year, some server motherboards will come with integrated 10GE network interfaces, and that will change the economics of 10GE networking, much as motherboard integration did for 10 Mbit, 100 Mbit, and Gigabit Ethernet networking over the past decade. And this has to do with money as much as it does with integration. Right now, the server side of a Gigabit Ethernet connection is essentially free, and a Gigabit Ethernet switch costs maybe $150 per port, on average. With the low prices that Bechtolsheim is bragging that he can deliver with Arista's switches, he can get a 10GE switch into the field for around $500 per port.

But the 10GE adapter on the server side also costs around $500 right now, which makes a 10GE port cost $1,000 a pop when you look at both the switch and the server side. This is a premium some customers - like supercomputing labs, government agencies, hyperscale Web sites, and financial services firms - will pay because of the low latency. But Bechtolsheim says that 10GE will not become a no-brainer for the entire IT industry until 10GE NICs are integrated on servers and the per-port costs drop to around $250 including both the server and the switch.
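Put the article's round numbers side by side and the economics are stark. These are the piece's own approximations, not vendor list prices:

```python
# Per-port economics from the article: today versus the "no-brainer"
# target. All figures are the article's round numbers, not list prices.
gige_per_port   = 0 + 150     # server side free + ~$150/port for the switch
tenge_today     = 500 + 500   # ~$500 NIC + ~$500 switch port = $1,000 total
tenge_target    = 250         # combined cost once NICs are on the motherboard

premium_today  = tenge_today / gige_per_port    # ~6.7x Gigabit Ethernet
premium_target = tenge_target / gige_per_port   # ~1.7x - small enough to switch
print(round(premium_today, 1), round(premium_target, 1))
```

At a 6.7x premium only latency-sensitive shops pay up; at 1.7x, by Bechtolsheim's reasoning, the gap stops mattering.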

"At that point, the gap is so small that companies will switch to 10 Gigabit Ethernet even if the extra performance and low latency is not needed," says Bechtolsheim.

And Arista has every intention of getting its share of that 10GE market. The company started shipping its first switch - a 1U, rack-mounted 10GE switch called the 7100 Series - almost a year ago. It offers 24 or 48 ports in a variety of configurations, delivering anywhere from 600 nanoseconds to 2.9 microseconds of latency and from 480 to 800 Gb/sec of throughput at layers 2 and 3 of the network. (The higher-bandwidth boxes have more latency, since there is no such thing as a free lunch.)

Bechtolsheim says the 7100 Series is really a 1U, dual-core x64 server with a 10GE networking ASIC and ports installed. Just as Bechtolsheim caught the x64 bug when he started Kealia, Arista is not interested in spending tens of millions of dollars developing its own silicon for handling 10GE networking functions. Rather, Arista has tapped Fulcrum Microsystems for its 10GE chip, which sits inside Arista's 1U box with a direct link to the x64 cores.

This, says Bechtolsheim, is a key differentiator for Arista's switches. The 7100 Series switch has an operating system called Extensible Operating System (EOS) and a virtualized implementation called vEOS that runs as a virtual appliance and that, like Cisco's Nexus 1000v virtual switch, integrates with VMware's vSphere hypervisor and virtual switch architecture. In this case, vEOS can run on the 7100 Series' x64 server and its hardened Linux operating system, not on an external blade.

Bechtolsheim also says that EOS and vEOS implement a stateless architecture that allows the switch, when it crashes, to restart itself exactly at the point where it crashed. This has the added bonus of allowing EOS to be patched while the switch is running, without resorting to redundant controllers that are patched in succession, as Cisco's high-end switches require.
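The general pattern behind that claim - keeping working state outside the process, so a replacement or freshly patched process resumes where the old one stopped - can be illustrated with a toy sketch. This is not Arista's EOS code, just an assumed illustration of the idea:

```python
# Toy illustration of the stateless-restart pattern described above: the
# agent's working state lives in an external store rather than in the
# process itself, so a freshly started (or freshly patched) agent picks
# up exactly where the last one stopped. Not Arista's actual EOS code.
state_store = {"items_done": 0}   # survives agent "crashes"

def agent(store, work_items, crash_after=None):
    """Process items, checkpointing progress to the external store."""
    for i, _ in enumerate(work_items[store["items_done"]:]):
        if crash_after is not None and i >= crash_after:
            raise RuntimeError("simulated crash")
        store["items_done"] += 1

work = list(range(10))
try:
    agent(state_store, work, crash_after=4)   # dies after 4 items
except RuntimeError:
    pass
agent(state_store, work)                      # restart resumes at item 5
print(state_store["items_done"])              # prints 10 - nothing redone
```

The restarted agent neither repeats finished work nor loses progress, which is the property Bechtolsheim is describing for in-service patching.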

By adding an x64 processor to the switch, Arista expects to be able to quickly add new software functions to the switch without requiring changes to the hardware, something that HPC customers are looking for. And Arista fully expects that third parties will create add-on software for its switches and is encouraging this.

Looking ahead, Bechtolsheim says that Arista will roll out a full line of modular switches and will put machines in the field that scale to higher port counts and lower latencies than any other modular switch on the market. "We will be competitive on all fronts," Bechtolsheim boasts.

Arista has over 100 employees today, including contract engineers, and it has over 150 customers worldwide, with about 80 per cent of them in the United States. About a third of its customers come from the financial services industry, and the strong uptake of its 7100 Series switches among banks and brokerages is one of the main reasons why Arista is opening an office in London to develop products and sell and support them in the City. (Arista has partnered with systems integrator CTC of Japan to take on the financial and other HPC sectors in the Asia/Pacific market).

Unified computing?

If you are thinking that Arista might be tempted to jump into the server racket, forget it. Yes, Bechtolsheim has a background in servers at Sun. There are his efforts to create a converged server and storage platform at Kealia that became the Constellation System. And Cisco is ramping up its "California" Unified Computing System. But Andy's not going there.

"Arista is not a server company," Bechtolsheim said with a certain amount of finality. "We want to partner with server companies."

This, of course, raises the question of why Cisco would be interested in jumping into the server business itself, at least partially alienating its server partners by offering a converged server and storage networking platform, putting Fibre Channel over Ethernet and converged network adapters inside homegrown server blades and racks. Cisco has covered its bases by offering free-standing unified fabric products that can work with anyone's servers, and it seems intent on maintaining the 70 per cent margins it has enjoyed historically with its networking products.

The company no doubt believes that it can offer a converged server-storage-network platform, with integrated management and virtualization, and still maintain margins, or it would not have launched the California boxes. Plenty of people are skeptical, most especially the x64 server makers who know that a 20 per cent margin is about all you can get these days out of volume boxes, with blades doing a little better because of account control and integration benefits.

"People are shifting IT spending to where it is most urgent," explains Bechtolsheim, adding that this is one reason why server spending is down this year. "The other effect is that servers are getting cheaper. We may be seeing the end of the utility function. In the past, when servers got more powerful, companies used to still buy more. Now, as servers get more powerful and less expensive, companies want to spend less. This is a very dramatic downshifting of server size and server price."

By the way, it is that 70 per cent margin that Cisco loves to command that has allowed other players, like Juniper Networks, Blade Network Technologies, Arista Networks, Voltaire, Brocade Communications, and a slew of other firms to jump into the market, find niches, and get their slice of the pie. The lack of margin in the server racket is what has compelled consolidation.

While Bechtolsheim has no doubt that FCoE and converged networking for servers and storage like Cisco is pitching will happen over the long run, he is skeptical about what Cisco is trying to do.

And while he would not admit it, part of that has to be Sun's experience with the Constellation System and its integrated InfiniBand switched fabric for servers and storage. Given the bandwidth and scalability of the servers and storage, you would think this would be the preferred box for all kinds of workloads. Not that the Constellations haven't landed some big deals and helped keep Sun in the game. But the silos where servers and storage sit are still real, and you have to cope with that.

"If you ask a server vendor, they will say servers are servers and networks are networks, let's not get confused," Bechtolsheim says with a laugh. "The network sale isn't really about running applications, but about network bandwidth and fixing it if it is broken. Combining these functions is not as obvious as Cisco's California makes it out to be. We cannot change something just by proclaiming that it is different. Storage managers are the most risk adverse people of anyone in the data center. And FCoE is in the early phase, and we are not aware of any production customers yet. Over time, customers will adopt it, but we are not expecting it to take the market by storm."

Bechtolsheim adds that the Nexus unified fabric switches - which put Fibre Channel storage protocols on a 10GE backbone so servers link to storage on the same box as they link to each other and the outside world - are significantly more expensive than buying 10GE switches and FC switches and running them side by side. (Cisco would no doubt argue with this statement, and has. And will continue to do so.) And it is academic anyway, since Arista's switches will be able to support FCoE and Converged Enhanced Ethernet (CEE) protocols as soon as the standards settle down a bit and customers start asking for them.

It may be that Cisco learns the hard way a lesson that Bechtolsheim learned with the Kealia/Constellation products. He says that in the HPC market, the interconnection fabric for the server nodes was typically sold as part of a cluster, and that InfiniBand has never been a separate, distinct sale. The Ethernet connectivity market has always been completely distinct, and corporate data centers are still thinking about servers and networks as separate domains - even when they use blade servers with switches integrated into the chassis.

There is, of course, a distinct possibility that Cisco is, in fact, correct about the convergence of servers, storage, and networks, that companies will want to have one vendor selling them an integrated system. In that case, Arista will either have to sell servers, closely ally itself with server makers, or get eaten by a server maker. It will be interesting to see what happens. ®