Forget terabit Ethernet, the next step is 400 gig – if we can afford the R&D

Don't worry, Asian OEMs will pop for it, as long as they get all the jobs

SC13 – Updated The current top-end Ethernet standard may be 100 gigabits per second, but don't expect the next step up to be one terabit per second. The days of Ethernet speeds improving by an order of magnitude are gone. Why? Because nobody wants to pay for the necessary research and development – nobody in the US or the EU, that is.

"My view is that 10X Ethernet increases are not going to happen anymore," said Chris Cole, engineering director at Finisar, but representing just himself during a discussion session at the SC13 supercomputing conference in Denver, Colorado, on Thursday.

"One of the big reason that you had 10X increases was that the telecom guys paid for the technology development," Cole said. "Those were the days when AT&T and the carriers in other countries had big checkbooks and they could fund things like Bell Labs. So they paid for some very advanced technology which then allowed Ethernet catch on. Those days are over."

The new advancement rate will likely be 4X, not only for Ethernet but also for the core optical transport rates set by the ITU-T. Some industry types are still holding out for a 10X jump to terabit Ethernet, but the odds are heavily stacked in favour of the next big jump in Ethernet speeds being to 400Gb/s – aka 400 gigabit Ethernet, or 400GbE.

Just last week, in fact, the IEEE's "400 Gb/s Ethernet Study Group" approved objectives [PDF] for four different link distances of 400GbE, beginning the long and complex process that will eventually result in a full, certified specification, probably around 2017.

The IEEE project number for the 400GbE effort is – no tittering, please – 802.3bs.

Cole doesn't believe that terabit Ethernet is in the cards. "My crystal ball says that the next Ethernet rate after 400G will be 1.6 terabits." Arguing against the possibility of a jump from 400GbE to 1TbE, he doesn't see the economics working in 1TbE's favor.

"That's only a 2.5X increase," he said, "but everyone agrees that to get to a terabit will require completely new technologies, and 2.5X just doesn't feel like a bit enough step to justify a huge R&D investment."

There's that almighty dollar again.

Nobody wants to pony up the hefty amount of cash required for the move to 1.6TbE. As Cole noted, the telecoms no longer want to pay for R&D – they want the suppliers to front the money.

"Of course the supplier base has no money," he said, "so we keep searching for the guys with money in their pocket." He then told his audience of HPC pros that "at a meeting of the IEEE last week, there was no one from the HPC community there, and so there were a number of suggestions that maybe HPC should pay for the next generation of technology – at least in some quarters you're perceived as being wealthy."

Interestingly – and somewhat worryingly – Cole told The Reg that the R&D funding problem is most severe in the US and EU. "In Asia you see the large system OEMs funding R&D extensively. In the US the suppliers actually want to fund it, they just don't have the money."

From his point of view, the market is still in bad enough shape that customers have sufficient leverage over suppliers to depress prices. As a result, margins are squeezed to such a degree that at the end of the day there's not enough cash left over for suppliers to fund R&D.

China, on the other hand, is shoveling money into R&D, he said. "It's good news, bad news. The good news is there is some of the advanced work being done, and the bad news is that the jobs will probably shift over time to Asia."

By then, those Asian researchers will likely be able to transfer their data sets back to their fellows in the US and EU over 400GbE links – if, of course, there are any Ethernet engineers left outside Asia. ®

Update

In the original version of this article, Chris Cole's statements were inadvertently attributed to a different member of the discussion panel.
