
Intel HPC plans need exascale I/O and storage

Balancing HPC compute, networking and storage

ISC'11 Intel set itself an exascale computing target at ISC Hamburg, with radical implications for HPC networking and storage.

Kirk Skaugen, VP of Intel's architecture group, said: "We're going to build computers 100 times more capable than they are today… moving to exaflop computing."

By 2018, he said, Intel will provide 125X the performance of today's processors. Moore's Law will provide a 25X gain, with the remaining factor of 5X coming from the Many Integrated Core (MIC) technology (25 × 5 = 125). Since parallel speed-ups carry overheads, "We'll need more than 5X to give us a net 5X."
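
As a back-of-the-envelope check of that arithmetic, here is a minimal sketch; the parallel-efficiency figure is our illustrative assumption, not a number Intel gave:

```python
# Sketch of the 2018 scaling claim: a 25X gain from Moore's Law
# compounded with a net 5X from MIC gives the quoted 125X.
moores_law_gain = 25
net_mic_gain = 5
print(moores_law_gain * net_mic_gain)      # 125

# "We'll need more than 5X to give us a net 5X": with imperfect
# parallel efficiency, the raw MIC speed-up must exceed the net one.
parallel_efficiency = 0.8                  # assumed, for illustration only
print(net_mic_gain / parallel_efficiency)  # 6.25 -> 6.25X raw for a net 5X
```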

According to Skaugen, exascale computing will enable things like real-time delivery of CT scans on the hospital operating table, and much better forecasting of where hurricanes will make landfall in the Gulf of Mexico. He is convinced that Intel, with its 22nm three-dimensional tri-gate transistors, continuing process shrinks and MIC architecture, is on track to bring us exascale computing.

Today's HPC systems have entered the petaflop era, with quad data rate (QDR) InfiniBand or 10GbitE networks connecting hundreds or thousands of HPC processors to storage arrays that have entered the petabyte level.

Skaugen committed Intel to providing 100 times the performance of today's computers with just twice the electrical power draw, and using today's software programming model. Intel is developing new optimisation tools to work on compiled code and optimise it for the coming MIC processors. Skaugen claimed: "One programming model [will] democratise usage and avoid costly detours."
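
Taken at face value, those two numbers pin down the efficiency target. A minimal sketch of the arithmetic, in which the present-day power figure is our assumption rather than one Intel quoted:

```python
# 100X the performance at only 2X the power implies a 50X improvement
# in performance per watt.
performance_gain = 100
power_growth = 2
print(performance_gain / power_growth)  # 50.0x perf/watt

# Rough scale check, assuming a petaflop-class machine drawing about
# 10 MW today (an illustrative figure): doubling it gives a ~20 MW
# exascale power budget.
todays_power_mw = 10
print(todays_power_mw * power_growth)   # 20 MW
```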

A leap to the exaflop level will require similar leaps in networking I/O performance, and in storage capacity and latency, to keep the thousands of processor cores busy and provide a balanced HPC system. It seems clear that today's QDR InfiniBand and petascale spinning disk arrays will be insufficient for exascale computing.

InfiniBand will progress from QDR (40Gbit/s) through FDR (56Gbit/s) to EDR (80-100Gbit/s) to deliver the networking bandwidth needed. Ethernet will need to pass through 40Gbit/s to 100Gbit/s to match this.
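
A minimal sketch of how those headline 4x-link figures decompose into per-lane signalling rate and encoding overhead; the EDR lane rate used here is the roughly 26Gbit/s figure at the top of the quoted range:

```python
# Effective data rate of a 4x InfiniBand link: lanes x per-lane
# signalling rate x encoding efficiency. QDR uses 8b/10b encoding;
# FDR and EDR use the more efficient 64b/66b.
links = {
    #       (lanes, Gbit/s per lane, encoding efficiency)
    "QDR": (4, 10.0,     8 / 10),
    "FDR": (4, 14.0625, 64 / 66),
    "EDR": (4, 25.78125, 64 / 66),
}

for name, (lanes, lane_rate, efficiency) in links.items():
    print(f"{name}: {lanes * lane_rate * efficiency:.1f} Gbit/s effective")
# QDR: 32.0 Gbit/s, FDR: 54.5 Gbit/s, EDR: 100.0 Gbit/s
```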

Skaugen said storage latency and IOPS will become key. Dr Eng Lim Goh, SGI's chief technology officer, said we will probably move to solid state drives (SSDs) for the first tier of HPC storage, with disk a tier behind. SGI had already built a 1 million IOPS system with Intel servers and flash storage, though not PCIe flash.
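
The IOPS argument is easy to see with rough per-device numbers; the figures below are assumed values for illustration, not SGI's or Intel's:

```python
import math

# Why flash takes the first tier: device counts needed to reach
# 1 million IOPS at typical small-random-read rates per device.
TARGET_IOPS = 1_000_000
device_iops = {
    "15k rpm disk": 180,     # assumed, for illustration
    "SATA SSD": 30_000,      # assumed, for illustration
}

for name, iops in device_iops.items():
    print(f"{name}: {math.ceil(TARGET_IOPS / iops)} devices")
# 15k rpm disk: 5556 devices; SATA SSD: 34 devices
```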

Skaugen, acknowledging Intel flash partner Micron's PCIe flash product, said: "We could see PCIe flash from Intel in the future."

This level of compute power in HPC applications could cement the use of InfiniBand and banish spinning disk from the front line of HPC storage.

Almost as an aside, Intel said it will have an 80 per cent share of high-volume storage array controllers by the end of this year. ®
