Top CompSci boffins name the architectures we'll need in 2030

Number one: Make designing special-purpose hardware as easy as writing software

The International Symposium on Computer Architecture has revealed the five architectural challenges it thinks computer science needs to solve to meet the demands of the year 2030.

Their recommendations, distilled from the Architecture 2030 Workshop at June's ISCA in Korea and available here, draw on the contributions of speakers from several universities, the IEEE's Rebooting Computing Initiative and International Roadmap for Devices and Systems, and an industry survey.

The resulting document, Arch2030: A Vision of Computer Architecture Research over the Next 15 Years, starts by saying we currently have a “specialization gap”. Computing has improved in recent decades, the authors say, because we've coasted on Moore's Law. To keep up with the demands of future workloads, “Developing hardware must become as easy, inexpensive, and agile as developing software.”

Next comes a call for “The Cloud as an Abstraction for Architecture Innovation”. Translated, this means researchers should go to town on cloud providers' best bits – machine-learning-optimised CPUs, FPGAs, GPUs in large numbers – to create otherwise unimaginable architectures. The authors also say researchers must redouble efforts to virtualise those architectures so they can span different clouds.

3D integration in silicon, “shortening interconnects by routing in three dimensions, and facilitating the tight integration of heterogeneous manufacturing technologies” is also recommended. If it can be pulled off, we'll get “greater energy efficiency, higher bandwidth, and lower latency between system components inside the 3D structure.” Which sounds lovely.

As does a call for architectures “Closer to Physics”, a phrase used to call for devices built from new materials, or techniques like quantum computing, that emphasise analog processing of data instead of today's approach of forcing everything through digital abstractions.

Processors assembled from carbon nanotubes, the document says, “promise greater density and lower power and can also be used in 3D substrates.”

Lastly, the document identifies machine learning (ML) as 2030's most in-demand workload and offers the following observation about how to deliver it:

“While the current focus is on supporting ML in the Cloud, significant opportunities exist to support ML applications in low-power devices, such as smartphones or ultralow power sensor nodes. Luckily, many ML kernels have relatively regular structures and are amenable to accuracy-resource trade-offs; hence, they lend themselves to hardware specialization, reconfiguration, and approximation techniques, opening up a significant space for architectural innovation.”
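For readers wondering what an “accuracy-resource trade-off” looks like in practice, here is a minimal sketch in plain Python (not from the report, and no real hardware involved): it quantises an ML kernel's weights down to 8-bit integers, as a specialised low-power accelerator might, and shows that the dot-product result barely moves.

```python
# Sketch of an accuracy-resource trade-off: replace full-precision weights
# with 8-bit integers plus a scale factor, then compare kernel outputs.
# Numbers here are made up for illustration.

def quantize(weights, bits=8):
    """Map floats onto signed integers of the given width, plus a scale factor."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit signed
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

weights = [0.12, -0.53, 0.91, -0.07, 0.33]
inputs  = [1.0, 2.0, -1.5, 0.5, -2.0]

q_weights, scale = quantize(weights)
exact  = dot(weights, inputs)            # full-precision result
approx = dot(q_weights, inputs) * scale  # integer arithmetic, rescaled

print(exact, approx, abs(exact - approx))  # error is a fraction of a percent
```

Eight-bit multipliers are far smaller and thirstier-on-nothing than floating-point units, which is exactly why the regular, error-tolerant structure of ML kernels opens the door to hardware specialisation.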

The Register looks forward to the day when we can see all this in action in Dell HPEMC's new ultra-hyper-converged meta-infrastructure running VMware's XenSphere. ®
