AMD trumpets next-gen GPU architecture

The road to the Holodeck

Fusion Summit AMD has trumpeted its next-generation GPU architecture, painting the design as a radical departure that has one foot in the graphics world and the other in what AMD, Microsoft, ARM, and others dub "heterogeneous computing".

Essentially, the new architecture is a parallel-processing throughput engine that can serve both graphics and compute tasks. For some time, AMD GPUs – formerly ATI-branded – have been based on multiple graphics engines with VLIW (very long instruction word) cores. Not so AMD's next-generation parts.

Speaking at the company's Fusion Developer Summit on Thursday, AMD graphics CTO Eric Demers described the new GPU as an MIMD (multiple-instruction-stream, multiple-data-stream) architecture with a SIMD (single-instruction-stream, multiple-data-stream) vector array. "There are four wavefronts, every cycle, executing on the vector and scalar units. And these can come from four completely different applications or from the same application," he explained.

"And then there's up to 40 wavefronts living in a CU [compute unit], that any four of which can run at any cycle, so its sorta got SMT [simultaneous multi-threading] properties."

But he doesn't have a good name for it. "The reality is that it's leveraging all that goodness from all those different architectures, and to put one perfect label on it would not be fair," he said.

AMD's goal is to blur the line between the data which CPUs and GPUs are munching on. "Our plan is that ... eventually all these devices – whether they're CPUs or GPUs – are in the same unified 64-bit address space."
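
A few lines of code show what that shared address space would buy: a pointer-based structure built by the CPU could be handed straight to GPU code, with no staging copy and no pointer translation. The sum_list_on_gpu function below is a hypothetical stand-in for a kernel launch on such hardware, not a real AMD or HSA call.

    // Illustration of what a unified 64-bit address space buys: a pointer-chasing
    // structure built by the CPU stays valid when handed to GPU code, with no
    // staging copy or pointer patching.
    struct Node {
        float value;
        Node* next;  // an ordinary 64-bit pointer, meaningful to CPU and GPU alike
    };

    // Hypothetical stand-in for device-side code: with unified addressing, the
    // same pointer-walking loop could execute on the GPU without translation.
    float sum_list_on_gpu(const Node* head) {
        float total = 0.0f;
        for (const Node* n = head; n != nullptr; n = n->next)
            total += n->value;  // CPU-built pointers dereferenced unchanged
        return total;
    }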

Although the first parts based on the new architecture should appear by the end of this year, Demers laid out a series of capabilities that AMD plans to roll out "incrementally" between those first new-architecture GPUs and 2014: GPU support for C++ and other "high-level constructs"; a virtual address space; support for page faults; memory coherence at the L2 level, shared among the CUs and between the CPU and GPU; and the ability to save and reload the device state.

This last ability, Demers said, will make context switching "much, much easier", and although some fixed-function elements in the pipe will require some work, "fundamentally this core can support and will support context switching and preemption."

These capabilities aren't limited to discrete graphics. "I'm not talking about APU, I'm not talking about GPU, I'm talking about an IP of a core that's going to be used in all our products going forward," he said. "Over the next few years we're going to be bringing you all of this throughout all our products that have GPU cores."

Demers added that the new architecture won't require apps to be rewritten to take advantage of it. "Almost without exception, everything runs the same or faster," he said. "There are going to be cases, particularly on the compute side and more so on the graphics side, where this really gives you a fourfold jump."

But he aims to provide more than speed. A lot more. "I want to create realities that you can't tell that you're not looking through a window," he said. "In fact, I'd rather that you can't tell you're not inside my reality."

AMD's next-generation graphics architecture, he contends, is one step on what he called "the road to the Holodeck." It's part of the continued progression from the fixed-function, graphics-only GPUs of the mid-1990s to the simple shaders of 2002 to 2006, and on to the introduction of parallel-core, unified shader architectures of 2007 and later.

His point in this historical review wasn't mere misty-eyed reminiscence. He was leading his audience from GPUs' graphics-only past to their increasingly compute-supportive role in what AMD envisions as the heterogeneous-computing future, in which GPUs are equal partners with CPUs and specialized cores. ®
