Nvidia's 2015 Tegra ARM chip promises '100X' speed-up
First 'Logan' marries CUDA, then 'Parker' moves to 64-bit Denver
GTC 2013 Nvidia has fleshed out details about its next Tegra mobile processor, code-named Logan, and revealed that its long-running 64-bit ARM "Project Denver" effort will yield its first fruit from Logan's follow-on, code-named Parker.
"Logan has something we've been dying to bring to the world for so long," Nvidia cofounder, president, and CEO Jen-Hsun Huang told his keynote audience at the GPU Technology Conference in San José, California, on Tuesday. "Logan incorporates for the first time our most-advanced GPU. It's the world's first mobile processor with CUDA."
CUDA, for you Reg readers who haven't been following the meteoric rise in GPU computing efforts over the past half-decade or so, is Nvidia's parallel-computing platform and programming model, which allows developers to employ the GPU's powers for more than mere graphics and image processing.
Before Logan, CUDA programmers had to apply their skills to Nvidia's discrete GPUs, such as its current Kepler line. When Logan appears in mobile devices – and, for that matter, when it makes its way into ARM-based servers – they'll be able to extend their chops to low-power Tegra processors.
"You're not the only one who's been holding your breath, I'll tell you," Huang told his audience of CUDA developers.
Logan will include a Kepler-class GPU, and will be CUDA 5 and OpenGL 4.3 compliant "out of the box," said Huang. "It does everything a modern computer ought to do," he said, and promised that Logan will make its debut "this year," and that it will be in full production in early 2014.
Logan was first discussed in public at the Mobile World Congress in 2011, and the 64-bit Project Denver ARM effort was also officially announced that year, at the Consumer Electronics Show in January. At the time, Huang described Project Denver as one of the most important announcements Nvidia had made in its history.
Parker will have 100 times the performance of the Tegra 2 – whatever 'performance' actually means, that is
The results of that project will first hit the market in Logan's successor, Parker. "I know you've been waiting," Huang said. "So have I." Although he mentioned no specific date for Parker's release, its icon stood directly above 2015 in the roadmap slide displayed during Huang's keynote – but as anyone who has followed processor developments over the years knows, roadmap slides can sometimes be, shall we say, aspirational.
In addition to 64-bit support, Parker – likely Tegra 6 – will be baked in a 3D FinFET process that should lower its power requirements and raise its performance, and will incorporate Nvidia's next-generation Maxwell GPU.
That GPU, due to be released next year, will be distinguished by a unified virtual memory architecture that will allow it to share memory with Parker's CPU cores. "All memory will be visible to all the processors," Huang told the assembled developers. "That'll just make it a lot easier for you to program."
Between the release of the Tegra 2* in 2011 and Parker's appearance, Huang said that Tegra's performance will have increased 100 times – although exactly what performance metric he was referring to, he didn't say.
Referring to that 100X improvement, Huang contrasted the pace of GPU development with that of CPUs. "Now, Moore's law would suggest about eight," he said. "Well, that's a perfect example of disruptive technology. It happened back in the good old days when the PC industry came along – the rate of innovation in the industry was really quite staggering, quite staggering."
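The arithmetic behind Huang's contrast is easy to check. A quick sketch (assuming the conventional Moore's-law doubling periods of roughly 18 to 24 months – the article doesn't say which one Huang used to get his "about eight"):

```python
def moore_factor(years, doubling_period_years):
    """Performance multiple Moore's law would predict over a span of years."""
    return 2 ** (years / doubling_period_years)

span = 2015 - 2011  # four years between the Tegra 2 and Parker's slot on the roadmap

# With a two-year doubling period, Moore's law predicts only ~4x...
print(round(moore_factor(span, 2.0), 1))
# ...and with an 18-month period, ~6.3x - either way, in the same
# ballpark as Huang's "about eight", and far short of the claimed 100x.
print(round(moore_factor(span, 1.5), 1))
```

Whatever doubling period one assumes, the claimed 100X is an order of magnitude beyond transistor scaling alone – which is Huang's point about GPUs being "disruptive."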
And there's more to come beyond Parker, he promised. "Because parallel computing is still in its nascent stage," he said, "there's so many new ideas that we can still incorporate architecturally to improve its performance. There's still a lot of learning ahead of us." ®
* Being a sharp-eyed, perspicacious Reg reader, you no doubt noticed that Huang's keynote slide and Tegra discussion began with the Tegra 2. "Our first Tegra we won't say anything about because it didn't turn out that well," he explained. "We were just learning."