Nvidia's Tesla P100 has 15 billion transistors, 21TFLOPS

Nvidia has revealed its Pascal Tesla P100 GPU, described as the largest FinFET chip on the market. It is in volume production today.

It features 16nm FinFETs: 15 billion transistors on a 600mm² die, paired with 16GB of HBM2 memory and the NVLink interconnect. Counting the stacked HBM2 memory dies as well, that 15 billion figure balloons to 150 billion transistors.

Peak performance, according to Nvidia: 5.3TFLOPS at 64-bit (double) precision, 10.6TFLOPS at 32-bit (single), and 21.2TFLOPS at 16-bit (half). In other words, each halving of precision doubles the throughput.
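The FP32 figure also follows from the chip's core count and boost clock, since each CUDA core can retire one fused multiply-add (two floating-point ops) per cycle. A minimal back-of-the-envelope sketch, assuming the 3,584 cores and ~1,480MHz boost clock from Nvidia's published P100 specs (figures not given in this article):

```python
# Back-of-the-envelope peak-FLOPS arithmetic for the Tesla P100.
# Core count and boost clock are assumptions taken from Nvidia's
# published P100 spec sheet, not from the article above.
CUDA_CORES = 3584          # FP32 cores (assumed)
BOOST_CLOCK_HZ = 1.48e9    # ~1480 MHz boost clock (assumed)
FLOPS_PER_FMA = 2          # one fused multiply-add = 2 floating-point ops

fp32_tflops = CUDA_CORES * BOOST_CLOCK_HZ * FLOPS_PER_FMA / 1e12
fp64_tflops = fp32_tflops / 2   # half as many FP64 units as FP32 cores
fp16_tflops = fp32_tflops * 2   # packed half2 math doubles the FP16 rate

print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ≈ 5.3
print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ≈ 10.6
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ≈ 21.2
```

The 2× FP16 rate comes from packing two half-precision values into each 32-bit register and operating on both per instruction, which is why the half-precision number is exactly double the single-precision one.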

It has 4MB of L2 cache and a 14MB register file spread across its streaming multiprocessors. It is targeted at hyperscale data center workloads crunching deep-learning AI and HPC apps. Servers with the chips are due in Q1 2017; Nvidia's DGX-1 supercomputer-in-a-box, built around the part, is due out in June; and cloud providers will offer the hardware as an online service this year. ®
