Big Blue dons oven mitts for ARM wafer bake
Samsung chip pact too
IBM Microelectronics – the chip-designing and wafer-baking division of the IT giant – has inked a deal with ARM Holdings to help make sure those who license ARM chips have the processes and fabs to make them.
ARM Holdings has been collaborating with IBM Microelectronics since 2008, and it has used IBM's 32 nanometer and 28 nanometer process nodes to bake 11 test chips for licensees of the Cortex family of ARM chips. More recently, ARM has used IBM's 32 nanometer high-K metal gate process to etch a complete dual-core Cortex-A9 chip. ARM is using IBM's chip technologies to help its licensees make better system-on-a-chip (SoC) designs, which are used in smartphones, tablets, and other mobile computing devices.
Neither company said anything about the potential of creating future ARM designs for servers, but if you want to partner with a fab to build server chips that will go up against Intel, IBM is one of the obvious choices. The embedded DRAM for on-chip cache – first deployed on IBM's Power7 and z11 chips and implemented in IBM's 32 nanometer copper process (called Cu-32) – was demonstrated on ARM SoC designs last November. eDRAM takes about 60 per cent less space on a chip than static RAM and consumes about 90 per cent less power. It is also a little slower, so you need more of it and you need to weave it close to the cores, as IBM has done with its own chips.
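To see what those two figures buy you in combination, here is a back-of-envelope sketch in Python. Only the 60 per cent area and 90 per cent power ratios come from the article; the SRAM baseline density, power, and die budget numbers are invented placeholders for illustration.

    # Back-of-envelope cache comparison using the ratios quoted above:
    # eDRAM takes ~60% less area and ~90% less power than SRAM.
    # The SRAM baseline figures are assumptions, not real chip data.
    sram_area_per_mb = 1.0    # mm^2 per MB of SRAM (assumed baseline)
    sram_power_per_mb = 0.5   # watts per MB of SRAM (assumed baseline)

    edram_area_per_mb = sram_area_per_mb * 0.4    # 60 per cent less space
    edram_power_per_mb = sram_power_per_mb * 0.1  # 90 per cent less power

    cache_budget_mm2 = 32.0   # assumed die area set aside for cache
    sram_mb = cache_budget_mm2 / sram_area_per_mb
    edram_mb = cache_budget_mm2 / edram_area_per_mb

    print("SRAM:  %3.0f MB at %4.1f W" % (sram_mb, sram_mb * sram_power_per_mb))
    print("eDRAM: %3.0f MB at %4.1f W" % (edram_mb, edram_mb * edram_power_per_mb))

Under those assumptions the same slice of silicon holds two and a half times the cache at a quarter of the power. The catch, as noted, is latency: the extra capacity only pays off if it is woven close to the cores.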
The collaboration agreement that ARM and IBM have extended today calls for the two companies to cooperate on chip manufacturing processes ranging from 20 nanometers down to 14 nanometers.
In a separate announcement, IBM and Samsung Electronics, a maker of ARM processors for netbooks and other devices, have extended their own joint development agreement. The two companies will be working together on basic chip research at the Albany Nanotech Complex in the New York state capital. The agreement covers research into semiconductor materials, wafer-making processes, and other technologies relating to 20 nanometer and smaller process nodes.
IBM, Samsung, and GlobalFoundries are expected to release more information this week about their "common platform" chip making processes. Back in June, these three partners and Synopsys delivered 32 nanometer and 28 nanometer high-K metal gate processes. STMicroelectronics said last year that it would be picking up the team's 28 nanometer bulk CMOS and high-K metal gate processes for its own fabs.
Synopsys sells a chip design platform called Lynx and an intellectual property integration system called DesignWare IP, which together allow ARM licensees to implement and tweak the designs to suit their needs while staying within the chip making processes outlined by IBM, Samsung, and GlobalFoundries. ®
ARM at 28nm?
The current ARM chip in my Samsung Epic is a Hummingbird at 45nm. It has the Imagination Technologies SGX540 GPU (with GPGPU capabilities) with four graphical pipelines. It runs Unreal Engine 3, a full-scale immersive 3D engine, and the video in games like Dungeon Defenders is not to be believed on a phone. It's only a few months old.
28nm brings the same experience at less than half the watts, or photorealistic gaming on a tablet with six times the cores. Think Crysis - on a tablet or phone, all day on a long flight without recharging. Three days of HD video on a slate before the battery runs out, or Citrix if you prefer that.
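The rough arithmetic behind the "half the watts" claim, as a sketch: it assumes an ideal die shrink where dynamic power tracks switched capacitance (and hence area) at constant voltage and clock speed; leakage and real process details will erode this.

    # Idealized die-shrink arithmetic: dynamic power ~ C * V^2 * f, and
    # switched capacitance roughly tracks area on a straight shrink.
    # Ignores leakage and voltage limits; an optimistic simplification.
    old_nm, new_nm = 45.0, 28.0
    area_ratio = (new_nm / old_nm) ** 2  # about 0.39

    print("Same design at 28nm: ~%.0f%% of the area and dynamic power" % (area_ratio * 100))
    print("Same die area at 28nm: ~%.1fx the transistors" % (1.0 / area_ratio))

That gives roughly 40 per cent of the watts for the same design, or about 2.6 times the transistors in the same die area; anything beyond that has to come from architecture rather than the shrink alone.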
If they can do this in mobile, they can do photorealistic ray-tracing rather than compositing in high-power desktops.
The third world? They've been wanting to join us online and add their value to the Net, but they don't have Watts. This would do it for them.
And they're embedding this in TVs? Vizio sounds like they're playing that game. A whole bunch of stuff is about to change. This is big. Huge. Holy cow, the mobile revolution is upon us. The times, they are a-changing.
"video decoding (Xilinx Spartan ...)
I'm familiar enough with Spartan (and Virtex too) thank you. But back to video decoding.
If the processor is fast enough, you do the decode in software, regardless.
If the processor is not fast enough and the volume is worthwhile you put the decode in an ASIC, ideally in the SoC. If the volume is not worthwhile for ASIC/SoC, or the ASIC/SoC is not flexible enough, then it's FPGA time, and that applies whether the application is video decoding or any other readily hardware-acceleratable application.
FPGAs are inevitably in a niche in the middle, squeezed every time processors get faster and every time ASIC/SoCs get cheaper. FPGAs are amazing technology, amazing power for the price, and there are some places where they are an obvious fit. But not, I submit, in mass market consumer electronics manufacture. Prototyping, or moderate-volume manufacture where the one-off costs of going ASIC are not so easily recoverable, is a different thing.
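That middle-ground squeeze is really just one-off versus per-unit cost arithmetic. A minimal sketch of the break-even point (every figure below is an invented placeholder, not a quoted price):

    # Break-even volume between an FPGA and an ASIC implementation.
    # ASIC: large one-off NRE cost, cheap per unit at volume.
    # FPGA: negligible NRE, expensive per unit.
    # All figures are illustrative assumptions, not real pricing.
    asic_nre = 1e6      # assumed masks/tooling/verification cost
    asic_unit = 5.0     # assumed per-chip cost at volume
    fpga_unit = 150.0   # assumed per-device FPGA cost

    # ASIC wins once asic_nre + asic_unit * n < fpga_unit * n
    breakeven = asic_nre / (fpga_unit - asic_unit)
    print("ASIC cheaper beyond ~%.0f units" % breakeven)  # ~6900 here

Below that volume the FPGA wins; above it, the ASIC does. And if an existing processor can already do the job in software, the marginal cost is near zero, which is the squeeze from the other side.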
Just my 2c.
Have you seen the price of a truly worthwhile, truly fast FPGA?
Unless your application happens to fit conveniently on a (current) low-end FPGA, we're talking hundreds of pounds for a decent-sized, decent-speed FPGA before adding any bits needed to integrate it into your system (PCB, programming tools, whatever). If your app is a good fit for FPGA, FPGAs are great; otherwise today's processors are generally so fast that FPGAs simply aren't cost effective. Looks that way from here, anyway.