Scaling up to exascale
The Epiphany design is meant to scale to around 1GHz with 64 cores on a die while consuming around 25 milliwatts per core. The Epiphany-IV chip, which was previewed last October, is currently running at 800MHz and delivering that 50 gigaflops per watt.
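As a rough sanity check (our arithmetic, not Adapteva's): if you assume each Epiphany core retires two floating point operations per cycle (a fused multiply-add – an assumption, since the article doesn't state it), the quoted figures hang together:

```python
# Back-of-envelope check of the Epiphany-IV numbers quoted above.
# Assumption (not stated in the article): 2 flops per core per cycle (FMA).
cores = 64
clock_hz = 800e6
flops_per_cycle = 2

peak_gflops = cores * clock_hz * flops_per_cycle / 1e9   # 102.4 GFLOPS peak
watts_implied = peak_gflops / 50                          # at 50 GFLOPS/watt
mw_per_core = watts_implied * 1000 / cores                # ~32 mW per core

print(peak_gflops, watts_implied, round(mw_per_core, 1))
```

That lands at roughly 2 watts for the chip and ~32 milliwatts per core – the same ballpark as the ~25 milliwatts per core design figure.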
"It has the best energy efficiency in the world," brags Olofsson, "even better than the GPUs." Chips came back from the fab – Not Taiwan Semiconductor Manufacturing Corp, but rather the AMD foundry spinout, GlobalFoundries – in July of this year and the processors "are yielding well."
How the Epiphany chip stacks up to FPGAs, CPUs, GPUs, and DSPs
The Epiphany chip runs plain old ANSI C and C++, adheres to IEEE floating point math, and uses OpenCL interfaces to offload parallel processing from a CPU. Nothing crazy or special. When you marry it to an x86 or ARM core, it looks like a math coprocessor. A very tiny one, at only 10 square millimeters without its package on.
Or, roughly: 1/30th the size of a GPU, with 1/30th the floating point performance, but running at 1/70th the power of the GPU (as you can see from the table above) – which works out to better than twice the flops per watt. The benefit, says Olofsson, is that you get the programming efficiency of a DSP or CPU with better power efficiency than a GPU coprocessor provides.
And that is why Adapteva, which Olofsson funded out of his pocket for the first two years and then in the past two years with $2.35m in debt and Series A equity funding, thinks it can be a player in the race to exascale computing.
Adapteva's ambitious goals with the Epiphany chip
Looking ahead, Adapteva has set its sights on two different processors for the exascale timeframe in 2018. One is an entry coprocessor with 1,000 cores on a die that delivers 2 teraflops of performance in a 2 watt thermal envelope, and the other is a massive chip with 64,000 cores with 1MB of SRAM per core that can deliver 100 teraflops of floating point coprocessing at 100 watts. Both chips will deliver 1 teraflops per watt using 7 nanometer wafer baking processes, if all goes well.
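The two exascale-era targets are internally consistent, as a quick check of the figures in the paragraph above shows (the derived per-core and SRAM totals are our arithmetic, not Adapteva's):

```python
# Sanity check of the two 2018-era Epiphany targets quoted above.
entry_tflops, entry_watts = 2, 2        # entry coprocessor: 1,000 cores
big_tflops, big_watts = 100, 100        # big chip: 64,000 cores

assert entry_tflops / entry_watts == 1.0   # 1 teraflops per watt
assert big_tflops / big_watts == 1.0       # same efficiency target

# 64,000 cores with 1MB of SRAM apiece implies roughly 64GB of on-chip
# SRAM (62.5 GiB in binary units), and a modest per-core flops budget:
sram_gib = 64000 * 1 / 1024          # 62.5 GiB
gflops_per_core = 100e3 / 64000      # 1.5625 GFLOPS per core

print(sram_gib, gflops_per_core)
```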
Which is where we come into the picture with the Parallella community effort.
Adapteva wants to raise cash to fund development as well as to seed a community of developers for its coprocessors that will in turn feed back inputs into the hardware designs. It isn't quite open source hardware, but it is about as close as you can get.
A Feast for Epiphany
The Kickstarter program that Adapteva has started, which you can participate in here, took a bit longer to get out the door than expected because Kickstarter changed its hardware project rules just before Adapteva was prepped to launch. But it is here, and you can kick in dough for the Epiphany cause – and if you kick in enough dough, you get gear to tool around with.
The first level of support is the Parallella supporter level, where you donate $15 or more and the funds go directly to development of future Epiphany chips. At the Parallella maker level, you fork over $99 or more and you will get a Parallella board – which pairs an Epiphany-III chip with a dual-core ARM Cortex-A9 processor – and a software stack to help you code up software for it.
At the Parallella pair level, which has a minimum pledge of $199, you get two of the Parallella boards so you can do compression/decompression, encoding/decoding, transmission/receiving applications without having to borrow someone else's board. The goal is to raise $750,000 in funding at the first level.
If you are eager to get your hands on a 64-core Epiphany-IV chip, pledge $199 or more at the 64-core level: provided Adapteva reaches its stretch goal of $3m in pledges, you will get a single board with the Epiphany-IV.
If Adapteva doesn't reach the $3m level, you'll get two Epiphany-III boards with their ARM host processors instead. Pony up $499 for the developer level and you will get a unique serial number on your Epiphany-III board plus early access to the Parallella SDK. And cough up $5,000 and you are at the early access level, which gets you the developer stack three months ahead of everyone else.
As El Reg goes to press, Adapteva's Kickstarter project has 1,593 backers and has raised $201,445. The company has to reach its $750,000 funding goal by October 27 at 6 PM Eastern. ®
Err... you may have missed the point.
1) They're suggesting that the power consumption is a barrier to wider adoption.
2) Also, what the Reg didn't cover was the other barrier to adoption: parallel computing suffers from a lack of skilled programmers. The first computing revolution was powered by self-taught hobbyist programmers on single-processor boards. The developers believe that this has created a generation of single-processor-centric programmers without the skills for parallel work. They want to create a hobbyist scene for parallel processing and foment a skills revolution in the parallel computing sphere, which will then (hopefully) allow genuine parallel processing to become part of mainstream computing, as opposed to the minimalist OS-managed parallelism of current-gen multicore processors.
Cynic's viewpoint: what we have is a bunch of clever blokes who developed a clever processor and found that the people who could use it don't want it, and those who might want it couldn't use it, so they're repositioning it as a hobbyist teaching toy.
Optimist's viewpoint: a bunch of clever blokes developed a clever processor that solves a clever problem, and finding that the market couldn't take advantage of it, they decided to try to develop the market by themselves.
It's the Transputer all over again.
Re: Intel & IBM & MS
Occam was "mathematically proveably correct" at the cost of being very static. This made it ok for fixed embedded tasks like radar processing, but very difficult for anything more general purpose, unless you built your own dynamic memory management on top of it, like our project did, or used the C compiler and libraries.
By the time they ironed out the H1 (or T9000, or whatever the next big thing after the T800 was), Intel x86 performance had left it in the dust, and has done until now, when the mainstream is running out of ways to improve it economically and is being forced back into considering these old ideas.
The article is right. What is holding back progress is ultimately software to take advantage of all that horsepower. Put enough cheap hardware into the hands of the hobbyist masses, and we should see some interesting things come out of it.