Here's the multi-core man coding robots, 3-D worlds and Wall Street
Kunle Olukotun gets pervasive
Radio Reg
"You have to do some really good work and become famous."
That's what Stanford President John Hennessy said would be required if then-associate professor Kunle Olukotun wanted to secure tenure. So, Olukotun went after that goal, doing ground-breaking work in the field of multi-core processors. His research helped form the basis of Afara Websystems' multi-core chip. Sun Microsystems then acquired Afara, and the rest is history. Olukotun got his tenure.
Or maybe the rest isn't history. After all, Afara simply kicked off a wave of multi-core processors. Sun led with the Afara-based Niagara line of processors, and now every major chip company has both "regular" multi-core chips and research underway into far more radical designs.
These processors present immense challenges to software developers most accustomed to writing code for single-core chips.
In Episode 18 of Semi-Coherent Computing, I sat down with Olukotun to talk about his life, his work and his vision for computing's future.
As head of Stanford's new Pervasive Parallelism Lab, Olukotun is looking to create software tools that will make it easier for programmers to embrace multi-core chips. He and other researchers will focus on building development environments for 3-D worlds, robots and massive server-side applications. With any luck, the Stanford work - funded by the likes of IBM, Intel, AMD, Nvidia, Sun and HP - will help coders tackle chips with 100 cores or more.
Anyway, have a listen. I give you one of the fathers of multi-core chips.
Sorry, as always, for taking so long between shows. I am once again renewing my vow to post shows with more frequency. Your brutal e-mails on this point are appreciated.
Thanks for your ears! ®
"dataflow seems to me more of a programming model generally than a genuinely scalable implementation technique. I guess it takes many forms, including functional programming, Linda (javaspaces?), etc."
It is a genuinely scalable implementation. Check out this dataflow success story, published in the JDJ: http://java.sys-con.com/node/523054.
dataflow seems to me more of a programming model generally than a genuinely scalable implementation technique. I guess it takes many forms, including functional programming, Linda (javaspaces?), etc.
None yet scales well at all levels, from the instruction level up to the MPP level. I think they should be treated as implementations where they fit, but as programming models where they don't (and dealt with using the right tools, which AFAIK don't exist outside the lab).
In other words dataflow's not a magic wand, sadly.
BTW one totally weird approach that is almost never mentioned is that of Path Pascal.
It's a limited trick, but so odd and novel I'd like to note it here. (Caveat: I've never used it or anything like it; it's just so inventive it bears flagging up.) Google it.
one approach is dataflow
In an article in last month's SD Times, the challenge facing the technology industry was succinctly laid out:
“I wake up almost every day shocked that the hardware industry has bet its future that we will finally solve one of the hardest problems computer science has ever faced, which is figuring out how to make it easy to write parallel programs that run correctly,” said David Patterson, professor of computer sciences at the University of California at Berkeley.
We have seen that there are a number of approaches to trying to deal with this challenge. In our case, we have reached back to use an application architecture first implemented decades ago -- DataRush is a Java implementation of the dataflow approach.
"The technical problems were all solved long ago with the invention of dataflow programming. What remains is to educate programmers and to bring dataflow ideas into mainstream languages."
Peter Van Roy - co-author "Concepts, Techniques, and Models of Computer Programming."
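The pipeline-of-operators style these comments describe can be sketched in plain Java: each operator is an independent thread that fires whenever data arrives on its input queue, so a runtime is free to place operators on separate cores. This is a minimal, hypothetical illustration of the dataflow idea using `BlockingQueue`, not DataRush's actual API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DataflowSketch {
    static final int EOS = Integer.MIN_VALUE;  // end-of-stream marker

    public static void main(String[] args) throws InterruptedException {
        // Edges of the dataflow graph: bounded queues between operators.
        BlockingQueue<Integer> aToB = new ArrayBlockingQueue<>(16);
        BlockingQueue<Integer> bToC = new ArrayBlockingQueue<>(16);

        // Source operator: emits 1..5, then end-of-stream.
        Thread source = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) aToB.put(i);
                aToB.put(EOS);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Transform operator: squares each value as it arrives.
        Thread square = new Thread(() -> {
            try {
                for (int v; (v = aToB.take()) != EOS; ) bToC.put(v * v);
                bToC.put(EOS);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        source.start();
        square.start();

        // Sink operator: sums the squares as they flow in.
        int sum = 0;
        for (int v; (v = bToC.take()) != EOS; ) sum += v;
        source.join();
        square.join();
        System.out.println("sum of squares = " + sum);  // 1+4+9+16+25 = 55
    }
}
```

The point of the exercise: the programmer describes what each operator does and how data flows between them, and the scheduling onto cores falls out of the structure rather than being coded by hand.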
Everything returning to "the server" will never happen again.
The idea that "the Cloud" is a resurrection of the timesharing terminal server of yesteryear keeps cropping up, as it did at the end of this interview. Neither the interviewer nor the interviewee mentioned the obvious objection: communication, especially over the Internet, is neither reliable enough nor high-bandwidth enough to make that idea even remotely feasible.
Across most of the developed world, broadband is exceedingly unreliable, with service interruptions of multiple hours still a regular occurrence. I can't imagine having my software development environment 100% beholden to the phone company or the cable company for access. That duopoly made in hell already causes me enough trouble, without them being able to totally disable every function of my desktop PC.
Add to that the fact that so-called "broadband" connections are an asymmetrical, throttled pittance of an excuse for a high speed link everywhere except Sweden, and it becomes even less likely that we'll ever cede control of our own destinies back to some central computing provider.
Communications would have to be radically different before the idea is even worth mentioning again, and I question whether we'd give up what we have today even if the situation did dramatically improve. In the end, latency gets you every time. No matter how far we reduce packet-switching times, the speed of light brooks no arguments. Internet packets take two orders of magnitude more time to make their round trip than interactions with the resources in your desktop. That's a heavy penalty, no matter how you slice it. At the most basic level, an Internet round trip takes seven to eight orders of magnitude longer than a local processor operation: single-digit nanoseconds vs. 100+ milliseconds. We'll never give that up.
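The orders-of-magnitude gap in that argument is easy to check with a few lines of arithmetic. The figures here are illustrative assumptions (a ~3 ns processor operation, a ~100 ms Internet round trip), and the gap works out to roughly seven to eight orders of magnitude:

```java
public class LatencyGap {
    public static void main(String[] args) {
        // Illustrative assumptions, not measurements:
        double cpuOpSeconds = 3e-9;        // ~3 ns per local processor operation
        double internetRttSeconds = 0.1;   // ~100 ms Internet round trip
        double ratio = internetRttSeconds / cpuOpSeconds;
        System.out.printf("ratio = %.0f (about %d orders of magnitude)%n",
                ratio, Math.round(Math.log10(ratio)));
    }
}
```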
Dr. Olukotun is working on a problem that will apply both in data centers and on our here-to-stay desktops. When he and the rest of the folks at Stanford put "Pervasive" in the name of the lab, they knew what they were doing. It's easy to imagine computing becoming even more diffuse and widespread than it is now. Not only will we keep our desktops, but all kinds of new processors will start cropping up. Autonomous vehicles are only the beginning. There are enough people on the planet demanding resources that the old dumb mechanical systems of yesteryear are going to go through a forced upgrade in order to squeeze out inefficiencies. Those systems aren't too bad already, so the only slack remaining will be in making them more adaptive to the humans that interact with them. Adaptive means electromechanical, with sensors and processors.
Talk about pervasive.
I look forward to Dr. Olukotun's software making my job easier.