Nvidia, Continuum team up to sling Python at GPU coprocessors

Teaching snakes to speak CUDA with forked tongue, but not forked code

GTC 2013 The Tesla GPU coprocessor and its peers inside your Nvidia graphics cards will soon speak with a forked tongue. Continuum Analytics has been working with the GPU-maker to create the NumbaPro Python-to-GPU compiler.

We all call it the LAMP stack, but it should really be called LAMPPP or LAMP3 or some such because it is Linux, Apache, MySQL, Perl, PHP, and Python. And as such, given the popularity of Python, the ability to offload sorting and calculation work from CPUs to GPU coprocessors is a big deal. (If I were going to learn one programming language today, it would be Python because of its utility as both a scripting language and a nuts-and-bolts language for creating real applications. And when I find more time, I will learn it.)

For those of you who don't know the history of the language, back in December 1989, coder Guido van Rossum of the Netherlands was bored over the Christmas holidays, so he hacked together a descendant of the ABC scripting language to run on Unix machines. He called it Python in honour of the much-loved comedy troupe's show, Monty Python's Flying Circus.

Python has been controlled by various organizations throughout its history, but Van Rossum, fondly known as the Benevolent Dictator For Life, or BDFL, was the project's spiritual and technical leader, handing formal stewardship to the Python Software Foundation when he created it in 2001. At that time, Van Rossum and his cohort at PythonLabs were finishing up Python 2.0 and were also getting jobs in the commercial software field.

A decade ago, the Python Software Foundation estimated that there were somewhere between 170,000 and 200,000 Python programmers in the world, about half of them in Europe. Sumit Gupta, general manager of the Tesla Accelerated Computing business unit at Nvidia, tells El Reg that the company's best estimates peg the global number of Python programmers at a whopping 3.5 million.

According to CodeEval.com, code samples show Python to be more popular than Java

Nvidia asked CodeEval.com, which does programming projects and contests, for some sense of what hackers prefer, and the chart above shows what programming languages were in use across more than 100,000 code samples. As you can see, Python came out ahead of Java, which has nearly three times the programmers (supposedly). The conventional wisdom is that there are around 10 million Java programmers in the world.

Nvidia did not do the Python integration with its CUDA programming environment for its Tesla GPU coprocessors and various video cards. But it helped in a way when it ditched its own C and C++ compilers for its GPUs and moved to the Low Level Virtual Machine (LLVM) toolchain back in December 2011.

The new C and C++ LLVM compilers were added to the CUDA 4.1 development kit, and gave about a 10 per cent performance boost over Nvidia's own compilers, which Nvidia has kept under closed-source wraps except for some restricted academic licensees.

One of the purposes of putting the LLVM toolchain at the heart of the CUDA environment and tossing out its own Parallel Thread Execution, or PTX, toolchain was to get more languages supporting processing directly on GPUs. The Portland Group (PGI) Fortran compilers, which were originally built with the PTX toolchain when they came out in 2009, have been shuffled over to LLVM, and now Continuum has done the work to make its Python stack hook into LLVM and speak proper GPU.

The NumbaPro tool is part of Continuum's Accelerate add-on for its commercial-grade Anaconda Python distribution. The Anaconda tool itself is completely free and runs on 32-bit and 64-bit Linux and Windows, as well as on 64-bit Mac OS on Intel-based Apple gear.

The Python 2.6, 2.7, and 3.3 engines are all supported in Anaconda. Accelerate costs $129, and a separate feature called IOPro - which is a fast interface into databases, NoSQL data stores, and Amazon S3 files - costs $79. Accelerate doesn't just work on GPUs, but is also used to make multicore/multithreaded x86 processors do a better job ripping through Python routines.
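
To give a flavour of what GPU programming in Python looks like, here is a minimal sketch of a CUDA kernel written entirely in Python. It is not Continuum's or Nvidia's shipped sample: it uses the open-source Numba CUDA API (the modern descendant of NumbaPro), the function and variable names are illustrative, and it assumes a machine with a CUDA-capable Nvidia GPU and driver.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(scale, x, y, out):
        # Each GPU thread handles one element of the arrays.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = scale * x[i] + y[i]

    n = 1000000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    # Stage the inputs in GPU memory, launch the kernel, copy the result back.
    d_x = cuda.to_device(x)
    d_y = cuda.to_device(y)
    d_out = cuda.device_array_like(x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    saxpy[blocks, threads_per_block](np.float32(2.0), d_x, d_y, d_out)

    print(d_out.copy_to_host()[:5])

The kernel body is ordinary Python arithmetic; the compiler turns it into native GPU code by way of the LLVM plumbing described above.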

One of the things that native Python support for GPUs will allow is for companies to throw hardware at their software problems. Vijay Pande, a professor of chemistry at Stanford University, was cited by Nvidia in its announcement of Python support for the CUDA environment as saying that coders in his chem labs prototype applications in Python and then recode them in C or C++ to get a performance speed-up.

Now they can just say the hell with it and leave it in Python, which they say is easier to maintain than C or C++. As long as the money you spend on GPUs is less than the money you spend on recoding and the performance is better, this sounds like a win.
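
As a hedged illustration of that trade-off (the maths and names below are invented for the example, not taken from Pande's labs), the typical change is a single decorator: the NumPy prototype stays in Python, and the same scalar expression is compiled into a GPU ufunc, again shown with the open-source Numba package that descends from NumbaPro.

    import math
    import numpy as np
    from numba import vectorize

    def gaussian_prototype(x):
        # The plain NumPy version a researcher might write first.
        return np.exp(-x * x / 2.0)

    # Same scalar maths, compiled into a GPU ufunc; swap target='cuda' for
    # target='parallel' to spread the work across multicore x86 instead.
    @vectorize(['float32(float32)'], target='cuda')
    def gaussian_accelerated(x):
        return math.exp(-x * x / 2.0)

    samples = np.linspace(-4.0, 4.0, 10000000, dtype=np.float32)
    print(gaussian_prototype(samples)[:3])
    print(gaussian_accelerated(samples)[:3])

Whether the accelerated version actually wins depends on how much arithmetic is done per byte shuffled to the card, which is exactly the hardware-versus-recoding sum described above.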

Gupta is not making any promises about the next programming language to be supported in CUDA, and frankly, he won't know anyway. "Once we moved to LLVM, it is pretty easy for programming tool makers to go out and do it on their own," he said.

The R stats language is probably next, however, and Nvidia has caught wind of projects at Stanford and the University of Michigan in the United States that are working on exactly this. ®
