GPUs slick up with oil sleuths
Mind-boggling data streams
I stopped by the Oil & Gas track at the 2010 GPU Tech conference this morning and learned quite a bit about the key drivers on the exploration side of the industry. I already knew the key drivers on the distribution side of the business - potato chips, watery fountain drinks and herbal energy pills - but that was presumably covered in a different break-out session. In this session, the speaker, from the exploration arm of oil giant Schlumberger, did a great job of laying out the big picture and relating it to their computing challenges.
It breaks down like this: we're hungry for oil and they need to find more of it. The costs of finding and extracting black gold have escalated as the easy stuff lying around near the surface has already been found. While there is lots of oil out there, it's either still hiding or is buried beneath deep oceans or under piles of rock. Finding it and pulling it out is where computers come in handy.
I've heard the term seismic processing for years and understand the concept - it's where you send waves into rocks and measure how long it takes them to bounce off the rock layers and hit receivers located somewhere else. Do this enough and you'll build up a good picture of what's located in the various strata beneath the surface. The more waves you send, and the higher their frequency, the better the picture. But this tends to send the amount of data you end up processing through the roof.
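The arithmetic behind that bounce-and-listen picture is simple enough to sketch. In this toy example (the velocity and travel time are invented for illustration, not taken from the talk), the round-trip travel time recorded at a receiver, together with an assumed wave velocity, gives the depth of a flat reflecting layer:

```python
# Toy version of the reflection idea described above: a wave goes
# down to a rock layer, bounces, and comes back up.

def reflector_depth(two_way_time_s, velocity_m_per_s):
    """Depth of a flat reflector from round-trip travel time."""
    # The wave covers the distance twice (down and back up),
    # hence the division by two.
    return velocity_m_per_s * two_way_time_s / 2

# A 2-second round trip through rock at ~3,000 m/s puts the
# reflector about 3 km down.
print(reflector_depth(2.0, 3000.0))  # 3000.0
```

Real surveys deal with dipping layers, varying velocities and millions of source-receiver pairs, which is exactly why the data volumes below balloon the way they do.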
For example, the average ship running seismic gear has between 20,000 and 25,000 sensors on board, and you typically use several ships in concert to survey an area. This will yield anywhere from 50 to 200TB of data per run and take five to seven days of solid processing on a large number of systems to get results. If you ramp up the resolution, it can take 15,000-20,000 compute nodes running days or weeks to complete the job.
The competitive advantage for the surveying company comes from delivering high-quality results quicker than the next guy. Computing power is critical in winning that race. These oil and gas guys are brand-agnostic in the extreme - they buy whatever yields the best price/performance (with an emphasis on performance) at any given time. Sometimes that means Intel, sometimes it means AMD - but right now, it means GPUs. Lots of GPUs, in fact.
Between June and October 2009, they almost doubled their overall capacity by adding GPU compute capacity and since then have doubled it again. They've seen about a six-fold reduction in overall cost and a five-fold increase in performance on their algorithms. According to the speaker, they didn't have much problem porting their code or performance tuning it to run under CUDA. Their analytical tools are a fairly limited set and all are embarrassingly parallel, making them a near perfect fit for the GPU computing model.
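"Embarrassingly parallel" here means each seismic trace can be processed with no reference to any other - no communication, no shared state - which is why the port to CUDA went smoothly. The talk didn't describe the actual algorithms, so here is a hypothetical CPU-side sketch of the pattern (the per-trace normalisation function and the trace data are made up for illustration; on a GPU, each trace or sample would map to its own thread):

```python
from multiprocessing import Pool

# Hypothetical per-trace operation: scale a trace by its peak
# amplitude. Any function with no cross-trace dependencies fits
# the embarrassingly parallel pattern described in the talk.
def normalise(trace):
    peak = max(abs(s) for s in trace)
    return [s / peak for s in trace] if peak else trace

traces = [
    [0.0, 2.0, -4.0, 1.0],
    [1.0, -0.5, 0.25, 0.0],
]

if __name__ == "__main__":
    with Pool() as pool:
        # Each trace goes to a worker independently, so throughput
        # scales almost linearly with the number of processors.
        results = pool.map(normalise, traces)
    print(results[0])  # [0.0, 0.5, -1.0, 0.25]
```

Swap the worker pool for thousands of GPU cores and the same independence is what delivers the speed-ups the speaker reported.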
Getting such an unqualified endorsement for GPU computing isn't surprising at the GPU Technology Conference, right? But it's a more compelling story when it comes from real-world practitioners, rather than marketing slide monkeys or coin-operated sales people. ®