GPUs slick up with oil sleuths

Mind-boggling data streams

I stopped by the Oil & Gas track at the 2010 GPU Tech conference this morning and learned quite a bit about the key drivers on the exploration side of the industry. I already knew the key drivers on the distribution side of the business - potato chips, watery fountain drinks and herbal energy pills - but that was presumably covered in a different break-out session. In this session, the speaker, from the exploration arm of oilfield services giant Schlumberger, did a great job of laying out the big picture and relating it to the company's computing challenges.

It breaks down like this: we're hungry for oil and they need to find more of it. The cost of finding and extracting black gold has escalated as the easy stuff lying near the surface has already been found. While there is plenty of oil out there, it's either still hiding or buried beneath deep oceans or piles of rock. Finding it and pulling it out is where computers come in handy.

I've heard the term seismic processing for years and understand the concept: you send sound waves into the ground and measure how long they take to bounce off subsurface rock layers and reach receivers located somewhere else. Do this enough and you build up a good picture of what lies in the various strata beneath the surface. The more waves you send, and the higher their frequency, the better the picture - but this tends to send the amount of data you end up processing through the roof.
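The bounce-and-time idea above boils down to very simple geometry in the toy case. Here's a minimal sketch - assuming a single flat reflector and a constant wave velocity, which real rock emphatically is not (the velocity figure is an illustrative assumption, not anything from the talk):

```python
def reflector_depth(two_way_time_s: float, velocity_m_s: float = 2500.0) -> float:
    """Depth of a flat reflector from the two-way travel time of a pulse.

    The pulse travels down and back up, so the one-way distance is
    velocity * time / 2. Real seismic processing inverts millions of
    such traces against a 3D velocity model instead.
    """
    return velocity_m_s * two_way_time_s / 2.0

# A 2-second echo at a plausible sediment velocity of ~2500 m/s
# puts the reflector about 2.5 km down.
print(reflector_depth(2.0))  # 2500.0
```

The hard part - and the reason for all that compute - is that velocity varies with depth and rock type, so the picture has to be solved for iteratively rather than read off a formula like this.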

For example, the average ship running seismic gear has between 20,000 and 25,000 sensors on board, and you typically use several ships in concert to survey an area. This will yield anywhere from 50 to 200TB of data per run and take five to seven days of solid processing on a large number of systems to get results. If you ramp up the resolution, it can take 15,000-20,000 compute nodes running days or weeks to complete the job.
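A quick back-of-envelope check shows how those sensor counts turn into the terabyte figures quoted. The sampling rate and sample size below are hypothetical round numbers of my own, not Schlumberger's actual parameters:

```python
# Rough sanity check of the survey data volumes described above.
sensors = 22_500          # mid-range of the 20,000-25,000 quoted per ship
sample_rate_hz = 500      # assumed seismic sampling rate
bytes_per_sample = 4      # assumed 32-bit float amplitude
seconds_per_day = 86_400

bytes_per_day = sensors * sample_rate_hz * bytes_per_sample * seconds_per_day
print(bytes_per_day / 1e12)  # ~3.9 TB/day per ship
```

Multiply a few TB per ship per day by several ships and a multi-day survey and you land comfortably in the 50-200TB-per-run range the speaker cited.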

The competitive advantage for the surveying company comes from delivering high-quality results quicker than the next guy, and computing power is critical to winning that race. These oil and gas guys are brand-agnostic to the extreme - they buy whatever yields the best price/performance (with an emphasis on performance) at any given time. Sometimes that means Intel, sometimes AMD - but right now, it means GPUs. Lots of GPUs, in fact.

Between June and October 2009, they almost doubled their overall capacity by adding GPU compute, and since then have doubled it again. They've seen roughly a six-fold reduction in overall cost and a five-fold increase in performance on their algorithms. According to the speaker, they didn't have much trouble porting their code to CUDA or tuning its performance. Their analytical tools are a fairly limited set, and all are embarrassingly parallel, making them a near-perfect fit for the GPU computing model.
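"Embarrassingly parallel" here means each seismic trace can be processed with no reference to its neighbours, so the work maps cleanly onto thousands of GPU threads. A CPU-side sketch of the pattern, with a stand-in workload of my own invention (on a GPU, each element of the map would become one CUDA thread):

```python
def process_trace(trace):
    # Stand-in for a real kernel (filtering, migration, stacking...):
    # here we just normalise the trace to its peak amplitude.
    peak = max(abs(s) for s in trace) or 1.0
    return [s / peak for s in trace]

traces = [[0.0, 2.0, -4.0], [1.0, -1.0, 0.5]]
results = list(map(process_trace, traces))  # no cross-trace dependencies
print(results[0])  # [0.0, 0.5, -1.0]
```

Because no trace depends on another, the `map` parallelises trivially - which is exactly why porting such tools to CUDA involved so little pain.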

Getting such an unqualified endorsement for GPU computing isn't surprising at the GPU Technology Conference, right? But it's a more compelling story when it comes from real-world practitioners rather than marketing slide monkeys or coin-operated sales people. ®
