
Real-time drone videos get GPU-tastic

Swimming in sensors, drowning in data

HPC blog Here at the GPU Technology Conference (GTC 2012), you see a lot of things that you didn’t think were quite possible yet. Case in point: cleaning up surveillance video.

The standard scene in “24” or any spy thriller is of agents poring over some grainy, choppy, barely-lit video that’s so bad you can’t tell whether it’s four humans negotiating an arms deal or two bears having an animated conversation about football. In the Hollywood version, the techno geek says, “Let me work on this a little bit,” and suddenly things clear up to the degree that not only can you see the faces clearly, you can tell when the guys last shaved.

Cleaning up and enhancing video is a tall order, compute-wise – and doing it in real time? Hella hard. But I just saw a demo of exactly that in a GTC12 session run by MotionDSP. Their specialty is processing video streams from mobile platforms (think drones and airplanes) on the fly. We’re talking full-motion video – 30 frames per second – enhanced, cleaned up, and made highly analysable in real time.

The amount of processing they’re doing is incredible. Lighting is corrected, edges are sharpened, jitter is removed, and the on-screen metadata (time, location, speed, etc) is masked. Again – all in real time.
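MotionDSP hasn’t published its code, of course, but each of those operations is standard image-processing fare. For the curious, here’s a minimal per-frame sketch in Python/OpenCV – my illustration, not their pipeline – covering the lighting, edge, and masking steps (stabilisation is a much bigger job and omitted here). The input file name and the masked overlay region are made up:

```python
# Hypothetical per-frame cleanup sketch -- an illustration of the steps
# described above, NOT MotionDSP's actual pipeline. Requires OpenCV
# (pip install opencv-python) and a local file "drone.mp4" (made up).
import cv2
import numpy as np

# Contrast-limited adaptive histogram equalisation for the lighting step.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def enhance(frame: np.ndarray) -> np.ndarray:
    # Lighting: equalise the luminance channel only, leaving colour alone.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    frame = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Edges: unsharp mask -- boost the difference from a blurred copy.
    blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
    frame = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

    # Metadata: black out a (made-up) on-screen overlay region, top-left.
    frame[0:40, 0:300] = 0
    return frame

cap = cv2.VideoCapture("drone.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("enhanced", enhance(frame))
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```

A CPU loop like this won’t keep up with 30fps HD; the point of MotionDSP’s GPU work is running far heavier versions of these operations, plus stabilisation, at full frame rate.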

The effect is profound. In the demo, what was once just a vague gray ship (which seemed to be vibrating like a can in a paint-shaker) was clarified so that you could easily see what kind of ship it was and also see two suspicious figures milling around on deck. To me, it looked like there were enough pixels to enhance the video even further – to the point where we could identify the figures.

As the folks from MotionDSP explained, processing at this speed simply isn’t possible without using GPUs. Cleaning up a single stream of video to that degree takes 160 gigaflops of processing power. A single GPU card (a Fermi, presumably) can handle two simultaneous HD streams or four to six standard-definition streams.
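Those figures pass a quick smell test. Here’s a back-of-the-envelope check – my arithmetic, not MotionDSP’s, and it assumes “HD” means 1080p and “standard definition” means NTSC-ish 720×480:

```python
# Rough sanity check of the quoted figures (my assumptions, not
# MotionDSP's): "HD" taken as 1080p, "SD" as NTSC-ish 720x480.
STREAM_GFLOPS = 160        # quoted cost to clean one HD stream
HD_PIXELS = 1920 * 1080
SD_PIXELS = 720 * 480
FPS = 30

per_pixel = STREAM_GFLOPS * 1e9 / (HD_PIXELS * FPS)
print(f"~{per_pixel:,.0f} flops per HD pixel")     # ~2,572

two_hd = 2 * STREAM_GFLOPS
print(f"{two_hd} gigaflops for two HD streams")    # 320

# A Fermi-generation card peaks at roughly 1,000-1,500 single-precision
# gigaflops, so sustaining 320 gigaflops of useful work is believable.
# Note that four to six SD streams is well short of what pure
# pixel-count scaling would allow (SD has ~1/6 the pixels of 1080p),
# which hints at per-stream overhead beyond raw arithmetic.
```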

Not surprisingly, MotionDSP's biggest customers are various branches of the US government (Air Force, Naval Special Warfare Group, and lots of other secret acronym agencies). In fact, the “swimming in sensors, drowning in data” quote is from a general (I think) talking about their struggle to take advantage of the masses of data provided by their sensor platforms.

Check out the demo videos on the MotionDSP website; it’s interesting stuff for sure. While the early applications are typically military surveillance, how far off is the day when we’ll see this technology used to make other kinds of video clearer?

I’m thinking about the typical YouTube video shot from a helmet cam worn by some kid on a bike at the top of a huge mountain. What’s always detracted from my viewing experience is the way the video gets so shaky and distorted after he loses his balance and starts to tumble down the mountainside. Sure, the first hit is clear, and maybe the first loop, but once he picks up speed, there’s just too much distortion. Hopefully, MotionDSP will release an edition at a price scaled to the amateur stunt man. ®
