
Get ready for software-defined RADAR: Jam, eavesdrop, talk and target ... simultaneously

Multi-GPU wizardry from General Electric

HPC blog

With a big RF transmitter and enough fast computing power, you can do a great many different things, as a General Electric presentation on "software-defined radar" at this year's GPU Technology Conference (GTC) made clear.

At GTC 13 last year, GE gave a standing-room-only presentation on how it uses RDMA (Remote Direct Memory Access) to drive multi-GPU performance to new heights. The firm was back this year to talk about the applications of GPU tech it has cooked up in the meantime.

In his session, Dustin Franklin, a GPU applications engineer at GE, gave us an update on the firm's RDMA work and how it allows the electric company to build large-scale, multi-node products.
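
The mechanism deserves a quick sketch. Below is a minimal, hypothetical CUDA example (not GE's code) of peer-to-peer device memory access, a cousin of the GPUDirect RDMA plumbing that lets NICs and sensor front-ends write straight into GPU memory without a round trip through host RAM:

```cuda
// Minimal sketch of CUDA peer-to-peer memory access. GPUDirect RDMA
// extends the same idea to third-party devices (NICs, FPGAs); this
// two-GPU version is illustrative only, not GE's implementation.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { printf("This sketch needs two GPUs\n"); return 0; }

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("No P2P path between GPU 0 and 1\n"); return 0; }

    // Allow GPU 0 to address GPU 1's memory directly over the bus.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    const size_t bytes = 1 << 20;
    float *buf0 = nullptr, *buf1 = nullptr;
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Device-to-device copy that never touches host memory -- the kind
    // of transfer that keeps large multi-GPU pipelines fed.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();
    printf("Moved 1 MiB from GPU 0 to GPU 1 without touching the host\n");

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```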

What's really interesting is the range of products this now makes possible. Take software-defined radar: pair that big RF transmitter with enough fast computing power and the same kit can be turned to a surprising number of jobs.

For example, the same radar dome can be used for MTI (Moving Target Indication), SAR (Synthetic Aperture Radar), radar jamming, and even as a communications channel. Using GPUs to shape the outgoing waveform and interpret the returning waves, GE has found that it's possible to perform all of these functions simultaneously, if necessary.
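
To make the idea concrete, here's a hedged CUDA sketch of the software-defined approach (illustrative only; the sample rates and chirp parameters are made up, and GE's pipeline is far more sophisticated). The transmit waveform is just an array a GPU kernel fills in, and detection is a matched filter run over the digitised returns, so swapping the waveform or the kernel swaps the radar's "mode" entirely in software:

```cuda
// Illustrative software-defined radar kernel pair (not GE's code):
// generate a transmit pulse, then matched-filter the received samples.
#include <cuda_runtime.h>

// Fill the transmit buffer with a linear FM chirp: f0 start frequency,
// k chirp rate (Hz/s), fs sample rate. Changing this kernel (or just
// its parameters) changes the radar mode -- no new hardware required.
__global__ void make_chirp(float2* tx, int n, float f0, float k, float fs) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float t = i / fs;
    float phase = 6.2831853f * (f0 * t + 0.5f * k * t * t);
    tx[i] = make_float2(cosf(phase), sinf(phase));
}

// Correlate the returns against the transmitted pulse; peaks in the
// output magnitude mark echoes (range bins) for the detector stage.
__global__ void matched_filter(const float2* rx, const float2* tx,
                               float* out, int nrx, int ntx) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > nrx - ntx) return;
    float re = 0.0f, im = 0.0f;
    for (int j = 0; j < ntx; ++j) {          // sum conj(tx[j]) * rx[i+j]
        float2 a = rx[i + j], b = tx[j];
        re += a.x * b.x + a.y * b.y;
        im += a.y * b.x - a.x * b.y;
    }
    out[i] = sqrtf(re * re + im * im);
}

int main() {
    const int ntx = 1024, nrx = 8192;
    const float fs = 100e6f;                 // 100 MS/s (made-up figure)
    const float k = 25e6f / (ntx / fs);      // sweep 25 MHz over the pulse

    float2 *tx, *rx;
    float *out;
    cudaMalloc(&tx, ntx * sizeof(float2));
    cudaMalloc(&rx, nrx * sizeof(float2));
    cudaMalloc(&out, (nrx - ntx + 1) * sizeof(float));
    cudaMemset(rx, 0, nrx * sizeof(float2)); // stand-in for real ADC data

    make_chirp<<<(ntx + 255) / 256, 256>>>(tx, ntx, 0.0f, k, fs);
    matched_filter<<<(nrx - ntx + 256) / 256, 256>>>(rx, tx, out, nrx, ntx);
    cudaDeviceSynchronize();
    return 0;
}
```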

How it works: Simultaneous transmit/receive for a whole load of functions

In the past, each of these functions would have required dedicated DSPs (digital signal processors) or FPGAs, each developed at a cost of hundreds or thousands of man-hours. With software-defined radar, the same hardware is used in multiple ways and can be quickly reconfigured to handle new tasks and requirements.

Franklin also talked about how GE will use the new Tegra K1 system-on-chip to make compact, ruggedized products for the battlefield and beyond. He speculated about how having 325 GFlop/s of performance in a sensor (delivered by 192 CUDA cores plus four ARM cores) could change the game when it comes to how we use sensors.
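
(If you're wondering where that number comes from: each of the K1's Kepler-class CUDA cores can retire a fused multiply-add, ie two floating-point operations, per clock, so 192 cores × 2 FLOPs × a GPU clock in the region of 850MHz works out to roughly 326 GFlop/s.)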

Insert chip here: Some of GE's suggested applications for the Tegra K1 battlefield GPU

Typically, sensors gather data and send it somewhere else for processing and interpretation, with the results then passed along to interested parties. But the Tegra K1 packs a lot of processing punch into a 5-7 watt package, so why not have the sensor handle the processing and interpretation itself, then deliver the output to the folks who need it? This could significantly reduce the time it takes to get vital data into the right hands.

Towards the end of the session, we learned that GE has the world's first Tegra K1 demonstration box in its booth on the show floor. It's a combination streaming-video/LIDAR device that displays and tracks targets (show-floor traffic, in this case). I'll have some video of it in the next few days.

To finish up, Franklin discussed GE's experience with Kepler vs Maxwell GPUs, including the architectural and performance differences. He showed some of the firm's internal benchmark results and talked about the implications.

Take a look at the 20-minute presentation to get a glimpse of the cutting edge of defence technology. You'll want to watch if you're a defence minister keen to see what's coming down the road, or perhaps the head of a junta that might be coming up against Western military forces in the near future. ®
