
PAH! Four decades of Star Wars: No lightsabers, no palm-sized video calls

Sort of. Leia's a New Hope

Star Wars New Hope @ 40 When Lucasfilm recently unveiled its tribute reel to the late Carrie Fisher, one of the most memorable monologues in cinema sat right at its centre.

“General Kenobi. Years ago, you served my father in the Clone Wars... Now he begs you to help him in his struggle against the Empire...”

Reading those words, we can see the princess, tiny and shrouded in projected light.

Princess Leia hologram

At the time of the US release of Star Wars Episode IV: A New Hope forty years ago, on May 25, that hologram princess seemed the purest movie magic, brought to life through the wizardry of special effects. In the decades since, Moore's Law has ground away: forty years means our transistors are somewhere around a million times smaller, cheaper and faster than when Lucas was filming in the mid-1970s, and a hologram that seemed so fanciful then has now come within reach.
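The arithmetic is easy to check – a minimal sketch, assuming the classic Moore's Law cadence of one doubling every two years (an assumption for illustration, not a figure from Lucasfilm):

```python
# Sanity-checking the "million times" claim, assuming one doubling
# every two years - the classic Moore's Law cadence.
years = 40
doubling_period_years = 2
doublings = years / doubling_period_years          # 20 doublings
improvement = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{improvement:,.0f}x improvement")
# 20 doublings -> ~1,048,576x improvement: "around a million times"
```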

In that spirit, let’s put ourselves into Princess Leia’s position – needing to send a message via hologram. Where do we start?

Capturing the essence

A bit more than 20 years ago, a paper presented at the annual SIGGRAPH conference on computer graphics – where graphics boffins share their research – described a new technique to create “realistic three-dimensional models from a series of two-dimensional photographs.”

The basic idea was simple: take enough snaps from enough different angles, then leave it to the computer to “knit” all of the photos into a coherent multi-angle representation. After a fierce calculation, software would extrude 3D shapes from the photos. Take more photos, get a more accurate shape – at the cost of more computer time.
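To give a flavour of that fierce calculation, here's a minimal two-view sketch in Python with OpenCV – the smallest unit of photogrammetry. The camera matrix K and the image filenames are assumptions for illustration; real pipelines chain this across hundreds of photos and refine everything with bundle adjustment.

```python
import cv2
import numpy as np

# Assumed inputs: two overlapping photos and a known camera matrix K.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Find matching feature points across the two photos.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Recover the relative camera pose from the matched points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate: each matched pair of 2D points becomes one 3D point.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T        # homogeneous -> Nx3 point cloud
print(f"Triangulated {len(cloud)} points from one pair of photos")
```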

Once difficult and expensive, this technique, known as photogrammetry, has become as cheap as chips – or at least as cheap as an HTC headset and a PC. Free software packages can now knit photographs together and whip out a 3D model for viewing on screen or in a VR system.

Earlier this month, Valve transformed its Steam VR environment into an open-to-all-comers photogrammetry platform, allowing all Steam VR users to explore these exquisitely detailed moments in time. HTC's Vive delivers a resolution of 1080x1200 per eye, a 90Hz refresh rate and a 110-degree field of view. These worlds are beautiful – but static. Statues in a museum.

Our princess, vitally alive, needs something that will capture both image and essence, a dynamic reflection of her plea. We need a medium of motion, something with a timeline.

It still takes seconds to transform a single captured moment into a photogrammetric model. Going to thirty frames a second represents a problem of an entirely new order. Moore's Law has been good to us – but is it that good?
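Back of the envelope, the gap looks like this – the five seconds per frame is a charitable assumption for illustration:

```python
# How big is the gap between per-frame reconstruction and real time?
reconstruction_seconds_per_frame = 5.0     # assumed processing time
fps = 30
budget_seconds = 1.0 / fps                 # ~33 ms available per frame
shortfall = reconstruction_seconds_per_frame / budget_seconds
print(f"Real-time budget: {budget_seconds * 1000:.0f} ms/frame")
print(f"Shortfall: {shortfall:.0f}x too slow")
# Real-time budget: 33 ms/frame; shortfall: 150x too slow -
# a couple of orders of magnitude, before adding more cameras.
```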

Early last year I learned it was.

At the Los Angeles offices of a startup called 8i, I watched a crusty old boxing coach as he went through the basics of stance, posture, and balance. This wasn't video. He was right there, in front of me, tangibly filling space, and seeming very nearly real. Nothing uncanny about it, no tinge of the creepiness of a digital golem given unnatural life by an animator. This is videogrammetry. My first "hologram" possessed the same verisimilitude as a photograph or video – with volume and depth.

We need to be very careful of a confusion of terms. Holography is a three-dimensional imaging technique using coherent light beams (generated by lasers) that recreates an image from an interference pattern. These “holograms” created via videogrammetry are nothing of the sort. The only thing “holograms” have in common with holography is that both generate a three-dimensional representation.

But the princess isn’t picky. These faux holograms are good enough – and they’re available today.

To make one of these videograms you start with a lot of cameras. A typical videogrammetry rig has upwards of forty high-definition cameras, each trained on the subject from a different angle, all of them cranking out images thirty times a second and generating terabytes of data every hour.
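The back-of-the-envelope numbers for such a rig – assuming forty uncompressed 1080p streams at 24 bits per pixel – explain the terabytes:

```python
# Raw data rate for an assumed rig: forty 1920x1080 cameras,
# 24-bit RGB, uncompressed, thirty frames a second.
cameras = 40
width, height, bytes_per_pixel = 1920, 1080, 3
fps = 30

per_camera = width * height * bytes_per_pixel * fps    # bytes per second
rig_rate = per_camera * cameras
per_hour = rig_rate * 3600

print(f"Per camera: {per_camera / 1e6:.0f} MB/s")
print(f"Whole rig:  {rig_rate / 1e9:.1f} GB/s, {per_hour / 1e12:.0f} TB/hour")
# Per camera: 187 MB/s
# Whole rig:  7.5 GB/s, 27 TB/hour - compression brings that down,
# but "terabytes every hour" survives comfortably.
```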

That data must be knit together, processed and composed into a hologram – the computational intensity of photogrammetry, multiplied across terabytes of full-HD video shot at thirty frames a second. It's the sort of problem even a data centre would struggle with.

Founder and CEO of videogrammetry startup Humense, Scott O'Brien, reckons the princess's ship would be involved. "If we have depth sensors that know about each other, and correct for each other, with processing going on in R2D2, we'd have the resources to capture videogrammetry." R2D2 would utilise some of the computing capacity on board Tantive IV (the princess's captured transport), tapping into the vessel's surveillance cameras to capture the bits of the princess facing away from the droid.

With current tech, R2D2 would still be waiting for the progress bar to fill while Vader did his worst... Pic by Shutterstock

It could be done – but how long would it take to post-process her message? Too long: Vader would capture the droids while R2 waited for the progress bar to fill. Quickly we go from New Hope to No Hope.

Recently, boffins at Microsoft Research showed how to combine Kinect-like depth-sensing cameras with photographic capture to generate videogrammetry in real time. A depth camera – already on Google's Tango devices, and rumoured to be a feature of the next iPhone – provides much of the information generated by conventional photogrammetry calculations. Although the result is neither as pretty nor as convincing as compute-intensive videogrammetry, it's here today – and fast enough for the princess to record her message, then dispatch R2D2 to Tatooine.
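The reason depth cameras short-circuit so much of the work: every depth pixel back-projects directly to a 3D point, with no feature matching or triangulation required. A minimal sketch, assuming known pinhole intrinsics and a depth map in metres (the random array stands in for real sensor data):

```python
import numpy as np

# Back-projecting a depth map to a point cloud - the step a depth
# camera gives you almost for free. fx, fy, cx, cy are assumed
# intrinsics; `depth` is an assumed HxW array of distances in metres
# (zeros where the sensor saw nothing).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
depth = np.random.uniform(0.5, 4.0, (480, 640))  # stand-in for sensor data

v, u = np.indices(depth.shape)          # per-pixel row/column coordinates
z = depth
x = (u - cx) * z / fx                   # pinhole back-projection
y = (v - cy) * z / fy
cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)
cloud = cloud[cloud[:, 2] > 0]          # drop empty readings

print(f"{len(cloud)} points, one per valid depth pixel, in a single pass")
```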

