Moviemakers on a quest for their real-time 3D Holy Grail
Windows or Linux to build a film frame by frame, server by server?
The massive blockbuster Avatar reintroduced 3D to the 21st century.
The big difference from the previous 3D invasions was digital technology. Optically and physiologically the principle was the same: pairs of frames, representing a left-eye view and a right-eye view, are presented (near-) simultaneously to the viewer, and the brain fuses them into a single 3D picture.
Animation's come a long way from Toy Story to Avatar
But not, apparently, effortlessly. A study at California State University showed that watching a 3D movie tripled the risk of headaches and eyestrain among the 400 movie-goers taking part. The 3D cinema image is dark - light is lost through the 3D lens and through the polarised glasses. More fundamental is the "convergence/focus issue" discussed by Oscar-winning movie editor Walter Murch in Roger Ebert's Chicago Sun-Times column, uncompromisingly entitled "Why 3D doesn't work and never will. Case closed".
But Jim Mooney welcomes 3D if only because it doubles the number of frames his customers require him to render. He's the director of Render Nation, a managed video render farm in Liverpool's Innovation Park, founded in 2005 as one of very few at the time offering third-party rendering to the video industry.
There are several ways of arriving at the kind of 3D that doubles Mooney's business and Murch is so roundly condemning. Animated movies these days are typically created in a 3D world of textures, perspective and lighting, from which it's relatively simple to derive either a 2D or a 3D final print.
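For an animated feature the "relatively simple" part is literal: the scene already exists as 3D geometry, so a 2D print comes from one virtual camera and a 3D print from two cameras offset by a small interaxial distance. A minimal sketch of that idea, assuming a plain pinhole camera model (the function names and figures here are illustrative, not any studio's actual pipeline):

```python
def project(point, cam_x, focal=35.0):
    """Pinhole projection of a 3D point (x, y, z, in mm) onto the image
    plane of a camera at (cam_x, 0, 0) looking down the +z axis.
    Returns image-plane coordinates in mm."""
    x, y, z = point
    return (focal * (x - cam_x) / z, focal * y / z)

def stereo_pair(point, interaxial=65.0):
    """Render the same point through a left and a right camera,
    separated by a typical eye-distance interaxial of 65 mm.
    The horizontal offset between the two results is the disparity
    the viewer's brain turns back into depth."""
    half = interaxial / 2.0
    return project(point, -half), project(point, +half)
```

Rendering the whole frame twice from these two viewpoints is exactly why, as Mooney notes, stereo doubles the render bill.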
Real-world photography involving actors is a very different proposition. One way is to fake it. Shoot in conventional 2D, and then, as the movie industry saying goes, "fix it in post production". Dimensionalising is the industry's polite term for this. But doesn't it make for lousy 3D?
"Yes, dimensionalised movies have been slated," says Steve Prescott, group director of technology at Soho's post-production house, Framestore. "But it's usually because they've been rushed through post." Although some TVs purport to dimensionalise conventional 2D on the fly, it's not a job you can fully entrust to software.
Prescott says it takes an outfit like Prime Focus, a huge post-production house with a 65,000-square-foot HQ in Mumbai, India. "They have thousands of people there, mapping out objects and projecting them onto models, converting 2D to 3D," he said.
Preferably 3D is shot with two side-by-side cameras. "Using a stereo camera rig - an unwieldy thing that takes a lot of setting up," says Prescott. "Get the angle that the cameras are converging at wrong and the shoot can be unsalvageable. A bit of a shock for people who think everything can be fixed in post."
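Why a wrong convergence angle can be unsalvageable comes down to simple geometry: the convergence distance fixes which depth lands on the screen plane, and everything else acquires a positive or negative on-screen disparity relative to it. A rough sketch of that relationship for a parallel rig whose images are shifted to converge at a chosen distance (the formula and parameter names are an illustrative approximation, not a description of Framestore's rig):

```python
def screen_disparity(interaxial_mm, focal_mm, convergence_m, depth_m):
    """Approximate horizontal disparity (mm, on the sensor) of a point at
    depth_m, for cameras interaxial_mm apart converged at convergence_m.
    Zero disparity: the point sits on the screen plane.
    Positive: behind the screen. Negative: in front of it."""
    return interaxial_mm * focal_mm * (
        1.0 / (convergence_m * 1000.0) - 1.0 / (depth_m * 1000.0)
    )
```

Because the convergence distance is baked into every frame at the moment of shooting, shifting it later means re-deriving depth for the whole shot, which is why this particular mistake resists being fixed in post.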
Andy Howard, Prescott's chief engineer at Framestore, confirms that 3D for theatrical presentation seems to be tailing off. The big switch has been about digital rather than about 3D.
"It's changing so fast," he says. "In February of this year, 70 per cent of the material delivered to us by clients for post production work was still being shot on 35mm film." By September the ratio had completely flipped: "Now 70 per cent is coming through from electronic cameras."
The workflow inside Framestore has always been digital, but now the need to run incoming 35mm film through a telecine is fading. Another example of the receding tide of 3D leaving digital in its wake?