Apple seeks patent on reality
3D revolution in the head
Head tracking is at the core of Apple's latest 3D patent, prosaically entitled "Systems and Methods for Adjusting a Display Based on the User's Position". We say 'latest' patent because Apple has been investigating 3D interfaces for some time. Almost exactly one year ago, for example, The Reg reported on an Apple patent filing entitled "Multi-Dimensional Desktop". That filing, however, didn't benefit from head-tracking, and the user interface that it described was merely a 2D representation of a 3D space - the objects in it maintained their spatial relationships no matter where your head was.
Today, you view a window-filled display from the center, but...
...in the future you may only need to tilt your head to see the same windows in a different spatial relationship.
Thursday's patent - which was originally submitted in June 2008 - goes much further. It creates an immersive 3D representation of a space complete with objects that can either sit within that space or extend beyond it and appear to the user to be "outside" the display - that is, closer to the user than the physical surface of the display itself.
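The depth cue behind that effect is motion parallax: as the tracked head moves, objects at different depths shift across the screen by different amounts. Here's a minimal sketch of that geometry, assuming a simple pinhole-style model with the screen as a window onto the scene - the function name, units, and numbers are illustrative, not from the patent:

```python
# Head-coupled parallax: project a point sitting behind the screen
# plane onto the screen, as seen from a laterally displaced head.
# All names and units here are illustrative, not Apple's.

def parallax_shift(head_x, object_depth, screen_distance=1.0):
    """On-screen horizontal position of an object centred behind the
    screen at `object_depth`, viewed by a head displaced `head_x`
    from centre at `screen_distance` from the screen plane."""
    # Intersecting the eye-to-object ray with the screen plane gives
    # a shift proportional to depth: the deeper an object sits behind
    # the screen, the more its projection follows the head - the
    # classic "looking through a window" cue.
    return head_x * object_depth / (screen_distance + object_depth)

# Move the head 0.2 units to the right: a shallow object barely
# moves, while a deep one slides noticeably toward the head.
near = parallax_shift(0.2, object_depth=0.1)
far = parallax_shift(0.2, object_depth=2.0)
```

Re-rendering the scene with these per-depth shifts on every head-position update is what makes the windows appear to occupy a real volume rather than a flat 2D projection.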
And unlike in Lee's Carnegie Mellon video, the user does not necessarily need to wear or be equipped with some sort of signal-generating or receiving device. As the filing states in pure patentese, "The sensing mechanism may be operative to detect the user's position using any suitable sensing approach, including for example optically (e.g., using a camera or lens), from emitted invisible radiation (e.g., using an IR or UV sensitive apparatus), electromagnetic fields, or any other suitable approach."
The inclusion of a camera among those possible sensing devices leads to a second trick beyond modifying displayed objects' spatial relationships: moving images of elements of the user's environment into the display itself.
3D charts may actually have a function other than decoration...
...if you can easily examine them from different angles
As the filing puts it, the system "may detect the user's environment and map the detected environment to the displayed objects." In this way the system could, for example, display a reflective object with the user's environment mirrored on its surface.
In addition, a database of object-oriented metadata could prompt the system to perform transformations upon a recognized camera-viewed object based on predetermined parameters. You could, for example, instruct your computer that when it recognized you it should perform a mild Gaussian blur to smooth your wrinkles, apply a translucent #E7B9A0 color layer to give your skin a just-back-from-the-islands glow, then display your immersive 3D software self to your actual 3D liveware self as an onscreen avatar with digitally enhanced wholesomeness. ®
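Those two cosmetic transformations are standard image operations. A minimal sketch of each, in plain Python on toy pixel data - the kernel, blend weights, and the #E7B9A0 example shade are illustrative stand-ins, not anything Apple specifies:

```python
# Illustrative versions of the two transformations described above:
# a mild Gaussian blur, then a translucent colour overlay.

def gaussian_blur_3x3(img):
    """Apply a 3x3 Gaussian kernel (1-2-1 binomial weights) to a 2D
    grayscale image given as a list of lists; edges are clamped."""
    h, w = len(img), len(img[0])
    kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # weights sum to 16
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx] * kernel[dy + 1][dx + 1]
            out[y][x] = acc / 16
    return out

def tint(pixel, overlay=(0xE7, 0xB9, 0xA0), alpha=0.25):
    """Alpha-blend a translucent colour layer (here the article's
    example shade #E7B9A0) over one RGB pixel."""
    return tuple(round((1 - alpha) * p + alpha * o)
                 for p, o in zip(pixel, overlay))
```

A production pipeline would run these on the GPU over the live camera frame, but the per-pixel arithmetic is the same: convolve to smooth, then composite the tint with a small alpha.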