Microsoft Research man: It all starts with touch

Windows 8 just the beginning of 'natural' sensory input trend

Windows 8 is Microsoft’s addition to the growing tide of touch input for computers. Redmond's new OS joins Apple's iOS and Google’s Android in moving beyond keyboard and mouse and into touch, slide, swipe and pinch.

Touch has silently become part of the DNA of computing. Now, though, we can expect manufacturers to push the boundaries on other forms of input.

“Touch is the beginning; we are talking about natural interaction and ultimately about new modalities of interaction beyond mouse and keyboard,” Microsoft researcher Shahram Izadi told The Reg at the Touch Gesture Motion conference in London earlier this month.

Development cycles have shortened and will continue to do so as companies work to gain an edge over one another.

“We are seeing technologies coming to productisation more quickly because companies want to have the edge, so from a consumer point of view it’s an exciting time,” says Izadi, who jointly leads the interactive 3D technologies group at Microsoft Research Cambridge.

Izadi points to Kinect, Microsoft’s hands-free motion controller for the Xbox – which set a record as the fastest-selling consumer device – as one example of a new form of input that moved quickly from Microsoft Research to product.

Kinect's depth-sensing technology is believed to date from 2005. Microsoft announced the product in 2009, said in March 2010 that it had licensed chip and reference designs from sensing and recognition specialist PrimeSense, and released the Kinect in November 2010. Microsoft Research was involved in refining sound, facial and body recognition, with the Cambridge lab working on body tracking.

Of mice and men...

It took nearly 20 years for PC makers to cotton on to the idea of the mouse, which arrived with the first Macs and PCs in the early 1980s. The first mouse was demonstrated by Douglas C Engelbart and his Stanford Research Institute boffins on 9 December 1968.

Since then, the mouse and keyboard have defined how we tell a computer what to do. The iPhone changed that with touch, and then came the iPad. Now, even Microsoft is attempting to break out of the 30-year-old mould which it helped create.

Touch took a while to percolate. Legend has it that Steve Jobs had the iPhone under wraps for years as he waited for the technology – and the price at which it could be sold – to catch up with the manufacturing process. Work on Microsoft’s PixelSense interactive touch table began in 2001, and the product shipped seven years later as Surface. Google started work on Android in 2003 and the first HTC handset appeared in 2008.

“Think about early work on multi-touch: it was in the early 1980s with people like [Microsoft’s] Bill Buxton and others. It took at least 20 years for the consumer and product world to catch up with that research. Doug Engelbart, when he invented the mouse, it took about 20 years to become commercialised,” Izadi said.

Touch-based input has existed for years, but as a niche option: usually on screens 30 inches or larger, using infrared sensors or cameras to detect input. Until now, it has been used in areas such as advertising or entertainment – just like Microsoft’s PixelSense, which was picked up by hotel chains.

And, as touch becomes more accepted, Izadi thinks the next big thing will be other forms of natural interaction – 3D interaction systems like Kinect. “I think one of the main appeals of touch is it’s the beginning of this more natural way of interacting with computers. If we go down that path it leads us to think about other forms of gestural interfaces – 3D interaction as well as touch and other modalities of interaction.”

If there’s a challenge, Izadi reckons it comes in the form of displays; despite advances in input, screens remain resolutely flat. The display side, Izadi reckons, is “dragging behind”. One idea that’s the stuff of sci-fi is the volumetric display – where objects that would usually be on a screen are represented in 3D using light or a laser.

“What’s the display experience?” Izadi asks. “Is it going to be a flat display or do we have some other way of presenting the content – such as 3D displays, auto-stereo displays, volumetric displays? A lot of these 3D display technologies are lagging behind the input sensing technology at the moment. There aren’t many techs out there that are being adopted in mobile phones and I think the world is going to move beyond interacting with flat.”

Tablets and phones might have inserted touch into the DNA of popular computing, but they’ve also set expectations and, like the mouse and keyboard, habitual ways of interacting with devices ... and that could take some changing. ®
