Google aims Goggles at Apple's iPhone
Can frenemies cooperate?
Hot Chips Google Labs' visual-search technology, Google Goggles, should be available for iPhone users later this year.
"We're working on an iPhone version, and hope to have it out by the end of the year," David Petrou, a Google staff engineer working on the Goggles project, told his keynote audience at Monday's Hot Chips conference at Stanford University in California.
Currently, Google's "Search by Sight" service is available only on Android clients, as it has been since it was introduced last December.
Petrou said that porting Goggles to clients other than Android is no mean feat. "It's actually a significant penalty [having] different code bases," he said.
But there is an alternative. "You write web apps," said the Googly dev, echoing his company's web-centric view of the world.
A web app for Goggles, Petrou said, isn't currently an option, even with HTML5's enhanced media-capture abilities. "There is a new part of HTML5," he said, "that allows you to acquire an image from a camera. And that's really nice and really useful, but we don't think it's sufficient for something like Goggles that needs very fine control over the camera.
"The unfortunate reality is that we have to write client apps," he said. "If something were a web app, we could change and test on one per cent of our traffic, just like that."
And so Goggles will have to crawl out of its Android exclusivity by way of those pesky, time-consuming, hard-to-test client apps — and the iPhone will be the first to benefit.
And although Goggles is a technology worth using, Petrou reminded his audience that it's still in the developmental phase. "When it works, it's very useful," he said, "but it doesn't always work."
That said, the technology — based on Google's CONGAS image-recognition engine — has acquired a database of approximately a billion images to work with, and can return a specific result on approximately 33 per cent of the queries it receives.
Considering the complexity of Google's goal of Goggles being able to identify everything, every time, a one-third success rate — while not exactly chopped liver — leaves a lot of room for improvement.
"We still have a very, very long way to go before we meet our universal goal," said Petrou. And later this year they'll add a huge cohort of Jobsian test subjects to help them on their way.
That is, if the App Store police let Goggles into their sacred store. ®
I am waiting ..
for a Google-powered guitar.
Yes, you got it - G-string..
Why not write a client app to control the camera, then upload to the web app for analysis and search? It has to go online for the search anyway, so it's not adding /that/ much overhead, provided it keeps the image at a sensible compression (though obviously not so heavy as to render the analysis useless).
That way the only client apps needed would be thin camera controllers, with everything else hooking into a common web API, keeping the divergent code bases to a minimum; a rough sketch of the idea follows below.
Simples, as a certain meerkat would say.
Though before they go and chase Apple users, could you get the one running on my Froyo Desire to work a little better first 8)
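For what it's worth, the split this poster proposes is easy enough to sketch. Below is a rough TypeScript version, with a hypothetical endpoint URL and response shape standing in for whatever Google actually runs: the client recompresses the photo to keep the upload small, then hands everything else to the server.

```typescript
// Rough sketch of the "client captures, web API analyses" split.
// The endpoint URL and response shape are hypothetical, not a real
// Google API.
const SEARCH_ENDPOINT = "https://example.com/goggles/search";

// Recompress the captured photo so the upload stays small, but not
// so small that the analysis becomes useless.
async function recompress(photo: Blob, quality = 0.7): Promise<Blob> {
  const bitmap = await createImageBitmap(photo);
  const canvas = document.createElement("canvas");
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0);
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (b) => (b ? resolve(b) : reject(new Error("encode failed"))),
      "image/jpeg",
      quality
    )
  );
}

// Ship the compressed image to the server and return its guesses.
async function searchBySight(photo: Blob): Promise<string[]> {
  const body = new FormData();
  body.append("image", await recompress(photo), "query.jpg");
  const res = await fetch(SEARCH_ENDPOINT, { method: "POST", body });
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return res.json(); // e.g. a ranked list of matched labels
}
```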
As I said
Some level of functionality must be possible.
My Android phone has a storage card. Why can't I take a picture of something I saw in a bookshop or supermarket and search for it later when I get home? Why can't Goggles let me open a picture I've taken with the camera app and do the same thing? It's not rocket science; a sketch of the idea follows below.
As for maps, the same issue applies. If I ask for directions before I leave the house, why won't the app let me save the maps and directions and use them offline? As I drive along it can still advise me which turn to take and so forth. If I diverge from the directions and it doesn't know where I am, it can still tell me I'm half a mile west of the route, or similar, to give me some clue how to get back.
As it is, it does sweet FA, which makes it pretty crap.
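The "snap now, search later" half of this is sketchable too. Below is a rough TypeScript take on an offline query queue, reusing the hypothetical searchBySight() from the previous sketch; a real client would persist the queue to that storage card rather than hold it in memory.

```typescript
// Rough sketch of a "snap now, search later" queue. Reuses the
// hypothetical searchBySight() from the previous sketch; nothing
// here is a real Goggles API. A real client would persist `pending`
// to storage so queued photos survive a restart.
const pending: Blob[] = [];

// Called whenever the user snaps a picture, online or not.
function queueQuery(photo: Blob): void {
  if (navigator.onLine) {
    searchBySight(photo).then(showResults).catch(() => pending.push(photo));
  } else {
    pending.push(photo); // no coverage: hold the image for later
  }
}

// When connectivity returns (say, back home on wifi), drain the queue.
window.addEventListener("online", async () => {
  while (pending.length > 0 && navigator.onLine) {
    const photo = pending.shift()!;
    try {
      showResults(await searchBySight(photo));
    } catch {
      pending.unshift(photo); // still flaky: put it back and stop
      break;
    }
  }
});

function showResults(labels: string[]): void {
  console.log("Goggles thinks this might be:", labels);
}
```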
A common misquote: http://www.youtube.com/watch?v=juFZh92MUOY
Isn't that how it works anyway? I certainly notice how long the "recognition" takes in 2G areas vs 3G/Wifi coverage on my Hero and the Desire.
Take the picture locally, beam it to Google's cloud (whilst displaying some mystical blue bar) and let the clusters do the analysis before spitting the result back to the end user. I believe this is also how they do Voice Search?
Your final point, though, is correct: it needs a lot of improvement. Whilst it can often get the 'topic' of the item, it doesn't seem to be able to recognise the item specifically, not without a barcode anyway.