Watson? Commercial – not super – computer
Off-the-shelf gear ‘gets’ humans
Now that IBM’s Watson has pounded the best human Jeopardy competitors into a fine slurry, let’s take stock. Our human proxies took their ass-kicking in good spirits, with Ken Jennings writing on his ‘Final Jeopardy’ card, “I for one welcome our new computer overlords.” (For the sake of adding a bit more inane trivia, the correct Jeopardy response to his phrase would be “Who is Kent Brockman of The Simpsons?”)
And I believe I’ve found the best Register reader comment so far: In response to this story’s subtitle, “Robots will keep us as pets,” came this clever bit: “There already are humans kept as pets by machines - they're called ‘iPhone owners!’” LinkOfHyrule, we’re not worthy.
It’s gratifying to see so much coverage of a tech story in the non-tech media, though some of it is frustrating as well. A good many of our fellow carbon-based life forms refer to Watson as a “supercomputer” and laud its ability to do lightning-fast “searches”. Neither of these describes what Watson is or what Watson does.
First of all, it’s not a supercomputer. It’s a commercial system – or rather, a bunch of commercial systems lashed together for parallel processing purposes. The hardware is readily available POWER-based gear that can run either IBM’s AIX Unix operating system or Linux.
It’s the same box that’s running commercial apps like SAP and Oracle in thousands of companies. Watson is made up of 90 four-socket IBM Power 750 systems, for a total of 360 eight-core POWER7 processors running at 3.55GHz, with 16TB of memory across the cluster. The systems are connected together via 10GbE networking.
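The core count follows directly from the server spec above; a quick back-of-the-envelope check (figures are the ones quoted in this article, nothing IBM-internal):

```python
# Watson's compute, from the published server configuration.
servers = 90           # IBM Power 750 systems
sockets_per_server = 4 # four-socket boxes
cores_per_socket = 8   # POWER7 is an eight-core chip

processors = servers * sockets_per_server   # total POWER7 chips
cores = processors * cores_per_socket       # total cores

print(processors, cores)  # 360 processors, 2880 cores
```

In other words, roughly 2,880 commodity-server cores doing the heavy lifting, not an exotic one-off machine.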
There is also a misconception about how Watson comes up with answers – it’s not ‘searching’ for them as we typically think of the search process. You can’t do that with many Jeopardy questions due to their indirect nature.
After Watson is asked a question, it analyzes the question and topic, pulls hundreds of possible “candidate” answers from hundreds of sources, and then begins hypothesis generation. Thousands of pieces of “evidence” are sorted to weigh the validity of the candidate answers.
These candidate answers and their “proofs” are scored and synthesized using deep analysis algorithms to create answer “models” from which the final answer choices – and Watson’s confidence in each – are derived. Watson, of course, goes with the “highest-confidence” response. On Jeopardy, this process took place before the human contestants who knew the answers could hit their buzzers.
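The pipeline described above — generate candidates, weigh evidence, answer only when confidence is high enough — can be caricatured in a few lines. To be clear, this is a toy sketch, not IBM’s DeepQA code: the keyword-overlap scoring and the `confidence_threshold` parameter are invented stand-ins for Watson’s far deeper analysis.

```python
# Toy sketch of a DeepQA-style pipeline. Illustrative only: the scoring
# here is crude keyword overlap, nothing like Watson's real algorithms.

def generate_candidates(clue, sources):
    """Pull candidate answers from every source mentioning clue keywords."""
    keywords = set(clue.lower().split())
    candidates = set()
    for source in sources:  # each source maps candidate -> supporting text
        for candidate, text in source.items():
            if keywords & set(text.lower().split()):
                candidates.add(candidate)
    return candidates

def score_candidate(candidate, clue, evidence):
    """Weigh each piece of evidence for a candidate (crude word overlap)."""
    clue_words = set(clue.lower().split())
    score = 0.0
    for passage in evidence.get(candidate, []):
        score += len(clue_words & set(passage.lower().split()))
    return score

def answer(clue, sources, evidence, confidence_threshold=1.0):
    """Rank candidates by evidence score; buzz in only when confident."""
    ranked = sorted(
        ((score_candidate(c, clue, evidence), c)
         for c in generate_candidates(clue, sources)),
        reverse=True,
    )
    if ranked and ranked[0][0] >= confidence_threshold:
        return ranked[0][1], ranked[0][0]
    return None, 0.0  # confidence too low: don't buzz in

# Hypothetical mini knowledge base for illustration
sources = [{"paris": "paris is the capital of france",
            "lyon": "lyon is a city in france"}]
evidence = {"paris": ["paris capital france"], "lyon": ["lyon france"]}
best, confidence = answer("capital of france", sources, evidence)
print(best, confidence)  # paris 2.0
```

The point of the threshold is the same trade-off Watson faced on stage: buzzing in with a low-confidence answer costs money, so staying silent is sometimes the highest-value move.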
No supercomputer required
What is coming across in the media, fortunately, is why all of this matters: real-time answers that are accurate despite the human frailty of the information we provide and the questions we ask. We human types are ambiguous. We have nearly endless ways to say the same thing. Our statements and questions are unstructured, and must be interpreted through the context in which they’re made.
Computers want things to be black or white. They have to go to great lengths (as we can see above) to be able to figure out the meanings in human statements or questions. Humans are great at processing ambiguous, unstructured data; our brains are wired to see patterns and put together theories as to why those patterns occur. Computers are great at doing the grunt work of sifting through masses of evidence to either support or disprove our theories.
This Jeopardy exercise isn’t about computers besting humans. It’s really about how collections of computing hardware and software can be optimized to understand humans better, and to understand what we’re trying to get them to do. A lot of time, effort, and money is expended in getting real-world data into a form where it can be understood and processed by digital devices like computers. Watson is the best recent example of a machine crossing over the divide between human and machine-style thinking.
This means that in the future, we’ll be able to spend more time on actual human work and less time on generating digital-compatible data to feed the machines. This will pay concrete dividends even in the near term. Information from thousands of patients’ vital signs and millions of clinical reports and doctors’ notes could be synthesized to provide diagnoses that aren’t guesswork.
Businesses can make sense of staggering amounts of data that have been “noise” until now. Who knows – maybe our consumer information and requests and incoherent rants could be analyzed in such a way that we get actual help from a help desk. No supercomputer required. ®