IBM pits Watson super against humanity
This time, we're in Jeopardy
"This downtown boy met and married an uptown girl." On the popular Jeopardy game show, such a statement functions like a question: a piece of input data that expects one and only one piece of output data, in this case an actual question: "Who is Billy Joel?"
While playing the Jeopardy game is not all that difficult for human beings, being good at such trivia is decidedly non-trivial. For a supercomputer - even one crammed full of data - being able to suss out the double-entendres and little nuances of the statements and come up with the question is a very tough challenge. According to researchers at IBM, playing Jeopardy is in fact one of the grand challenges of computing - which is why a BlueGene massively parallel supercomputer, nicknamed Watson, is being trained to play the game and to take on people live in the studio in 2010.
The Jeopardy challenge is akin to the Deep Blue parallel supercomputer taking on chess champion Garry Kasparov. Deep Blue lost in 1996, but it won a six-game match by one point (two wins for Deep Blue, one for Kasparov, and three draws) the next year.
Back in the mid-1990s, IBM was a joke in the supercomputing racket: Cray and Silicon Graphics were building the truly innovative and powerful machines, and cheap x86-Linux clusters were just starting to get traction. IBM's Power2 and PowerPC processors were 32-bit chips with nothing much impressive about them, and its 64-bit PowerPC 620 and 630 parts would not come out for several years and were utter failures when they did. (Ironically, the 64-bit PowerPC AS chips that were tucked inside of AS/400 minicomputers in the summer of 1995 were excellent processors, but IBM's Unix server nerds didn't see the wisdom of using these chips until 1997. Those AS/400 Power chips form the basis of IBM's advances in the Unix space.)
So back when Deep Blue was announced, to say that IBM had something to prove is a bit of an understatement. IBM had to show that it could do parallel supercomputing, and the chess match with Kasparov was probably the smartest PR stunt that Big Blue has pulled since it was incorporated in 1911.
With the Watson machine and the Jeopardy challenge, IBM doesn't have to prove it knows supercomputing. In the past decade, IBM has put its system engineers, scientists working at IBM research facilities around the globe, and numerous supercomputing experts from government and academic labs to work building a portfolio of different parallel computing platforms, ranging from the massively parallel BlueGene to the hybrid x64-Cell blade architecture embodied in the "Roadrunner" machine to giant clusters of its commercial Power Systems, such as the future "Blue Waters" Power7 monster.
What IBM is trying to prove with the Watson supercomputer - and the question answering (QA) software it is developing to run atop it - is that computers can be pumped full of textual data from many different sources and answer questions. Flipping question and answer around, as Jeopardy does, changes the basic problem very little, according to David Ferrucci, Watson project lead and the principal researcher working on the Watson QA software and the iron that will support it.
"We're trying to get the computer to deal with natural language more effectively," says Ferrucci. "Since Jeopardy is such a large domain, it is like we are trying to get the computer to study. Of course, the challenge is that the game has such a broad domain and people play with such confidence."
The Watson QA system software has been in development for the past two years and is based on open source code created by IBM's Software Group called Unstructured Information Management Architecture (UIMA), which is available as an Apache project. As the name of this code (a framework, really) suggests, UIMA is designed to bring some order to unstructured data so it can be analyzed and that analysis can be used to drive decisions.
It is the backbone of something that might be called an answer engine (something that processes all kinds of unstructured and structured data and comes up with an answer to a direct question) as opposed to a search engine (something that collects data and indexes it in such a way that, with multiple search terms, you can find bits of structured and unstructured data that you might use to come up with an answer). Confidence is what differentiates an answer engine from a search engine, and it is also what distinguishes the Jeopardy player who hits the buzzer quickly, supplies the question to a statement, and therefore wins the money.
By the way, the Watson QA system software is not based on or derived from the InfoSphere Streams software or the System S streaming server that IBM announced as a prototype at Toronto Dominion Bank three weeks ago as a very fast options trading system that can take in data from many different data streams - stock tickers, news feeds, video, etc. - and use it to drive stock buying and selling decisions.
While the System S and Watson QA machines use different software, they are both based on the same hardware: the BlueGene/P massively parallel supercomputer, whose processor card has four 850 MHz PowerPC 450 cores on a single chip, linked by symmetric multiprocessing, and 2 GB of DDR2 main memory. A single rack of the BlueGene/P has 1,024 of these four-core processor nodes and is rated at around 13.9 teraflops of number-crunching performance. Of course, not all of the work the Watson QA machine does will be crunching numbers; much of it will be processing text and organizing it to speed up how questions to Jeopardy statements are created.
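That 13.9 teraflops rating is just peak arithmetic, and it checks out on the back of an envelope, assuming the PowerPC 450's double-pipeline floating point unit retires four flops per core per clock:

```java
public class BlueGenePeak {
    // Peak teraflops = nodes x cores per node x clock (GHz) x flops per core per cycle / 1000
    static double peakTeraflops(int nodes, int coresPerNode, double clockGHz, int flopsPerCycle) {
        return nodes * coresPerNode * clockGHz * flopsPerCycle / 1000.0;
    }

    public static void main(String[] args) {
        // One BlueGene/P rack: 1,024 quad-core nodes at 850 MHz, 4 flops per core per cycle
        System.out.printf("%.1f teraflops%n", peakTeraflops(1024, 4, 0.85, 4));
    }
}
```

That works out to 13.9264 teraflops per rack, matching IBM's round figure.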
The Watson QA system is based on various forms of textual data (books and other types of authoritative data) that will be pumped into the system and organized based on the general Jeopardy question categories. Most of the system is programmed in Java, according to Ferrucci, but there is a smattering of C++ where performance is critical and Prolog is also used for some of the rules relating to textual analysis.
The parallel processing in the system is used not only to find bits of relevant data quickly, but also to score those bits as evidence supporting a candidate question for a Jeopardy statement, to come up with the resulting question, and then to assign a score reflecting how confident the Watson QA system is in its answer.
Sometimes, the Watson machine will not hit the buzzer because it is not confident in the results it gets, and sometimes, it will be confident in the answer and, like the rest of us from time to time, be wrong. One of the tricks, according to Ferrucci, is to determine the optimum amount of information that allows the Watson QA system to come up with the right questions. Too much data (particularly if it is conflicting) can be as bad a thing as too little.
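That buzz-or-hold decision boils down to thresholding on confidence. Here's a minimal sketch in Java - the class, the scores, and the threshold values are all illustrative, not IBM's actual code:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class BuzzDecision {
    // A hypothetical candidate response, carrying a confidence score from evidence scoring.
    static class Candidate {
        final String question;
        final double confidence;
        Candidate(String question, double confidence) {
            this.question = question;
            this.confidence = confidence;
        }
    }

    // Hit the buzzer only if the best-scoring candidate clears the confidence
    // threshold; otherwise stay silent rather than risk a confidently wrong answer.
    static Optional<String> decide(List<Candidate> candidates, double threshold) {
        return candidates.stream()
                .max(Comparator.comparingDouble((Candidate c) -> c.confidence))
                .filter(best -> best.confidence >= threshold)
                .map(best -> best.question);
    }

    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
                new Candidate("Who is Billy Joel?", 0.92),
                new Candidate("Who is Bruce Springsteen?", 0.41));
        System.out.println(decide(candidates, 0.80)); // confident enough: hit the buzzer
        System.out.println(decide(candidates, 0.95)); // not confident: sit this one out
    }
}
```

As in the real game, raising the threshold trades missed clues for fewer embarrassing wrong answers.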
Just like the Deep Blue chess-playing supercomputer stored a gazillion possible chess games and moves and sorted through those for the best possible moves based on where it was in a real game, the Watson QA system is being fed with likely statements Jeopardy game host Alex Trebek will make and the possible questions relating to those statements. (In some cases, in fact, past Jeopardy statements are being pumped into the machine.) The UIMA framework is what is used to do deep analytics on the raw data ahead of time, and it is also used to parse the statements to "understand" them.
The exact configuration of the Watson QA super has not yet been determined, and it is not clear if the machine will be equipped with electronic ears to hear and speakers to talk as it responds. The machine will be put in the Jeopardy studios and will be disconnected from the Internet - no cheating with Google allowed by humans or computers.
As is always the case with Big Blue, the Jeopardy challenge is ultimately not just about some free PR, but about doing some big business. The code behind the Watson QA system will undoubtedly end up front-ending a lot of enterprise applications, wherever decision makers want a little assistance and maybe a second opinion on a decision they want to make. In many cases, it is hard to imagine a computer doing worse than what some living and breathing managers have done to businesses, but these people are our relatives and we don't want them living with us.
Here's hoping that Watson QA gets its clock cleaned by some reigning Jeopardy champs. I am putting my money - and perhaps humanity's future - on Brad Rutter, the five-time and undefeated Jeopardy champ who also won three Jeopardy tournaments. Rutter is a self-described "slacker" from Lancaster, Pennsylvania, who dropped out of Johns Hopkins University and, aside from working in a local record store, has made $3.3m playing Jeopardy. I get the feeling I am rooting for the underdog here.
IBM is not pounding its chest just yet on its ability to win the Jeopardy throw-down. But after two years and by throwing the gauntlet down this morning, Big Blue must be pretty confident. So can Watson QA win at Jeopardy? "We'll see," says Ferrucci with a laugh. "We certainly wouldn't be in the game if we didn't think we had a good chance." ®