IBM Watson dishes out 'dodgy cancer advice', Google Translate isn't better than humans yet, and other AI tidbits
Machines aren't really better than us at much
Roundup Hello, here's a short roundup of this week's news and announcements in AI, including worrying news for cancer sufferers, good news for human linguists and some new job opportunities.
IBM Watson cancer fail: IBM Watson has made several “unsafe and incorrect treatment recommendations” to cancer doctors using the technology, according to leaked internal documents.
An investigation by Stat News, which obtained the files, revealed the problem lies in how the AI system was trained. Rather than learning to recommend treatments from real cancer cases, IBM Watson was trained on synthetic records created by the company’s engineers and by doctors working at the Memorial Sloan Kettering (MSK) Cancer Center in New York City, USA.
Medical data is difficult to obtain for privacy reasons. However, relying on fake data meant the cloud service's recommendations were based on a few experts' opinions rather than real evidence.
“This product is a piece of shit,” a doctor at Florida’s Jupiter Hospital said to IBM, according to the documents seen by Stat News. “We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.”
To make matters worse, it looks as though top execs lied by telling potential customers the advice was generated from real cases, and that it was well-received by doctors.
The Register asked IBM why it pressed ahead with fake data, but it declined to comment. Before these documents were revealed by Stat News, it was reported that IBM had laid off a bunch of IBM Watson employees, and that it was struggling to win hospital contracts.
AI translators aren’t as good as human ones yet: There are technical reasons why Google’s Neural Machine Translation (NMT) model doesn’t quite trump human translators yet.
A post written by Sharon Zhou, a graduate student in Computer Science and Classics from Harvard University, explained it boils down to issues with “reliability, memory, and common sense.”
Accuracy rates in translation work are hard to judge – due to its subjective nature – and there are often times when Google Translate gets it completely wrong. There’s a whole subreddit dedicated to weird translations that seem to break the system – remember last week, when the word "dog" written 18 times and translated from Yoruba produced a weird apocalyptic warning?
Here's the main rub: the long short-term memory (LSTM) networks used in NMT models, such as Google's technology, can only hold a limited amount of information at any one time. They work sentence by sentence, so the thread of context is lost when translating longer passages of text.
“NMT systems have really acute short-term memory loss," wrote Zhou. "Currently, we have built our systems aimed at translating one sentence at a time. As a result, they forget information gained from prior sentences."
And here's the second rub: compared to human translators, machines have almost zero common sense. They don’t understand language in the same way that humans do, and have no knowledge of the world. So it’s difficult for them to follow conversations in text in the same way, and translate meaning. Translation software has to do more than match words from one dictionary to another with a little contextual glue.
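The sentence-at-a-time memory problem is easy to see in miniature. Here's a toy Python sketch – nothing to do with Google's actual system, and the gendered-pronoun "language" and word lists are entirely made up for illustration – showing how a translator that forgets everything between sentences can't resolve a pronoun whose antecedent sits in the previous sentence:

```python
# Toy illustration of context loss in sentence-at-a-time translation.
# The target "language" genders pronouns by their antecedent noun, so
# translating "it" correctly requires remembering the previous sentence.
# All vocabulary here is invented for the example.

GENDER = {"bridge": "feminine", "car": "masculine"}
PRONOUN = {"feminine": "elle", "masculine": "il"}

def translate_sentence(sentence, antecedent=None):
    """Translate one sentence; 'it' needs the antecedent noun to pick a pronoun."""
    if "it" in sentence.split():
        gender = GENDER.get(antecedent, "masculine")  # no context -> guess a default
        return sentence.replace("it", PRONOUN[gender])
    return sentence

def translate_per_sentence(text):
    """Sentence-at-a-time pipeline: every call starts with no memory."""
    return ". ".join(translate_sentence(s.strip()) for s in text.split("."))

def translate_with_memory(text):
    """Carry the last noun seen across sentence boundaries."""
    last_noun, out = None, []
    for s in text.split("."):
        s = s.strip()
        out.append(translate_sentence(s, antecedent=last_noun))
        for word in s.split():
            if word in GENDER:
                last_noun = word
    return ". ".join(out)

text = "the bridge is old. it is beautiful"
print(translate_per_sentence(text))  # pronoun guessed wrongly: "... il is beautiful"
print(translate_with_memory(text))   # pronoun resolved: "... elle is beautiful"
```

The forgetful version defaults to a masculine pronoun because the word "bridge" was in a sentence it has already discarded – the same failure mode, in cartoon form, that Zhou describes for real NMT systems.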
You can read all about it here.
DeepMind Chair of Machine Learning position: The University of Cambridge, in England, announced it was creating a new DeepMind Chair of Machine Learning position after it was given a wad of cash by the Alphabet offshoot.
You can come up with all sorts of fancy titles if you’ve got money, it appears.
“I have many happy memories from my time as an undergraduate at Cambridge, so it’s now a real honour for DeepMind to be able to contribute back to the Department of Computer Science and Technology and support others through their studies,” said Demis Hassabis, CEO and co-founder of DeepMind.
“My hope is that the DeepMind Chair in Machine Learning will help extend Cambridge’s already world-leading teaching and research capacities, and support further scientific breakthroughs towards the development of safe and ethical AI.”
The position is expected to be filled in October 2019, and the new chair will remain at the university to conduct research. DeepMind is also supporting four Master's students from underrepresented groups studying machine learning and computer science at Cambridge University next year. ®