You better explain yourself, mister: DARPA's mission to make an accountable AI

You did WHAT? WHY!

AI control

One of the goals of the DARPA research programme is to come up with machine learning techniques that can be used to develop AI control systems for autonomous vehicles operated by the armed forces in future. In such a case, the military needs to be sure that the system is focusing on the right details, and needs to be able to swiftly investigate and correct the system if it goes wrong.

"There, we envision a soldier sending one of these things off on a mission, and it'll come back, and then it'll have to explain why it did or didn't make the decisions that it made on the mission," said David Gunning, the programme manager overseeing the XAI project. Although DARPA is funded by the US Department of Defense, theprogramme involves researchers drawn from various academic institutions who will be free to publish the results of their work so that anyone will have access to the techniques they have developed.

Researchers have been following three broad strategies. The first is deep explanation, whereby modified deep learning techniques are developed that are capable of producing an explainable model. This could be achieved by forcing the neural network to associate nodes at one level within the network with semantic attributes that humans can understand, for example.

"We've got a lot of interesting proposals there, and deep learning is a hugely active research area, both in industry and universities, so there are a lot of interesting ideas on how you might produce a more explainable model from deep learning," Gunning said.

Decisions, decisions

The second strategy is to use a different machine learning technique, such as a decision tree, that will produce an interpretable model.

"People are working on advanced versions of decision trees, like a Bayesian rule list is one of the techniques, so they can perform a fairly complex machine learning process, but what they produce is just a linear decision tree, so if you're trying to predict whether someone will have a heart attack, you might say that if he's over 50, his probability is 50 per cent, if he also has high blood pressure, it's now 60 per cent. And so you end up with a nice, organised human-readable version of the features that the system thinks are most important," explained Gunning.

The third approach is known as model induction, and is essentially a form of black-box testing. An external system runs millions of simulations, feeding the machine learning model a wide variety of inputs and attempting to infer a model that explains the system's decision logic. Of the three approaches, only this last one can be applied retrospectively. In other words, if a developer has already started building and training a machine learning system, it is probably too late to adapt it to a deep explanation model; developers would need to set out with the goal of producing an explainable AI from the beginning.
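One way to read model induction is as surrogate modelling: probe the trained system with a large number of inputs, record its answers, and fit a simpler, readable model that mimics them. The sketch below follows that reading; the gradient-boosted classifier merely stands in for whatever opaque model is being explained.

# Sketch of model induction as black-box probing: feed an opaque model
# many inputs, record its predictions, then induce a small decision tree
# that approximates its decision logic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.random((2000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)  # stands in for any opaque model

probes = rng.random((10000, 3))       # simulated inputs
answers = black_box.predict(probes)   # what the black box decides

surrogate = DecisionTreeClassifier(max_depth=3).fit(probes, answers)
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
print("fidelity:", (surrogate.predict(probes) == answers).mean())

The fidelity figure shows how faithfully the induced model reproduces the black box's behaviour; if it is low, the explanation cannot be trusted.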

A further consideration for explainable AI is how the system can convey its decision-making process to the human operator through some form of user interface. Here, the choice is complicated by the application as well as the type of machine learning technique that has been chosen.

For example, the system might be designed with a natural language generator, perhaps based around something like a recurrent neural network, to generate a narrative that describes the steps that led to its output, while a system developed for image recognition tasks may be trained to highlight areas of the image to indicate the details it was focusing on.
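The article does not specify how such highlighting would be produced. One simple, model-agnostic possibility is occlusion sensitivity, sketched below in Python: patches of the image are masked in turn, and the drop in the classifier's confidence marks the regions it relied on. The predict_confidence function here is a hypothetical stand-in for a real image classifier.

# Sketch of occlusion sensitivity: mask each patch of the image in turn
# and record how far the classifier's confidence falls. Large drops mark
# regions the prediction depends on. predict_confidence is a placeholder
# for a real model's scoring function.
import numpy as np

def predict_confidence(image: np.ndarray) -> float:
    # Placeholder "classifier" that happens to care about the image centre.
    return float(image[8:16, 8:16].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    base = predict_confidence(image)
    heat = np.zeros_like(image)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            masked = image.copy()
            masked[r:r + patch, c:c + patch] = 0.0   # occlude one patch
            heat[r:r + patch, c:c + patch] = base - predict_confidence(masked)
    return heat  # high values = areas the model was "focusing on"

heat = occlusion_map(np.random.default_rng(2).random((24, 24)))
print(np.round(heat, 2))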

However, one current issue with explainable AI is that there is a trade-off between explainability and performance: the highest-performing machine learning models, such as deep neural networks with many layers, are typically the least explainable. This means developers will have to decide how important explainability is versus performance for their particular application.

"If you're just searching for cat videos on Facebook, explainability may not be that important. But if you're going to give recommendations to a doctor or a soldier in a much more critical situation, then explainability will be more important, and we’re hoping that we will soon have better techniques for developers to use in that case, so they can produce that explanation," said Gunning.

In other words, explainable AI is possible, but whether you need it or not depends on the application. And if that application has an impact on people's lives, it may only be a matter of time before the law demands that it be accountable. ®
